Microsoft Edge Can Now Auto-Generate Image Descriptions for Improved Accessibility

Microsoft has announced that Edge can now automatically generate alternative text for images on the web.

According to a new report from The Verge, Edge will now be able to ensure that screen readers can describe the contents of images even when those images don’t include alt text. For blind and low-vision users who can’t see an image but want to know what it shows, this tool should help.

Previously, users with low or no vision typically relied on a screen reader to navigate the web. The problem is that when an image on a page lacks alternative text, the screen reader has nothing to announce and cannot give an accurate description of that image.

Microsoft acknowledges that these auto-generated labels still aren’t as good as alt text written by the page author, who is likely to know far more about an image’s specific contents than Edge’s educated guess can capture.

Still, given that “more than half of the images processed by screen readers are missing alt text,” according to the company, this solution will hopefully help bridge some of the gap between web-based imagery and total accessibility.

“Modern image-recognition technology can help make things easier. When a screen reader finds an image without a label, that image can be automatically processed by machine learning (ML) algorithms to describe the image in words and capture any text it contains.

“The algorithms are not perfect, and the quality of the descriptions will vary, but for users of screen readers, having some description for an image is often better than no context at all,” the company explains.

Users who want to try it out right now can head over to the accessibility settings in Microsoft Edge and look for an option called “Get image description from Microsoft for screen readers.”

Microsoft also encourages website owners to provide proper alt text for their images, which makes it easier for screen readers to describe what each image shows without Edge having to guess.
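For page authors who want to check their own sites, finding images that are missing alt text is straightforward. The sketch below is a minimal, hypothetical example in TypeScript, assuming it is compiled for and run in a browser context (for instance, pasted into the devtools console as plain JavaScript); it is not part of Edge or any Microsoft tooling.

```typescript
// Minimal sketch: list every <img> on the current page whose alt attribute
// is missing or empty. These are the images a screen reader cannot describe
// on its own and that Edge's new feature would try to label automatically.
function findImagesMissingAltText(): HTMLImageElement[] {
  const images = Array.from(document.querySelectorAll<HTMLImageElement>("img"));
  return images.filter((img) => {
    const alt = img.getAttribute("alt");
    // Note: alt="" is legitimate for purely decorative images; it is flagged
    // here only so the author can confirm the omission was intentional.
    return alt === null || alt.trim() === "";
  });
}

// Example usage: log each offending image's source so the author can go add
// a meaningful description to it.
findImagesMissingAltText().forEach((img) => {
  console.warn("Missing alt text:", img.src);
});
```

Running a check like this before publishing means screen readers get an author-written description, which Microsoft says will always beat a machine-generated one.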
