At the Machine Learning @Scale conference that Facebook is hosting today in New York, the company is announcing that it's enhancing the machine-generated photo descriptions that are available in screen readers like VoiceOver for iOS. Now the descriptions — which you can check out without a screen reader using your browser's Inspect Element feature on the web — will take into account the action that's captured.
For example, if a group of people are seen drumming in a photo, the caption might specify that people are playing instruments, as opposed to simply mentioning people and drums.
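These machine-generated descriptions live in the image's `alt` attribute, which is what Inspect Element reveals. As a minimal sketch, here is how you could collect that alt text from markup using Python's standard-library HTML parser — the sample markup below is hypothetical, modeled on the "Image may contain:" phrasing Facebook's feature uses:

```python
from html.parser import HTMLParser

class AltTextCollector(HTMLParser):
    """Collect the alt text of every <img> tag in an HTML fragment."""

    def __init__(self):
        super().__init__()
        self.alts = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            alt = dict(attrs).get("alt")
            if alt:
                self.alts.append(alt)

# Hypothetical markup resembling what Inspect Element might show on a photo.
sample = '<img src="photo.jpg" alt="Image may contain: 3 people, people playing musical instruments">'

collector = AltTextCollector()
collector.feed(sample)
print(collector.alts[0])
# prints "Image may contain: 3 people, people playing musical instruments"
```

This is roughly what a screen reader does with the same attribute: it reads the `alt` string aloud in place of the image.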
Specifically, there are 12 new actions that can be included in automatic alt text for photos, Facebook director of applied machine learning Joaquin Quiñonero Candela wrote in a blog post. That might not sound like a big change.
But for people who lean heavily on screen readers to gather information about what's happening — blind people, for example — the change will likely help them get a better understanding of what their Facebook friends are sharing in the News Feed.
After all, the text that people include alongside the photos they post doesn't always perfectly describe what's going on. Facebook first introduced automatic alt text last April.
It's one of several ways Facebook is drawing on artificial intelligence.