This post has been republished via RSS; it originally appeared at: Microsoft Research.
A common goal in computer vision research is to build machines that can replicate the human vision system (for example, detecting an object or scene category, describing an object or scene, or locating an object). A natural grand challenge for the artificial intelligence community is to design such technology to assist people who are blind in overcoming their real daily visual challenges.
In this webinar with Dr. Danna Gurari, Assistant Professor in the School of Information at University of Texas at Austin, and Dr. Ed Cutrell, Senior Principal Researcher in the Microsoft Research Ability Group, learn how computer vision researchers are working to create vision systems adapted to the needs of those who use them. By creating new dataset challenges, the researchers aim to empower the artificial intelligence community to work on real use cases.
To encourage the larger artificial intelligence community to collaborate on developing methods for assistive technology, we introduce the first dataset challenges with data that originates from people who are blind. Our data comes from over 11,000 people in real-world scenarios who were seeking to learn about the physical world around them. More broadly, this dataset serves as a great catalyst for uncovering hard artificial intelligence challenges that must be addressed to create more robust systems across many contexts and scenarios.
Together, we’ll explore:
- Creating tools for people who are blind or have low vision that match their needs and complement their capabilities
- Key challenges of teaching computers how to automatically describe pictures taken by people who are blind or have low vision
- Several potential solutions that help computers more accurately address the needs of people who are blind or have low vision