ORBIT Project is Recruiting Blind and Low Vision People to Build Better AI

The City-led Object Recognition for Blind Image Training (ORBIT) research project is recruiting blind and low vision users to record videos of objects that are important to them. The videos will be used to train and test artificial intelligence (AI) models for personalizing object recognition.

Microsoft AI for Accessibility is funding the Object Recognition for Blind Image Training (ORBIT) project, led by City, University of London’s Dr Simone Stumpf.

Currently, the project is recruiting blind and low vision users in the UK to record videos of things that are important to them. The collected video data will enable the team to construct a large data set from users who are blind or have low vision, which can be used to train and test AI models that personalize object recognition and, ultimately, to help build better AI for everyone.

For this purpose, the project team, comprising researchers from City, Microsoft Research and the University of Oxford, has built the ORBIT iPhone app for collecting the videos, which includes guidance on how to film the things users want to have recognized.

Collecting this video data from users who are blind or have low vision is, as Dr Stumpf notes, a “tricky process,” because it “must be simultaneously easy for blind users to record the videos and the data must be useful for machine learning.”

Pilot study

A pilot study showed that users were able to take videos in different settings in their homes, using different filming techniques. Common items they videoed included their own white canes, keys, glasses, remote controls, bags, and headphones.

What is the ORBIT project about?

Novel smartphone apps using Artificial Intelligence (AI) are very useful in making visual information accessible to people who are blind or have low vision. For instance, Seeing AI or TapTapSee let you take a picture of your surroundings and then tell you what they recognize, for example, “a person sitting on a sofa”. While these apps can recognize common objects in a scene, at the moment they can’t tell you which of the recognized things is yours, and they don’t know about objects that are particularly important to users who are blind or have low vision.

While AI techniques in computer vision have made great strides in recognizing objects, they do not yet work well for personalized object recognition. Previous research has started to make advances toward solving the problem by looking at how people who are blind or have low vision take pictures, what algorithms could be used to personalize object recognition, and the kinds of data best suited to enabling personalized object recognition. However, research is currently held back by the lack of available data, particularly from people who are blind or have low vision, for training and then evaluating AI algorithms for personalized object recognition.
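
To give a flavour of what “personalizing object recognition” can mean in practice, the short Python sketch below frames it as few-shot learning: a handful of frames of each personal object are turned into a prototype embedding, and new frames are matched to the nearest prototype. This is only an illustrative sketch; the function names, the placeholder embedding and the toy data are hypothetical and are not the ORBIT team’s method or code.

```python
# Minimal, hypothetical sketch of personalized object recognition as few-shot
# learning (not the ORBIT team's code). Each personal object gets a "prototype"
# embedding built from a handful of user-supplied example frames; new frames
# are labeled with the most similar prototype.

import numpy as np


def embed(frame: np.ndarray) -> np.ndarray:
    """Placeholder embedding: flatten and L2-normalize the frame.
    A real system would use a pretrained image encoder here."""
    v = frame.astype(np.float64).ravel()
    return v / (np.linalg.norm(v) + 1e-8)


def build_prototypes(examples: dict[str, list[np.ndarray]]) -> dict[str, np.ndarray]:
    """Average each object's example embeddings and re-normalize,
    giving one prototype vector per personal object."""
    prototypes = {}
    for name, frames in examples.items():
        mean = np.mean([embed(f) for f in frames], axis=0)
        prototypes[name] = mean / (np.linalg.norm(mean) + 1e-8)
    return prototypes


def recognize(frame: np.ndarray, prototypes: dict[str, np.ndarray]) -> str:
    """Return the personal object whose prototype is most similar
    (by cosine similarity) to the new frame."""
    q = embed(frame)
    return max(prototypes, key=lambda name: float(q @ prototypes[name]))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in data: a few 32x32 grayscale "frames" per personal object.
    examples = {
        "my white cane": [rng.random((32, 32)) for _ in range(5)],
        "my house keys": [rng.random((32, 32)) for _ in range(5)],
    }
    prototypes = build_prototypes(examples)
    print(recognize(rng.random((32, 32)), prototypes))
```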

The data set will be made publicly available for download in two phases: Phase 1 will include about 100 users in the UK and thousands of videos, while Phase 2 will gather data on a global scale from about 1,000 users and contain more than 10,000 videos.

Anyone interested in taking part in the Phase 1 study can contact Lida Theodourou at Lida.Theodourou.2@city.ac.uk

For more information on the study and how to sign up, visit http://orbit.city.ac.uk
