COOL BLIND TECH

This AI backpack system could help visually impaired people safely navigate

Intel engineers have developed a voice-activated wearable system that can track obstacles in real time and describe a person’s surroundings.

What does the visual system look like?

This new visual assistance system is made up of several components that can be worn without adding much more bulk than you might already carry when you leave the house: just a vest or fanny pack, a backpack and a pair of earphones. The team says that hiding the electronics was a key goal, so users don’t look like cyborgs walking down the street.

How does it work?

The vest or fanny pack holds a set of cameras: a 4K camera that provides colour information, and a pair of stereoscopic cameras that map depth. This visual information is then fed to the brains of the operation, stashed in the backpack.
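The article doesn’t say how the depth mapping works internally, but stereoscopic depth is generally recovered from disparity: the same point appears horizontally shifted between the two cameras, and the size of that shift is inversely proportional to distance. Here is a minimal pure-NumPy sketch of that matching idea (the synthetic scanline and the 4-pixel shift are invented for illustration, not taken from Intel’s pipeline):

```python
import numpy as np

# Hypothetical stand-ins for one image scanline from the left and right
# stereoscopic cameras: random texture, with the right view shifted to
# mimic the parallax of a scene at constant depth.
rng = np.random.default_rng(0)
left = rng.random(320)
right = np.roll(left, -4)  # right-camera view: disparity of 4 pixels

def scanline_disparity(left, right, x, window=7, max_disp=16):
    """Find the horizontal shift (disparity) that best matches a patch
    of the left scanline against the right one. Depth is proportional
    to 1 / disparity for a calibrated stereo rig."""
    patch = left[x - window: x + window + 1]
    costs = [np.abs(patch - right[x - d - window: x - d + window + 1]).sum()
             for d in range(max_disp)]
    return int(np.argmin(costs))  # best-matching shift, in pixels

print(scanline_disparity(left, right, x=160))  # → 4
```

Real systems do this densely for every pixel (and handle occlusions and textureless regions), but the core principle is this patch-matching search.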

There, a computing unit such as a laptop or Raspberry Pi runs an AI interface called OpenCV’s Artificial Intelligence Kit with Depth (OAK-D), which uses neural networks to analyze the visual data. The backpack also holds a portable battery that provides up to eight hours of use, and a USB-connected GPS unit.

The crunched visual data is then relayed via Bluetooth to a pair of earphones, letting the user know what’s around them. It can warn of obstacles of different shapes, sizes and types, and declare where they are in relation to the user, using descriptors like front, top, bottom, left, right and centre.
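The article lists the descriptors but not how they are computed. One plausible sketch is to divide the camera frame into a 3×3 grid and name the cell a detection’s centre falls into; the thresholds below, and the use of “front” for the middle band, are assumptions for illustration, not Intel’s published logic:

```python
def zone_descriptor(cx, cy, frame_w, frame_h):
    """Map a detection's centre point (pixels) to coarse spatial
    descriptors like the system's "top, centre" or "bottom, left"
    callouts. Thresholds are illustrative thirds of the frame."""
    horiz = ("left" if cx < frame_w / 3
             else "right" if cx > 2 * frame_w / 3
             else "centre")
    vert = ("top" if cy < frame_h / 3
            else "bottom" if cy > 2 * frame_h / 3
            else "front")
    return f"{vert}, {horiz}"

print(zone_descriptor(50, 400, frame_w=640, frame_h=480))  # → "bottom, left"
print(zone_descriptor(320, 60, frame_w=640, frame_h=480))  # → "top, centre"
```

With the depth map from the stereo cameras, the same detection also carries a distance, so warnings can be limited to obstacles that are actually close.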

What features are included in the system?

The system can inform someone walking down the street that they’re approaching a trash can on their “bottom, left” or a low-hanging branch with “top, centre.” Tripping hazards like curbs or stairs can be spotted as changes in elevation, and the system can even recognize key things like Stop signs or crosswalks as they approach a corner.
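Spotting a curb or stair as a “change in elevation” could, for example, reduce to scanning an estimated ground-height profile (derived from the depth cameras) for abrupt steps. This toy sketch, with invented data and an invented 10 cm threshold, shows the idea:

```python
import numpy as np

def find_elevation_hazards(heights, step_threshold=0.10):
    """Flag sample indices where the estimated ground height (metres)
    changes abruptly between consecutive samples along the walking
    direction — a curb or stair edge. Illustrative only."""
    steps = np.diff(heights)
    return np.flatnonzero(np.abs(steps) > step_threshold)

# Flat sidewalk, then a 15 cm curb drop between samples 4 and 5.
profile = np.array([0.0, 0.0, 0.01, 0.0, 0.0, -0.15, -0.15, -0.16])
print(find_elevation_hazards(profile))  # → [4]
```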

Users can also issue voice commands to ask for more information. Saying “describe” will make the system respond with a list of what’s around them and where, such as “car, 10 o’clock,” “person, 12 o’clock,” and “traffic light, 1 o’clock.”
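The clock-face callouts amount to mapping an object’s bearing relative to the user’s heading onto twelve 30° sectors. A small sketch of that conversion (the mapping rule is an assumption, not taken from Intel’s code):

```python
def clock_position(bearing_deg):
    """Convert a bearing relative to the user's heading (0° = straight
    ahead, positive = clockwise) into a clock-face descriptor like
    "10 o'clock". Each hour on the clock face spans 30 degrees."""
    hour = round((bearing_deg % 360) / 30) % 12
    return f"{12 if hour == 0 else hour} o'clock"

print(clock_position(0))    # → "12 o'clock"
print(clock_position(-60))  # → "10 o'clock"
print(clock_position(30))   # → "1 o'clock"
```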

Specific places can also be saved for future reference with commands like “save location, coffee shop.” Later, the user can say “locate coffee shop” and the system will give directions and report how far away it is.
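Reporting how far away a saved location is boils down to a great-circle distance between two GPS fixes, for which the standard haversine formula is a reasonable sketch; the coordinates and the stored-locations dictionary below are hypothetical:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes — the kind
    of computation a "locate coffee shop" reply needs to report how far
    away a saved location is. Uses the haversine formula."""
    R = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

# Hypothetical saved location roughly 1 km north of the user.
saved = {"coffee shop": (45.5090, -73.5540)}
here = (45.5000, -73.5540)
dist = haversine_m(*here, *saved["coffee shop"])
print(f"coffee shop is about {dist:.0f} m away")
```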

Is there an expected release date?

It’s still early days, but the team hopes to fast-track the system by making the project non-commercial and open source.

The system can be seen in action in the video below.

Source: Intel
