
Audiovisual Smart Glasses for the Blind That Can Read a Room

Technological advancements have created new opportunities for a Spanish start-up to enhance the quality of life for people with various disabilities.

Eyesynth has designed a pair of glasses that works as an audiovisual system for the blind and visually impaired. The device is connected to a microcomputer and records the surrounding environment in three dimensions; this is translated into intelligible audio that is sent to the person wearing the glasses.

The technology involved was developed and designed by the start-up itself. The glasses have three core features: they work in full 3D, allowing the user to identify shapes and spaces, to sense depth, and to locate objects accurately.

No words are involved in this process; the sound is completely abstract. It is a new language that the brain is able to assimilate, and according to the company it is easy to learn.

The sound is transmitted through the bones of the head, which leaves the wearer's ears free for their regular range of hearing and avoids listening fatigue.

Because the brain assimilates this information fairly quickly, a blind person can soon wear the glasses while focusing on conversations or other activities.

“The fundamental design premise was that we had to create a system which felt very natural in use. That’s why we had to rely on existing mechanisms that are available in nature. The technical principle we base it on is that of ‘synesthesia’, which means ‘crossed senses’. When we are born, absolutely everyone is 100% synesthetic. This means that we can smell sounds, taste colors, hear images and all kinds of mishmashes of senses. For practical reasons, the developing brain disconnects certain combinations and only keeps those that are most useful within our environment,” said Antonio Quesada, CEO of Eyesynth.

“The curious fact is that up to 14% of the population has some kind of mild synesthesia. Often people who have it don’t realize it, as they assume it is a natural process. In my case, I am slightly synesthetic when it comes to music and images. For me, every sound has a concrete shape in my imagination,” said Quesada. “Ever since I was a child, I have been able to remember complex sequences of music thanks to the shapes that they form in my mind. The ‘Eureka!’ moment came when I asked myself the following question: what would happen if I were to reverse this process? I mean, if I extract real geometric data from the environment and turn it into sound, could a person instinctively interpret that? The immediate answer is yes.”

“From the start, our main goal has been to focus on the issue of navigating and recognizing the environment,” explains Quesada. “We have succeeded in designing a system that does not use words, but a sound signature similar to the sounds of ocean waves that changes its shape according to what the glasses’ cameras record. As no actual words are used, there is no language barrier.”

The system analyzes a 120° field of view up to a distance of 6 meters, updating the data 60 times per second, so a great deal of information is available in real time. The smart glasses also cover areas that a cane or guide dog cannot, such as obstacles at head height rather than on the ground, e.g., awnings, traffic signs or tree branches.
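Eyesynth has not published the details of its mapping, but those figures suggest the general shape of a depth-to-sound pipeline. The sketch below is purely illustrative and is not the company's algorithm: it assumes a single horizontal row of depth readings from the glasses' cameras and hypothetically maps column position to stereo panning and distance to pitch and loudness; the tone range, sample rate and resolution are arbitrary choices for the example.

    # Illustrative sketch only: not Eyesynth's published algorithm.
    # Assumes one horizontal slice of a depth frame covering a 120-degree
    # field of view out to 6 meters, refreshed 60 times per second.
    import numpy as np

    SAMPLE_RATE = 44100      # audio sample rate in Hz (assumed)
    FRAME_RATE = 60          # depth updates per second (from the article)
    MAX_RANGE_M = 6.0        # maximum analyzed distance (from the article)

    def sonify_depth_row(depth_row):
        """Map one row of depth readings (meters) to a short stereo buffer.

        Hypothetical mapping: column position -> stereo pan (left/right),
        distance -> pitch and loudness (closer objects sound higher and louder).
        """
        n_cols = len(depth_row)
        n_samples = SAMPLE_RATE // FRAME_RATE       # samples per 1/60 s frame
        t = np.arange(n_samples) / SAMPLE_RATE
        left = np.zeros(n_samples)
        right = np.zeros(n_samples)

        for col, dist in enumerate(depth_row):
            if not (0.0 < dist <= MAX_RANGE_M):
                continue                            # nothing detected in range
            closeness = 1.0 - dist / MAX_RANGE_M    # 0 = far, 1 = very close
            freq = 200 + 1800 * closeness           # closer -> higher pitch
            pan = col / max(n_cols - 1, 1)          # 0 = far left, 1 = far right
            tone = closeness * np.sin(2 * np.pi * freq * t)
            left += (1.0 - pan) * tone
            right += pan * tone

        # Normalize so summed tones stay within [-1, 1]
        peak = max(np.max(np.abs(left)), np.max(np.abs(right)), 1.0)
        return np.stack([left, right], axis=1) / peak

    # Example: a wall 2 m away on the right, open space elsewhere
    frame = sonify_depth_row([6.0, 6.0, 6.0, 2.0, 2.0])
    print(frame.shape)   # (735, 2) -> one 1/60 s stereo buffer

In a real pipeline a new buffer like this would be generated for every depth update and streamed continuously, which is what gives the "ocean wave" signature its constantly shifting shape.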

Having developed the navigation system, the company plans to expand it with software functions such as facial recognition, text recognition and other new features that will roll out over time.

What makes Eyesynth different from similar start-ups?

“Our technology is radically different from other offers on the market. We don’t base our recognition system on spoken language but take advantage of the power of the user’s brain to interpret the environment. It’s a real-time system, so response is immediate,” said Quesada. “On the other hand, the acoustic system we use is cochlear. We transmit the sound through the skull directly to the cochlear nerve. With this, we avoid having to cover the ears with headphones or earbuds. Plus, we eliminate auditory stress during lengthy listening sessions.”

The image-to-sound algorithm is complex, and a massive quantity of data must be processed, which demands a huge amount of energy and computational power. The company had to develop its own hardware capable of performing these high-speed calculations while consuming very little power.
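The article does not say what resolution or bit depth the cameras produce, but a rough back-of-envelope estimate hints at why general-purpose low-power hardware would struggle. Assuming, purely for illustration, 640×480 depth maps with 16-bit values at the 60 updates per second mentioned above:

    # Back-of-envelope estimate of the data rate such a system must handle.
    # The resolution and bit depth below are assumptions for illustration;
    # only the 60 Hz update rate comes from the article.
    WIDTH, HEIGHT = 640, 480      # assumed depth-map resolution (pixels)
    BYTES_PER_PIXEL = 2           # assumed 16-bit depth values
    UPDATES_PER_SECOND = 60       # stated in the article

    bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL
    throughput_mb_s = bytes_per_frame * UPDATES_PER_SECOND / 1_000_000
    print(f"{bytes_per_frame:,} bytes per frame")        # 614,400 bytes
    print(f"{throughput_mb_s:.1f} MB/s of depth data")   # ~36.9 MB/s

Tens of megabytes of depth data per second, before any analysis or sonification, is a demanding load for a battery-powered wearable, which is consistent with the company's decision to design its own hardware.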

Each week, the company sets aside a day for visitors who want to test the prototypes; some travel from other continents just to try the technology.

The company is currently busy with the manufacturing process as well as arranging distribution channels. Eventually, Eyesynth wants to become the technological mobility standard for blind people and to build a solid community of users who share their experiences.

“The response we have received so far from people who have tried our smart glasses has been fantastic,” said Quesada. “We are amazed at the human being’s ability to adapt to our technology. Users acquire a level of performance and accuracy that never ceases to amaze us. This is one of the main reasons why we are moving forward in our mission to bring this technology to as many people as possible.”

