
The $1 Billion Company that’s Bringing Vision to Blind People

Retired U.S. Air Force officer Stephen Hamilton had seven failed corneal transplants and 18 surgeries before he finally accepted that he was going to lose his vision. Once he went blind, he was forced to retire from his job as a network engineer and architect.

While participating in a months-long training session at the VA in Reno, Nevada, designed to help low-vision and blind veterans learn to navigate the world, Hamilton had the chance to try out a small device that was supposed to help him read again. The wearable, called the OrCam MyEye, attached to his eyeglasses, and when he used his finger to point to a piece of printed text (a sign, the page of a book, a newspaper, a restaurant menu), it would use deep learning to parse the language and read it aloud to him via a tiny speaker.

Even though Hamilton is fully blind, he convinced the VA that he would be able to use the MyEye, which is primarily designed for people with low vision. On his first trip back to his room at the center, he stopped in front of his door (he knew where it was because he had learned to time how long it took to get there) and pointed at the sign. “Stephen Hamilton,” the device said to him.

“I started to cry because I realized it was quite possible I was going to have some independence,” Hamilton says.

Now, Hamilton doesn’t go anywhere without the device. And when the Israel-based company released a brand-new wireless version called the MyEye 2.0 in October 2017, Hamilton upgraded within the first month.

It’s safe to say that OrCam, which was founded in 2010 by computer scientist Amnon Shashua and entrepreneur Ziv Aviram and was valued at $1 billion this year, has changed Hamilton’s life. That’s primarily because of a clever, intuitive interface based on a gesture everyone understands: pointing. All users have to do is point at whatever they want the device to read; the camera identifies their hand, then takes a picture of the text and reads it aloud. It’s so precise that you can point to a specific line on a page and it will start reading from that point. “We believe that pointing at something is the most natural thing a human does,” says Aviram, who serves as the company’s CEO. “Look at kids–they’ll point at something and say, What is this?”
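The description amounts to a simple pipeline: detect the pointing finger, capture the text around it, run recognition, and start speaking from the nearest line. Here is a minimal sketch of that point-to-read flow in Python; the names (point_to_read, TextLine, the ocr and speak callables) are hypothetical stand-ins, not OrCam’s actual API.

```python
from dataclasses import dataclass


@dataclass
class TextLine:
    text: str
    y: int  # vertical pixel position of the line in the captured image


def point_to_read(frame, fingertip_y, ocr, speak):
    """Hypothetical point-to-read flow: OCR the captured frame, then
    begin speaking from the text line nearest the pointing finger."""
    lines = ocr(frame)  # -> list[TextLine], ordered top to bottom
    if not lines:
        speak("No text found.")
        return
    # Start at the line closest to where the user pointed, so pointing
    # at mid-page starts reading mid-page.
    start = min(range(len(lines)), key=lambda i: abs(lines[i].y - fingertip_y))
    for line in lines[start:]:
        speak(line.text)
```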

Using the pointing gesture as its foundation, the MyEye can read text in most situations, letting users know if a page is upside down or if there’s not enough natural light. But it has a host of other uses as well: It can remember and identify 100 faces, recognize millions of products, store 150 additional objects like credit cards or go-to grocery items, and identify colors, which is particularly useful when users are picking out clothes in the morning. The MyEye determines what to read based on where the user points, something that could be difficult for fully blind users, though Hamilton says he was able to use it after a few weeks of adjustment. There’s also an automatic mode for facial recognition that will announce anyone who comes into the user’s line of sight. The MyEye tells users the time with a simple twist of the wrist, as if the user were looking at an imaginary wristwatch; to stop it from reading, users hold out their hand in a stop gesture.
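In effect, the device behaves like a small gesture-to-action dispatcher: each recognized hand motion triggers one behavior. A minimal sketch of that idea follows; the gesture names and device methods are invented for illustration and are not OrCam’s code.

```python
import datetime
from enum import Enum, auto


class Gesture(Enum):
    POINT = auto()        # finger aimed at text
    WRIST_TWIST = auto()  # "checking a watch" motion
    STOP_PALM = auto()    # open hand held out


def handle_gesture(gesture, device):
    """Dispatch a recognized gesture to a device action (all names
    here are hypothetical)."""
    if gesture is Gesture.POINT:
        device.read_text_at_pointer()
    elif gesture is Gesture.WRIST_TWIST:
        device.speak(datetime.datetime.now().strftime("%I:%M %p"))
    elif gesture is Gesture.STOP_PALM:
        device.stop_speaking()
```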

“I imagine being blind as someone who’s disabled. But I don’t feel disabled anymore,” Hamilton says. “I feel freedom I never thought would happen for me.”

It took OrCam five years to develop the MyEye, including the gesture interface, which launched in 2015. During development, Aviram says, he and his team spoke to hundreds of potential users, and 90% of them said that they wanted reading capabilities. This surprised him, so to further understand the demand, he tried to prevent himself from reading for one day. “After one hour, I understood why it’s important,” he says. “Just five minutes and you understand how much of our world is composed of written material.”

The rest of the MyEye’s features also come from user research. Some users requested color identification, something Aviram hadn’t thought about either. One user told him, “I cannot recognize my wife from one meter away,” Aviram recalls, prompting the team to add facial recognition. Many of these features are available in other products, but those tend to be single-function. Hamilton notes a host of other devices he’d received from the VA: a clunky text reader, a device to read bar codes, and another to identify colors. “It wasn’t all that practical,” he says. “I wasn’t going to be able to lug that big thing out to the mailbox and scan through my mail.”

The first version of the MyEye also included a smartphone-sized computing pack that was connected to the camera via a cable. The MyEye 2.0 no longer requires this extra pack; the entire device is about the size of your finger and weighs only 0.8 ounces. Incredibly, all of this text-to-speech and facial and object recognition happens entirely within this tiny device without any help from the cloud. The MyEye uses deep learning algorithms that have been trained on millions of images of text and products. “It’s almost similar to a process of what a child will do when they learn new things,” Aviram says. “Expose it to a huge amount of data and sophisticated algorithms, and you can teach the device to identify different products, faces, and languages.” All of the processing is done offline in real time–a crucial component of protecting users’ privacy.
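That division of labor (heavy training in the data center, lightweight inference on the device) is what makes the privacy promise possible: only the trained model weights ship on the hardware, so nothing the camera sees has to leave it. A rough sketch of the pattern follows, assuming hypothetical model files and a load_model helper; it is not OrCam’s actual implementation.

```python
from pathlib import Path


def load_model(path: Path):
    """Stand-in for loading a pre-trained model from local storage."""
    return lambda frame: f"<{path.stem} result>"


class OnDevicePipeline:
    """Sketch of the offline pattern: models are trained elsewhere on
    millions of labeled images, but only the finished weights ship on
    the device, so no frame, face, or text ever leaves it."""

    def __init__(self, model_dir: Path):
        self.ocr = load_model(model_dir / "ocr.bin")
        self.faces = load_model(model_dir / "faces.bin")
        self.products = load_model(model_dir / "products.bin")

    def process(self, frame):
        # Inference runs locally and in real time; no network calls.
        return {
            "text": self.ocr(frame),
            "faces": self.faces(frame),
            "products": self.products(frame),
        }
```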

The device has given Hamilton and, according to the company, tens of thousands of other users a new kind of independence. “Going to a bookstore and sitting down with a book and reading it is a lot of fun,” Hamilton says. “Nobody knows I’m blind with the exception of my cane. When they do find out, they’re kind of floored, because I’m reading a regular book or a menu in a restaurant.”

The device comes with several hours of training, and it takes several weeks to get used to, particularly learning how to orient your head so that the device can read for you. Hamilton’s biggest gripe with the product right now is that it only holds a charge for an hour and a half to two hours and then requires 40 minutes to recharge. He’s gotten around that by carrying a battery pack with him wherever he goes. He also hopes that one day he won’t need to wear glasses, and that the MyEye can somehow mount behind his ear.

It’s also not without snafus–Hamilton recalls wearing the original MyEye at a restaurant when the device mistakenly identified his waitress as a man–but he says these kinds of misidentifications haven’t happened to him since the second version was released.

OrCam’s potential users aren’t just blind and low-vision people. The device could also be useful for people who are dyslexic or have other reading disabilities, or for senior citizens whose eyes tire quickly when reading. So far, OrCam says it has sold tens of thousands of devices in 23 countries, and the device operates in 18 languages; the team is currently working on East Asian languages.

Meanwhile, Aviram is looking toward the future of this type of gesture-based interface. The MyEye 2.0 includes a microphone “in preparation for a more advanced interface,” he says. He envisions integrating a voice assistant along with the gesture-based system so that users can talk to the device in a more natural way. For instance, users could hold up a document and ask the MyEye what it is. If it’s their phone bill, users could then ask the MyEye what the total charge is, rather than having to ask the device to read the entire page, or point to random places hoping to find the right piece of information. This type of combination gesture-voice interface would allow users to ask the MyEye to just read them the headlines, and then ask it to read the full article for a particular story.
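One way to picture that combined interface: recognize the document’s text once, then answer spoken queries against the recognized lines instead of reading everything aloud. The toy sketch below shows the idea with a simple keyword match; the function and the bill contents are invented for illustration.

```python
def answer_query(ocr_lines, query):
    """Toy keyword match: return only the recognized lines relevant
    to a spoken query instead of reading the whole document aloud."""
    hits = [line for line in ocr_lines if query.lower() in line.lower()]
    return hits or [f"Nothing matching '{query}' found."]


# The phone-bill scenario from the article, with made-up line items.
bill = ["Acme Mobile", "Monthly plan  $40.00", "Taxes  $3.12", "Total due  $43.12"]
print(answer_query(bill, "total"))  # -> ['Total due  $43.12']
```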

For now, the MyEye can’t give users more information about the scene in front of them, to the chagrin of some. When OrCam sales director Rhys Filmer gave a demo of the device, he said that some people wish it could describe the entire scene before them; in essence, they want it to replace their vision entirely. But the technology is not quite there yet, and it would require the MyEye to connect to a smartphone or to the cloud, which raises privacy concerns.

Still, the MyEye points to the future of gesture-based interfaces, where we use our voices and our bodies to communicate effortlessly with a smart assistant that translates and responds to the world for us. And while this kind of technology can’t help the blind see again, it can help people like Hamilton more effectively navigate a world that wasn’t designed for them.
