Google held its Google I/O 2017 conference and announced some very exciting developments, not just for companies or advanced users, but for everyone. Unlike most years, this year saw changes implemented across all platforms and devices, and a greater commitment from Google than ever before to Machine Learning. So what do all of these changes mean for us as consumers, and how can we expect to see accessibility implemented?
AI for Everyone
Since the launch of the Google Home, people have relied more and more on voice commands to handle daily tasks. Oftentimes, when I ask my assistant to do something, I’m conversational about it, and that’s exactly what Google expects. This year, they announced support for multiple user profiles based on voice alone, and the ability to customize each profile right down to your daily news sources. At I/O, Google announced that the assistant is going to get even smarter and more intuitive. Is this even possible?
In fact, it is. Previously, the assistant could only interact with you if you had either a Google Home or an Android device. Alongside I/O, Google launched the premiere of the Google Assistant app for iOS. Now you no longer have to rely on Siri for those search queries and hope it gets the right results; Google Assistant will work across all your favorite devices. This could really come in handy if you’re watching a movie on your TV and need to go out: you could just ask your assistant to pick up where you left off, and Google will play your movie from the exact spot you stopped. Here’s my favorite part about these new features: the more you teach it, the easier and more freely conversation will flow. If you’ve ever searched on Google or used a Google Home, you may have noticed that the assistant is sensitive to the context you’re searching in and the things you search for. For example, if I search for dog parks, it won’t give me parks that are too far away, because it knows I walk or take the bus.
That’s all great, but what if you’re riding the bus and can’t talk to that handy assistant? Now, noisy environments, loud crowds, or even prying ears aren’t a problem. Just type your question into the integrated text box, and the Google Assistant will respond directly to your commands. This feature is being implemented across all forms of the Assistant.
Computer Vision Meets Intelligence
Not only is the assistant getting smarter and more intuitive, but it’s also gaining another sense. In a few months, Google Assistant will be capable of interacting with your visual environment through Google Lens. This is not new technology, but an improvement on existing constructs. Think Google Goggles. Lens will provide another way for users to interact with Google, and for Google to be more integrated with daily life. The example Google gave at I/O demonstrated that Lens could be used to identify signage in a foreign language.
So how can that help someone who is blind?
There have been a number of apps that allow people with vision impairments to interact with the world around them. TapTapSee, for instance, uses crowdsourced data along with artificial intelligence to give users information about their environment or objects. KNFB Reader can scan images and read out the text. But what if all of these tasks could be handled in one place? Google Lens just might be that solution. During I/O 2017, Google announced that Lens will be part of Google Assistant and Google Photos and will be able to interface with all kinds of apps. For example, you could use Lens to scan business cards, and Google will organize them so you can refer to them later. In another demonstration, Lens was used to read information from a screenshot.

For those of us using screen readers, this could be extremely helpful. Let’s say, for instance, that my friend took a picture of a sign for a business and sent it to the group. I no longer have to be that awkward person who asks for the text. With Google Lens, it will be as simple as activating Lens from within Google Photos, where I can find information like the location, phone number, and hours. Don’t know the password to your friend’s network? No problem! Lens can extract it from the router as long as it can see the writing. And when I’m out and about, Lens with Maps can help me pinpoint details about places I’ve been or would like to go. This could be extremely helpful if I’m visiting a store with a Tango-capable device, where Lens could identify the location of products on shelves based on information provided by the store or by other users. (More on Tango later; it is one of Google’s augmented reality efforts.) The possibilities for Lens are endless! Google’s knowledge base, combined with user-provided input, makes it possible to create an amazingly intuitive cloud-based computer vision system like Google Lens. The implications for accessibility are incredible, especially for those who are blind or vision impaired.
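(For the technically curious: Lens itself has no public developer interface, but the same kind of cloud-based text recognition shown in the demos is already available today through Google’s Cloud Vision API. The short Python sketch below is only an illustration of the idea, not how Lens works internally; the file name and helper function are made up for the example.)

    # A rough sketch only, using the Google Cloud Vision API rather than Lens itself.
    # Requires: pip install google-cloud-vision, plus Google Cloud credentials
    # configured on your machine.
    from google.cloud import vision

    def read_text_from_image(path):
        """Send an image file to the Cloud Vision API and return any text it finds."""
        client = vision.ImageAnnotatorClient()
        with open(path, "rb") as f:
            image = vision.Image(content=f.read())
        response = client.text_detection(image=image)
        if response.error.message:
            raise RuntimeError(response.error.message)
        annotations = response.text_annotations
        # The first annotation holds the full block of recognized text.
        return annotations[0].description if annotations else ""

    # "sign.jpg" is a made-up file name standing in for the photo a friend sent.
    print(read_text_from_image("sign.jpg"))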
Free calling in the U.S. or Canada
Assistants are getting smarter and smarter, and end users are relying more and more on apps, gadgets, and smart devices to manage their daily lives. But what about those who aren’t quite as tech savvy? Is there a communication gap? Google doesn’t believe there needs to be. Among the many exciting developments coming later in 2017 is the implementation of Google Calling on all Assistant-capable devices. Anyone with a U.S. or Canadian phone number will be able to make free calls to any phone number in those two countries. That means my mom will be able to talk to me hands free from my Google Home without any extra work on her part: no apps to download, no services to sign up for. All she has to do is pick up the phone.
How will Google know who’s making the call?
Since Google Assistant already knows who you are by your voice, it makes sense that calling would work the same way. So, if I ask Google Home to call my mom, it’ll look through my contacts and find her. If my husband asks, it’ll look through his contacts and find his mother. No more hunting through your phone with messy hands, or trying to make calls while doing dishes without dropping that super expensive phone.
A look at Android O
Just when you thought Android was as awesome as it could be, Google made it better with the unveiling of Android O during I/O 2017. A few key improvements I noticed as an outsider are the addition of what Google calls Vitals, Play Protect, and Android Go.
What are Vitals?
Vitals allows the user to see important information right from the start. Items such as battery life, notifications, and system messages will all be displayed in one convenient area. It is hoped that TalkBack will see some integration here. In addition to displaying this vital information, Android has also made internal improvements to maximize system resources, including battery life.
What is Play Protect?
It’s often been said that Android lacks some security, especially when it comes to apps. This year, however, Google has introduced a new feature that focuses on fixing this problem. Play Protect will scan the apps on your device for problems and remove apps that have been pulled from the Play Store. On the developer side, Google has announced a tool that allows developers to scan their apps for issues before they are released; if Google finds issues, it suggests resolutions. Will this make Google Play even more secure? Who knows, but at least Google is proactively involved in making sure its users are as secure as they can be on a platform where development remains very open.
Android Go, the efficient way to be mobile
As users of assistive technology, we’re often told to get the largest phone, the most data, and the greatest amount of memory. But what if your resources are limited? Maybe you’re a high school kid with your first phone, or perhaps you live on a tight budget and your phone bill is last on the list. Google has what it believes to be the solution: Android Go, a lighter, more portable, more efficient version of Android O. Android Go is for devices where data is, as Google put it in the keynote, “a currency.” Android Go comes with Data Saver turned on by default and displays the data used any time an app draws from your mobile connection. Google will also provide a special section of the Play Store just for apps that have been optimized for Go. One app already built for it is YouTube Go, where users can see just how much data a video will use before they watch it.
Navigation for all occasions with improvements to Tango
Tango, Google’s experimental augmented reality platform, was released in 2016 and has only been available on select devices. At I/O 2017, Google announced that the selection of devices including Tango will grow, and that smaller, more portable devices may be on the way; just because a phone runs advanced software doesn’t mean it needs to take up a ton of space. In addition, Tango will include Google Lens in the near future, which will allow users to interact directly with their indoor surroundings through the Google Assistant. For those who are blind or visually impaired, Lens can obtain information about things like items on shelves, signage, or contact information for the specific locations you’re trying to map out. Tango has already been shown to work well as a tool for indoor navigation for the blind when paired with apps designed specifically for it. Google calls this capability VPS, which stands for “visual positioning system,” and it forms the basis of one of the things this AR platform can do. Indoor navigation also featured in the Tango demonstrations at I/O, so Google clearly recognizes the implications Tango has for this purpose and is expanding on it.
Organize your life with Google Photos
The theme at I/O 2017 was definitely AI First, and it extended all the way to Photos. If it isn’t strange enough that Google can interact with your daily life and learn how to converse with you, it’s even stranger that Google can intuitively organize your photos. For instance, if I wanted all the photos of my husband and me in one place, Google Photos can now automatically detect any photos that contain the two of us. Taking it one step further, Google will also introduce Shared Libraries, where I can share photos with my husband automatically and Google can suggest which ones should be shared. Shared Libraries doesn’t mean all your photos are shared, just the ones you choose.
Think that’s cool? How about Suggested Sharing?
That’s right, Google can now suggest who you should share photos with. For instance, if I’ve taken a picture of some friends, Google will know who those friends are and suggest that I share the photo with them. This can be especially helpful for someone who is blind or vision impaired, since oftentimes photos don’t get organized the way we took them or labeled properly. There’s no longer an excuse for hanging on to that photo you took at the party; Google knows it needs sharing.
Making Photobooks easier
With all of this organizing and sharing, snapping picture after picture of all kinds of life events, wouldn’t it be nice to share some of those photos with people who don’t use technology? Or perhaps you want to preserve them as memories from special occasions, life events, or family get-togethers? Google Photobooks has you covered. Since it’s all done through the app, or on the Google website, it has the potential to be one of the most accessible ways of making an album. No more relying on sighted employees to help you find those lost treasures in your photo album at Walgreens. One thing you may still want assistance with, however, is the finished product. Before you order that Photobook, check that Google has given you the correct information about your photos, and that you actually do have the picture of your dog and not the neighbor’s dog.
Final Thoughts
For the past couple of years, Google has been focused on making AI the center of our lives. This “AI First” approach has spawned a ton of innovation across all platforms. From the inclusion of Google Lens in Photos, Maps, and Google Assistant, to the development of neural networks capable of identifying things like voice profiles and types of cancer, Google is focused, it would seem, on bringing Machine Learning to all. The possibilities are endless, even for those who have vision impairments. With AI, it could soon be possible to navigate the world more independently, obtain information more easily, and share in visual concepts that were previously inaccessible. I hope that, in the coming months, we will see all of these innovations in full and be able to take advantage of them accessibly.
We would love to hear from you, our listeners. If you watched Google I/O 2017, or plan to, let us know what you think about the latest innovations from Google. How do you see these things helping you throughout your day?