People who can’t see well, or whose physical or mental conditions prevent them from driving safely, often rely on others, or on local government or nonprofit agencies, to help them get around.
Autonomous vehicle technology on its own is not enough to help these people become more independent, but simultaneous advances in machine learning and artificial intelligence can enable these vehicles to understand spoken instructions, observe their surroundings and communicate with people. Together, these technologies can provide independent mobility with practical assistance that is specialized for each user’s abilities and needs.
A lot of the necessary technology already exists, at least in preliminary forms. Google has had a blind person test its autonomous vehicles. And Microsoft recently released an app called Seeing AI that helps visually impaired people better sense and understand the world around them. The app uses machine learning, natural language processing and computer vision to understand the world and describe it in words to the user.
Texas A&M University and the Texas A&M Transportation Institute are developing protocols and algorithms that let people, with and without disabilities, communicate with autonomous vehicles in spoken words, through sounds and on electronic displays. Their self-driving shuttle has given rides to 124 people over a total of 60 miles of travel, and those trials suggest that this type of service would be more helpful than current transportation options for people with disabilities.
Under the Americans with Disabilities Act of 1990, all public transit agencies must offer transportation services to people whose physical, visual or mental conditions or injuries prevent them from driving on their own. In most communities, this type of transport, typically called “paratransit,” is sort of like an extra-helpful taxi service run by public transit. Riders make reservations in advance for rides to, say, grocery stores and medical appointments. The vehicles are usually wheelchair-accessible and are driven by trained operators who can help riders board, find seats and get off at the right stop.
Like taxis, paratransit can be costly. A Government Accountability Office report from 2012 provides the only reliable nationwide estimates. Those numbers suggest that per trip, paratransit costs three to four times what mass transit costs. And those costs are increasing, as is the number of people who need paratransit. At the same time, federal, state and local funding for transit authorities has stagnated.
In an attempt to meet some of the demand, many communities have reduced the geographic areas where paratransit is available and asked disabled people to use mass transit when possible. Other places have experimented with on-demand ride-hailing services like Uber and Lyft. But in many cases the drivers are not trained to help people with disabilities, and the vehicles are usually not wheelchair-accessible or otherwise suited to certain riders.
“Autonomous shuttles, like the one we’re testing on the Texas A&M campus, can be a solution for these problems of access and funding. We envision a fully integrated system in which users can connect to the dispatching system and create profiles that include information on their disabilities and communication preferences, as well as frequent destinations like a home address or a doctor’s office,” said Srikanth Saripalli, an associate professor of mechanical engineering at Texas A&M University.
When a rider requests a shuttle, the system would dispatch a vehicle that has the particular equipment the rider needs, such as a wheelchair ramp or extra room for a service dog to travel along.
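To make that concrete, here is a minimal sketch of how such profile-based dispatching could work. Everything in it, the profile fields, the equipment labels and the matching rule, is a hypothetical illustration, not the actual Texas A&M system.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class RiderProfile:
    """Hypothetical rider profile stored with the dispatching system."""
    name: str
    needs: set[str] = field(default_factory=set)   # e.g. {"wheelchair_ramp"}
    comm_preference: str = "speech"                # "speech" or "text"

@dataclass
class Shuttle:
    vehicle_id: str
    equipment: set[str]                            # features this vehicle offers

def dispatch(rider: RiderProfile, fleet: list[Shuttle]) -> Shuttle | None:
    """Return the first shuttle whose equipment covers all of the rider's needs."""
    for shuttle in fleet:
        if rider.needs <= shuttle.equipment:       # set containment: every need met
            return shuttle
    return None                                    # no suitable vehicle available

fleet = [
    Shuttle("AV-1", {"standard_seating"}),
    Shuttle("AV-2", {"standard_seating", "wheelchair_ramp", "service_dog_space"}),
]
rider = RiderProfile("J. Doe", needs={"wheelchair_ramp", "service_dog_space"})
print(dispatch(rider, fleet).vehicle_id)           # -> AV-2
```

Representing needs and equipment as sets reduces the matching rule to a single containment check, which keeps the dispatcher's behavior easy to audit.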
When the shuttle arrives to pick up the rider, it could scan the surrounding area with lasers, cameras and radar to create a 3-D map, merging those data with traffic and geographic information from online sources like Google Maps and Waze. Based on all of those data, it would determine an appropriate boarding spot, identifying curb cuts that let wheelchairs and walkers pass easily and noting potential obstacles, like trash cans out for collection. The vehicle could even send a message to the rider’s smartphone to indicate where it’s waiting, and use facial recognition to identify the correct rider before allowing the person to ride.
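One simple way to frame that boarding-spot decision is as scoring candidate curb locations that the perception system has already extracted from the merged sensor and map data. The sketch below assumes hypothetical candidate attributes (distance, curb cut, obstruction); it illustrates the idea rather than the shuttle's actual planner.

```python
from dataclasses import dataclass

@dataclass
class CurbCandidate:
    """A possible boarding spot derived from sensor and map data (hypothetical)."""
    distance_m: float      # walking distance from the rider's door
    has_curb_cut: bool     # passable for wheelchairs and walkers
    obstructed: bool       # e.g. trash cans out for collection

def score(spot: CurbCandidate, rider_uses_wheelchair: bool) -> float:
    """Higher is better; accessibility requirements dominate distance."""
    if spot.obstructed:
        return float("-inf")                  # never stop at a blocked spot
    if rider_uses_wheelchair and not spot.has_curb_cut:
        return float("-inf")                  # unusable for this rider
    return -spot.distance_m                   # otherwise prefer the closest spot

def pick_boarding_spot(candidates: list, rider_uses_wheelchair: bool) -> CurbCandidate:
    return max(candidates, key=lambda s: score(s, rider_uses_wheelchair))

candidates = [
    CurbCandidate(3.0, has_curb_cut=False, obstructed=False),
    CurbCandidate(8.0, has_curb_cut=True, obstructed=False),
    CurbCandidate(2.0, has_curb_cut=True, obstructed=True),
]
# For a wheelchair user, the slightly farther spot with a curb cut wins.
print(pick_boarding_spot(candidates, rider_uses_wheelchair=True))
```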
During boarding, throughout the ride and when the rider reaches the destination, the vehicle could communicate any relevant information, such as the estimated arrival time or details about detours, by speaking with the rider and listening to the responses, or by displaying text on a screen and accepting typed input. That would allow the rider and the shuttle to interact no matter what the passenger’s abilities or limitations might be.
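In code, that kind of ability-aware interaction can reduce to routing each status update through whichever channel the rider's profile indicates. The function and channel names below are invented for illustration; a real vehicle would call a text-to-speech engine and an on-board display rather than printing.

```python
def speak(message: str) -> None:
    # Placeholder for a text-to-speech call to the shuttle's speakers.
    print(f"[spoken] {message}")

def show_on_display(message: str) -> None:
    # Placeholder for the shuttle's on-board screen.
    print(f"[screen] {message}")

def deliver_update(message: str, comm_preference: str) -> None:
    """Send a status update through whichever channel suits the rider."""
    if comm_preference == "speech":
        speak(message)              # for riders who can't read a screen
    else:
        show_on_display(message)    # for riders who can't hear spoken output

deliver_update("Estimated arrival: 12 minutes. A detour adds 3 minutes.", "speech")
deliver_update("Estimated arrival: 12 minutes. A detour adds 3 minutes.", "text")
```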
Srikanth Saripalli says, “In our lab we are exploring various elements of rider-assistance systems, including automated wheelchair ramps and improved seating arrangements for multiple wheelchair-using passengers. We are also studying elements that affect safety, as well as riders’ trust in the vehicles. For example, we are currently developing machine-learning algorithms that behave like good human drivers do, mimicking how humans respond to unforeseen circumstances.”
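Learning to behave like good human drivers is commonly framed as behavioral cloning: fitting a supervised model to recorded state-action pairs from human driving. The miniature sketch below shows that framing with scikit-learn; the features, the tiny dataset and the model choice are assumptions for illustration, not the lab's actual algorithms.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical driving log: each row is a sensed state,
# [speed_mps, gap_to_lead_vehicle_m, pedestrian_nearby (0/1)]
states = np.array([
    [10.0, 40.0, 0],
    [10.0, 15.0, 0],
    [ 8.0, 30.0, 1],
    [12.0, 50.0, 0],
    [ 9.0, 10.0, 1],
])
# What the human driver actually did in each state (acceleration in m/s^2)
actions = np.array([0.5, -1.0, -0.8, 0.3, -2.0])

# Behavioral cloning: fit a supervised model mapping states to driver actions
model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(states, actions)

# The learned policy imitates the human response to a new, similar situation
new_state = np.array([[11.0, 12.0, 1]])  # closing fast with a pedestrian nearby
print(model.predict(new_state))           # expect a braking (negative) action
```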
Self-driving cars present fundamentally new ways to think about transportation and accessibility. They have the potential to change neighborhoods and individuals’ lives. With proper planning and research, autonomous vehicles can provide even more people with significantly more independence in their lives.