
How Alexa, Siri, and Google Assistant Can Be Listening Without You Knowing It


Voice assistants connected to smartphones and other devices have been hijacked using sounds above the range of human hearing. Once in, hackers have been able to make phone calls, post on social media and disconnect wireless services, among other things.

Assistants falling for the ploy include Amazon’s Alexa, Apple’s Siri, Google Now, Samsung S Voice, Microsoft Cortana and Huawei HiVoice, as well as some voice-control systems used in cars.

The hack was developed by Guoming Zhang, Chen Yan and colleagues at Zhejiang University in China. Using ultrasound, they showed that an inaudible command can wake the assistant, giving an attacker control of it as well as access to any connected systems.

“If all a voice assistant could do was set an alarm, play some music or tell jokes, then there wouldn’t be much of a security issue,” says Tavish Vaidya, a PhD student and researcher at Georgetown University. But voice assistants are connected to an increasing number of services, including smart thermostats and Internet banking, so security breaches can be serious.

The attack works by converting the usual wake-up commands, such as "Okay, Google" or "Hey, Siri", into high-pitched, ultrasonic analogues. When a voice assistant hears these sounds, it recognizes them as legitimate commands, even though they are imperceptible to the human ear.
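
To make the mechanism concrete, here is a minimal Python sketch of one way such an inaudible command could be produced, assuming the attack amplitude-modulates a recorded wake phrase onto an ultrasonic carrier that an ordinary microphone's own nonlinearity demodulates back into the audible range. The file names, carrier frequency and sample rate are illustrative assumptions, not details taken from the research.

    # Minimal sketch of one way an inaudible command could be produced:
    # amplitude-modulate a recorded wake phrase onto an ultrasonic carrier.
    # Nothing audible is played, but the nonlinearity of an ordinary
    # microphone can recover an audible-band copy that the recognizer accepts.
    # File names, carrier frequency and sample rate are illustrative.
    import numpy as np
    from scipy.io import wavfile

    CARRIER_HZ = 25_000      # above the limit of human hearing (~20 kHz)
    OUTPUT_RATE = 192_000    # high sample rate needed to represent the carrier

    # A normal recording of the wake phrase, e.g. "Okay, Google".
    rate, voice = wavfile.read("wake_command.wav")
    voice = voice.astype(np.float64)
    if voice.ndim > 1:                       # collapse stereo to mono
        voice = voice.mean(axis=1)
    voice /= np.max(np.abs(voice))           # normalize to [-1, 1]

    # Resample the voice onto the high-rate time axis.
    t = np.arange(int(len(voice) / rate * OUTPUT_RATE)) / OUTPUT_RATE
    voice_hi = np.interp(t, np.arange(len(voice)) / rate, voice)

    # Amplitude modulation: carrier * (1 + m * voice). The microphone's
    # nonlinear response later produces a low-frequency copy of the voice.
    m = 0.8                                  # modulation depth
    carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
    modulated = carrier * (1.0 + m * voice_hi)
    modulated /= np.max(np.abs(modulated))

    wavfile.write("ultrasonic_command.wav", OUTPUT_RATE,
                  (modulated * 32767).astype(np.int16))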

The Zhejiang team was able to make a target device open a website that downloads malware, and to start a video or voice call to spy on the device's surroundings. The researchers were also able to send text messages and publish posts online.

The attacker would need to be near the target device to hack it, but it may be possible to play the commands via a hidden speaker as the hacker walks by.

Not all devices were easily hacked. Taking control of Siri, for example, required an extra step. The owner’s voice had to be surreptitiously recorded for playback because Apple’s system recognizes the speaker. “There are a lot of variables for the attack to succeed outside of a controlled environment,” Vaidya says.

To secure voice assistants, sounds outside the range of the human voice might be suppressed, or machine learning algorithms might be developed to blunt attacks, Vaidya says. We should focus on protecting against unauthorized commands rather than limiting what assistants can do, he adds.
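
As a rough sketch of that first suggestion, the Python snippet below low-pass filters incoming audio to the human voice band before recognition and flags recordings whose energy sits mostly above it; the cutoff frequency and threshold are illustrative assumptions, not values from the research.

    # Sketch of the suggested mitigation: low-pass filter incoming audio to
    # the human voice band before recognition, and flag recordings whose
    # energy is concentrated above it. Cutoff and threshold are illustrative.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import butter, sosfilt

    VOICE_CUTOFF_HZ = 8_000   # normal speech carries little energy above this

    def suppress_and_check(path, ratio_threshold=0.5):
        rate, audio = wavfile.read(path)
        audio = audio.astype(np.float64)
        if audio.ndim > 1:                    # collapse stereo to mono
            audio = audio.mean(axis=1)

        # Fraction of signal energy above the voice band, via the FFT.
        spectrum = np.abs(np.fft.rfft(audio)) ** 2
        freqs = np.fft.rfftfreq(len(audio), d=1.0 / rate)
        high_ratio = spectrum[freqs > VOICE_CUTOFF_HZ].sum() / spectrum.sum()
        suspicious = high_ratio > ratio_threshold

        # Remove everything above the voice band before recognition.
        cutoff = min(VOICE_CUTOFF_HZ, 0.45 * rate)   # stay below Nyquist
        sos = butter(8, cutoff, btype="low", fs=rate, output="sos")
        return sosfilt(sos, audio), suspicious

    # Example: filtered, flagged = suppress_and_check("incoming_audio.wav")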

