Laser beams can trick voice-controlled virtual assistants like Siri, Alexa, or Google Assistant into acting as if they had registered an audio command, researchers report.
This trick worked at a distance of more than 300 feet and through a glass window.
The researchers discovered a vulnerability in these systems’ microphones that they call “Light Commands.” They also propose hardware and software fixes, and they’re working with Google, Apple, and Amazon to put them in place.
“We’ve shown that hijacking voice assistants only requires line-of-sight rather than being near the device,” says Daniel Genkin, assistant professor of computer science and engineering at the University of Michigan. “The risks associated with these attacks range from benign to frightening depending on how much a user has tied to their assistant.
“In the worst cases, this could mean dangerous access to homes, e-commerce accounts, credit cards, and even any connected medical devices the user has linked to their assistant.”
The team showed that Light Commands could enable an attacker to remotely inject inaudible and invisible commands into smart speakers, tablets, and phones in order to:
- Unlock a smart lock-protected front door
- Open a connected garage door
- Shop on e-commerce websites at the target’s expense
- Locate, unlock, and start a vehicle that’s connected to a target’s account
Just five milliwatts of laser power—the output of an ordinary laser pointer—was enough to obtain full control over many popular Alexa and Google smart home devices, while about 60 milliwatts was sufficient for phones and tablets.
To document the vulnerability, the researchers aimed and focused their light commands with a telescope, a telephoto lens, and a tripod. They tested 17 different devices representing a range of the most popular assistants.
“There is a semantic gap between what the sensors in these devices are advertised to do and what they actually sense, leading to security risks,” says Kevin Fu, an associate professor of computer science and engineering. “In Light Commands, we show how a microphone can unwittingly listen to light as if it were sound.”
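The core idea behind the attack is amplitude modulation: the laser’s intensity is varied in step with an audio waveform, and the microphone’s diaphragm responds to that fluctuating light as if it were sound pressure. The following is a minimal sketch of that modulation principle only, not the researchers’ actual signal chain; the function name and the `bias`/`depth` parameters are illustrative assumptions.

```python
import numpy as np

def audio_to_laser_drive(audio, bias=0.5, depth=0.4):
    """Sketch of amplitude-modulating a laser's drive level with audio.

    audio: 1-D array of audio samples (any scale; normalized internally).
    bias:  constant laser drive level (arbitrary units), illustrative.
    depth: modulation depth (0..1); how strongly audio sways the intensity.

    Returns a nonnegative drive signal: bias * (1 + depth * audio_norm).
    A microphone illuminated by light modulated this way can pick the
    audio back up, which is the vulnerability Light Commands exploits.
    """
    audio = np.asarray(audio, dtype=float)
    audio = audio / np.max(np.abs(audio))  # normalize to [-1, 1]
    return bias * (1.0 + depth * audio)

# Example: encode a 440 Hz tone (standing in for a spoken command)
sr = 44_100                                # sample rate in Hz
t = np.arange(int(sr * 0.01)) / sr         # 10 ms of samples
drive = audio_to_laser_drive(np.sin(2 * np.pi * 440 * t))
```

With `bias=0.5` and `depth=0.4`, the drive level stays within roughly 0.3 to 0.7, so the laser never turns fully off while still carrying the audio signal in its intensity envelope.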
Users can take some measures to protect themselves from Light Commands.
“One suggestion is to simply avoid putting smart speakers near windows, or otherwise attacker-visible places,” says Sara Rampazzi, a postdoctoral researcher in computer science and engineering. “While this is not always possible, it will certainly make the attacker’s window of opportunity smaller. Another option is to turn on user personalization, which will require the attacker to match some features of the owner’s voice in order to successfully inject the command.”
Additional researchers from the University of Electro-Communications in Tokyo and the University of Michigan contributed to the work.
Source: University of Michigan