What if smart devices could be hacked with just a voice?

Smartphones and wearable devices have ushered in a new era of human-computer interaction. Where the PC required a keyboard and mouse, touch-based devices and wearables have removed the need for peripherals: we can now interact with them using nothing more than our hands, or even our voices.

This has prompted the arrival of the voice-activated “personal assistant”. Triggered by nothing more than our voices, these assistants promise to help us with basic tasks hands-free. Both Apple and Google have added voice recognition technologies to their smart devices; Siri and Google Now are personal assistants for our modern life.

Both Siri and Google Now can record our voice, translate it into text and execute commands on our device – from making calls to sending texts and emails, and much more.

However, these voice recognition technologies – that are so necessary on smart devices – are perhaps not as secure as we give them credit for. After all, they are not configured to our individual voices. Anyone can ask your Google Now to make a call or send a text message and it will dutifully oblige – even if it’s not your voice asking.

What if your device is vulnerable to voice commands from someone else? What if it could call a premium number, send a text message abroad, or write an email from your account without your knowledge? Over-the-air attacks on voice recognition technologies are real, and they are not limited to smartphones. Voice activation technologies are also coming to smart connected devices in the home, like your smart TV.

As I demonstrate in this short video, the smart devices in my home do respond to my voice – but they also respond to ANY voice command, even one synthesized by another device in my home.

The convenience of controlling the temperature of your home, unlocking the front door and making purchases online, all via voice command, is an exciting and very real prospect. However, we need to make progress on authenticating the source of the voice. For example, will children be able to access inappropriate content if devices can’t tell whether it is a child speaking or a parent?
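To make the parental-control concern concrete, here is a minimal sketch of how an assistant might gate sensitive commands on a verified speaker role. The command names and policy are hypothetical, not a real assistant API, and the sketch assumes speaker identification has already taken place.

```python
# Hypothetical policy check: restricted commands require a verified adult speaker.
# Command names and roles are illustrative assumptions, not a real assistant API.

RESTRICTED_COMMANDS = {"make_purchase", "unlock_door", "play_mature_content"}

def is_command_allowed(command: str, speaker_role: str) -> bool:
    """Allow restricted commands only when the verified speaker is an adult."""
    if command in RESTRICTED_COMMANDS:
        return speaker_role == "adult"
    return True

print(is_command_allowed("set_temperature", "child"))  # True
print(is_command_allowed("make_purchase", "child"))    # False
```

The point is not the specific policy but that the decision depends on *who* is speaking – something today's assistants largely cannot determine.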

Being able to issue commands to my television might not be the most dangerous thing in the world, but new smart devices, connected to the Internet of Things, are being introduced every day. Changing the station on my television may be harmless, but being able to issue commands to connected home security systems, smart home assistants, vehicles and connected work spaces is not far away.

Utilizing voice activation technology in the Internet of Things without authenticating the source of the voice is like leaving your computer without a password – anyone can use it and issue commands.
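What authenticating the voice source could look like: a minimal speaker-verification sketch that compares an incoming voice embedding against the enrolled owner's voiceprint using cosine similarity. The vectors, threshold and function names are illustrative assumptions – real systems derive embeddings from audio with trained models.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_trusted_speaker(sample_embedding, enrolled_embedding, threshold=0.8):
    """Accept a command only if the voice matches the enrolled owner closely enough."""
    return cosine_similarity(sample_embedding, enrolled_embedding) >= threshold

owner = [0.9, 0.1, 0.4]     # enrolled owner's voiceprint (dummy values)
attacker = [0.1, 0.9, 0.2]  # synthesized or unknown voice (dummy values)

print(is_trusted_speaker(owner, owner))     # True
print(is_trusted_speaker(attacker, owner))  # False
```

Even a simple gate like this would stop the "any voice works" problem shown in the video; the hard part in practice is producing embeddings robust to noise, replay and synthesis.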

There is no question that voice activation technology is exciting, but it also needs to be secure. That means making sure that commands come from a trusted source. Otherwise, even a voice played from a speaker or an outside source can trigger unauthorized actions by a device that is simply designed to help.

An Emerging Threat

While we haven’t yet discovered any malware samples exploiting this weakness in the wild, it is certainly an area of concern that device manufacturers and operating system developers should take into account when building for the future. As is so often the case with technology, convenience can come at a cost to privacy or security, and voice activation is no different.
