
Voice Your Concerns

Suggesting that anyone from the killer in Scream, to Bane, to a kid with a voice-distortion box has the potential to mess with our devices, an expert has found that not only will some voice-activated tech respond to a range of human voices – not just your own – but it will do the bidding of synthesised voices just as readily as real ones. So, whilst voice-activation tech is tipped to be a big thing for the future, how do we stop it accidentally giving a voice to tech tricksters?


Yuval Ben-Itzhak, chief technology officer at anti-virus firm AVG, managed to turn on and control a (not so) smart TV using a synthesised voice. The command worked, he said, because the gadget didn't do any checks on who – or what – it was talking to; and he found that voice-activated functions on Apple and Android devices responded in just the same way.

Ben-Itzhak said: “Utilising voice activation technology in the Internet of Things without authenticating the source of the voice is like leaving your computer without a password, everyone can use it and send commands.” And you know what happens if you leave your computer unlocked: best case, some joker defames your good name on Facebook; worst case, you get rinsed of all your money. The same rules apply here!
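To make that point concrete, here's a minimal sketch (in Python, with entirely hypothetical names and values – real devices would get embeddings from a speaker-recognition model) of the kind of check Ben-Itzhak says is missing: compare the speaker's voiceprint against the enrolled owner's before obeying a command, rather than executing whatever any voice says.

```python
import math

SIMILARITY_THRESHOLD = 0.85  # hypothetical tuning value


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def execute_voice_command(command, speaker_embedding, enrolled_embedding):
    """Only act on a command if the speaker matches the enrolled owner.

    In a real device the embeddings would come from a speaker-recognition
    model; here they are just plain lists of floats for illustration.
    """
    if cosine_similarity(speaker_embedding, enrolled_embedding) < SIMILARITY_THRESHOLD:
        return "Rejected: unrecognised voice"
    return f"Executing: {command}"


# Toy usage: a synthesised voice would produce a different embedding
owner = [0.9, 0.1, 0.4]
attacker = [0.1, 0.8, 0.2]
print(execute_voice_command("turn on TV", owner, owner))     # accepted
print(execute_voice_command("turn on TV", attacker, owner))  # rejected
```

The gadgets in the research effectively skip the `if` check entirely – any voice, human or fake, goes straight to "Executing".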

Even worse – our lives are becoming a network of devices that are all connected and that we’re increasingly reliant on; so if someone was able to control one of those things, they could potentially, say, persuade our TVs to buy a hundred copies of the One Direction movie, or get our fridge to order a million pints of milk. And with each new version of a device, and each new update, there’s more and more potential for havoc.

On the other hand, you'd arguably have to be quite close to the device to launch this kind of attack, and – if you're an Apple lover – at the moment your iPhone needs to be plugged into mains power for Siri to be woken by voice alone. If devices were only activated by a certain person's voice – much like fingerprint tech – that would help; and asking someone to repeat a phrase they'd prerecorded, then checking whether it matches (see the sketch below), is another safeguard that's starting to appear.
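That challenge-phrase safeguard could look something like this sketch (again with hypothetical names; a real system would compare the audio and the voiceprint, not just the transcribed words): the device picks a random phrase the owner enrolled earlier and only proceeds if the response matches.

```python
import random

# Phrases the owner recorded during setup (hypothetical enrolment data).
ENROLLED_PHRASES = ["purple monkey dishwasher", "the fridge stays locked"]


def challenge_response_check(listen):
    """Pick a random enrolled phrase and ask the speaker to repeat it.

    `listen(prompt)` stands in for the device's prompt-and-transcribe
    step; a real system would also match voiceprints, not only the words.
    """
    challenge = random.choice(ENROLLED_PHRASES)
    response = listen(f'Please repeat: "{challenge}"')
    return response.strip().lower() == challenge


# Toy usage: the owner repeats whatever is prompted; an attacker guesses.
def owner(prompt):
    return prompt.split(": ")[1].strip('"')


def attacker(prompt):
    return "open sesame"


print(challenge_response_check(owner))     # True
print(challenge_response_check(attacker))  # False
```

Because the phrase is chosen at random each time, a simple replayed recording of one command is much less likely to get through.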

Ben-Itzhak says he did the research to show how the technology could be dangerous, rather than because he'd seen any examples of it in the wild – but clearly this is something we need to sort out now. If the past few months of attacks have taught us anything, it's that all new tech needs to be built with security in mind, as hackers are getting ever more creative about how they get at our sensitive information.

So, whilst this doesn't seem to be something to full-on panic about just yet, we should definitely make our security fears heard; hopefully then we can ensure that the voice tech of the future is speaking for – rather than against – us.

If you’ve got any queries about the security solutions at UKFast, take a look at our website or contact us on 0208 045 4945.
