Reply to post:

When you play this song backwards, you can hear Satan. Play it forwards, and it hijacks Siri, Alexa

EveryTime Silver badge

We need a checkbox response for "security hacks".

This one would read: "You construct your own (flawed) system that you claim is similar to the original, hack that, and never prove the hack works on the readily available original."

Neural networks are inherently untrustworthy. It's trivial to train a bad one that superficially appears to work, and there are plenty of stories of NNs later found to be deeply flawed. One was a tank/APC image-recognition system that was dramatically 'better' than humans, spotting tanks that had been expertly concealed. It turned out it was classifying road ruts as positives, not armored vehicles.
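
To make that failure mode concrete, here's a minimal toy sketch (plain Python with NumPy and scikit-learn, nothing to do with the actual tank project -- the feature names are invented) of a classifier that scores near-perfectly by latching onto a spurious cue that happens to track the label in training, then collapses once that cue is absent:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000

    # Weak "real" signal (the vehicle itself) and a label derived from it.
    signal = rng.normal(size=n)
    label = (signal + rng.normal(scale=2.0, size=n) > 0).astype(int)

    # Spurious cue that happens to track the label in the training photos
    # (the "road ruts"). It carries no real information about vehicles.
    spurious = label + rng.normal(scale=0.1, size=n)

    X_train = np.column_stack([signal, spurious])
    clf = LogisticRegression().fit(X_train, label)
    print("training accuracy:", clf.score(X_train, label))    # close to 1.0

    # Fresh data where the spurious cue no longer tracks the label.
    signal_new = rng.normal(size=n)
    label_new = (signal_new + rng.normal(scale=2.0, size=n) > 0).astype(int)
    X_new = np.column_stack([signal_new, rng.normal(scale=0.1, size=n)])
    print("held-out accuracy:", clf.score(X_new, label_new))  # drops sharply

On the training set it looks superb; on data where the ruts are missing it's little better than a coin flip.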

There are now tools that help visualize intermediate node responses on specific types of TensorFlow networks. But those cover only a tiny fraction of deployed systems, you need to be an expert to understand what you are seeing, and they only work for images. They're actually more directly useful for showing that a system is flawed than for improving it (although one can lead to the other). And note that they require access to the intermediate nodes -- access an end user simply doesn't have when the processing happens in the cloud, as with Alexa and the like.
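
For anyone who wants to see what that access looks like, here's a small sketch (toy convnet, invented layer names, random stand-in input -- not any particular tool) of pulling intermediate feature maps out of a Keras/TensorFlow model, which is only possible when you actually hold the model object:

    import numpy as np
    import tensorflow as tf

    # Toy convnet standing in for an image classifier; layer names are made up.
    inputs = tf.keras.Input(shape=(32, 32, 3))
    x = tf.keras.layers.Conv2D(8, 3, activation="relu", name="conv1")(inputs)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Conv2D(16, 3, activation="relu", name="conv2")(x)
    x = tf.keras.layers.Flatten()(x)
    outputs = tf.keras.layers.Dense(2, activation="softmax", name="output")(x)
    model = tf.keras.Model(inputs, outputs)

    # A second model exposing the intermediate feature maps -- exactly the
    # access a cloud service like Alexa never grants its end users.
    probe = tf.keras.Model(
        inputs,
        [model.get_layer("conv1").output, model.get_layer("conv2").output],
    )

    image = np.random.rand(1, 32, 32, 3).astype("float32")  # stand-in input
    conv1_maps, conv2_maps = probe.predict(image)
    print(conv1_maps.shape, conv2_maps.shape)  # (1, 30, 30, 8) (1, 13, 13, 16)

Those feature maps are what the visualization tools render; without the model in hand there is nothing to render.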
