You're in a desert and you're walking along in the sand and all of a sudden you look down and...
It's possible to reverse-engineer AI chatbots to spout nonsense, smut or sensitive information
Machine-learning chatbot systems can be exploited to control what they say, according to boffins from Michigan State University and TAL AI Lab. "There exists a dark side of these models – due to the vulnerability of neural networks, a neural dialogue model can be manipulated by users to say what they want, which brings in …
COMMENTS
-
-
Saturday 21st September 2019 07:33 GMT amanfromMars 1
"That's one small step for [a]man, one giant quantum communications leap for mankind."
Take a stock sentiment analyzer. Combine that with a story generator, a GAN, and you can create lots of negative or positive stories about a stock, for other sentiment analyzers to read. .... KCIN
Howdy, KCIN,
Take a non-stock sentiment analyser and combine that with a story generator, a GAN, and you can create lots of negative or positive stories about a stock, for other sentient analyzers to read and further process.
Some would advise you that is the/a Present New Fangled and Entangled Universal Battle Space for Capture and Captivation of Hearts and Minds ..... Human Perception.
Think AWE20 on Steroids ....... Advanced Warfighter Experimentation.
-
-
Friday 20th September 2019 13:31 GMT Charlie Clark
Shock, horror: unsupervised chatbot can be subverted
At least, I think that's what the article said. What it actually seems to be saying is that if you can guess the model a particular bot is using, you can trick it into saying things it shouldn't.
Fortunately, there aren't many unsupervised chatbots out there doing anything. This is one of the reasons why Google, Amazon, et al. have been found out listening in to what people tell their "frozen" bots so that they can improve them, but basically they're just a front-end to existing systems.
I think domain-specific chatbots are a vast improvement on the rules/script based approaches to first level support, but the key is keeping them dumb enough to do the task in hand and at least one API away from sensitive information: what they can't access, they can't divulge.
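The "one API away" idea can be sketched in a few lines. All names here (`SupportAPI`, `DumbChatbot`, `order_status`) are invented for illustration; the point is simply that the bot maps intents to opaque backend calls and holds no customer data of its own.

```python
# Hypothetical sketch of the "one API away" design: the bot knows
# intents, not data, so there is nothing sensitive for it to divulge.

class SupportAPI:
    """Stand-in for the real backend; it enforces its own access control."""
    def order_status(self, ticket_id: str) -> str:
        # The backend decides what the bot may see; here, a canned reply.
        return f"Ticket {ticket_id}: in progress"

class DumbChatbot:
    """What it can't access, it can't divulge."""
    def __init__(self, api: SupportAPI):
        self.api = api  # the only route to any data

    def respond(self, message: str) -> str:
        if "status" in message.lower():
            # Extract only the ticket ID; no account details ever pass
            # through the bot's own state, so none can leak via its output.
            ticket_id = message.split()[-1]
            return self.api.order_status(ticket_id)
        return "Sorry, I can only check ticket status."

bot = DumbChatbot(SupportAPI())
print(bot.respond("What's the status of ticket T-123"))
```

Even if a user tricks this bot into saying something odd, the blast radius is limited to whatever the backend API was willing to hand over in the first place.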
-
-
Friday 20th September 2019 22:02 GMT Anonymous Coward
Re: Anyone surprised?
Not just AI researchers .... probably around 10 years ago, "activists" in the US were gaming the Amazon recommendation system so that people looking at books by Republicans got "interesting" suggestions under "people who looked at this also looked at ...". Then there was the person who, in the lead-up to the 2nd Gulf War, managed to seed webpages so that a Google search for "Great French Military victories" returned "did you mean Great French Military defeats", with a suitable page they'd produced as the first result.
-
Saturday 21st September 2019 08:23 GMT Anonymous Coward
Re: Anyone surprised?
One wonders if the current crop of kids even bothered to read the research from back then.
Read??? What a quaint idea! The bunch at work can't be bothered to Read The Fine Manual for damned near everything they use, and have the memory capacity of a gnat; I have to keep reminding them of things I've already gone through with them.
-
-
Friday 20th September 2019 22:27 GMT Anonymous Coward
Easily fixed
Don't allow further learning after the initial training in an automated fashion — that's what lets people get real-time results from their tomfoolery.
Instead, have it use only the original training data, and carefully feed it additional training manually (which could be the conversations it had during its first month). Then put this "smarter" chatbot out to a small test population to make sure it didn't learn anything you don't want it to.
Though I have to say if you are training it with 2.5 million Twitter conversations it would take a lot of effort to make it worse off!
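The freeze-then-curate loop described above can be sketched as below. Everything here (`FrozenChatbot`, the log/approve/retrain flow) is an invented illustration of the poster's idea, not any real system: the deployed model never learns online; conversations are only logged, and a human gate decides which of them join the next training run.

```python
# Sketch of the suggestion: weights frozen at deploy time, conversations
# logged but not learned from, and a manual approval gate before retraining.

class FrozenChatbot:
    def __init__(self, training_data):
        self.training_data = list(training_data)  # fixed at deploy time
        self.conversation_log = []                # collected, never learned from

    def respond(self, message: str) -> str:
        self.conversation_log.append(message)     # log for later human review
        # Replies come only from the frozen training data, never the log,
        # so real-time tomfoolery can't change the bot's behaviour.
        return "canned reply from the frozen model"

    def retrain(self, approved: set) -> None:
        """Manual gate: only human-approved conversations join the corpus."""
        self.training_data.extend(m for m in self.conversation_log if m in approved)
        self.conversation_log.clear()

bot = FrozenChatbot(["hello -> hi"])
bot.respond("teach yourself something rude")
bot.respond("what are your opening hours?")
bot.retrain(approved={"what are your opening hours?"})
print(len(bot.training_data))  # the rude attempt never made it into the corpus
```

The cost is the usual one: the bot improves in monthly batches instead of in real time, which is exactly the point.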
-