Now that's sticker shock: Sticky labels make image-recog AI go bananas for toasters

Despite dire predictions from tech industry leaders about the risks posed by unfettered artificial intelligence, software-based smarts remain extremely fragile. The eerie competency of machine-learning systems performing narrow tasks under controlled conditions can easily become buffoonery when such software faces input …

Unhappy

Still no laughing matter

Seeing as I expect "AI" and "ML" to be pushed out regardless by our masters and overlords, no matter how faulty and erroneous they are, I am always reminded of that scene in "Brazil" where a smooshed fly causes a name to be misidentified and the wrong bloke gets arrested and tortured to death.

26
0
Anonymous Coward

Re: Still no laughing matter

They literally just announced using AI and ML to vet social relief funding in the UK.

1984 is a few decades behind, it would seem.

11
0
Silver badge

Re: Still no laughing matter

Yes but think of the fun we'll have sharing exploits to fool AI in hilarious ways!

3
0
Silver badge
FAIL

Re: Still no laughing matter

To call the claim-vetting process AI is pure bunkum. It is more akin to basic intelligence methods from about the 19th century, if not before. The claim processing will check for information repeated across multiple claims from multiple locations, e.g. the same bank details or telephone number, the same or very similar details in claim letters, etc., and pass them on for the wetware to examine with more care.

That sounds like what I used to call basic pattern recognition though often done by human means when tracking down other sorts of crimes.
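That cross-claim matching can be sketched in a few lines. A minimal illustration in Python, with invented field names and sample data: group claims by shared identifying fields and flag any value that turns up under more than one location.

```python
# Toy sketch of duplicate-field vetting: flag claims that share "identifying"
# fields (bank account, phone) across different locations. All field names
# and sample data here are made up for illustration.
from collections import defaultdict

def flag_suspicious(claims, keys=("bank_account", "phone")):
    """Flag claims whose identifying field values appear in claims
    from more than one location."""
    seen = defaultdict(set)  # (field, value) -> set of locations it appears in
    for claim in claims:
        for key in keys:
            seen[(key, claim[key])].add(claim["location"])
    flagged = {fv for fv, locs in seen.items() if len(locs) > 1}
    # Pass the matching claims on for the wetware to examine with more care.
    return [c for c in claims
            if any((k, c[k]) in flagged for k in keys)]

claims = [
    {"location": "Leeds", "bank_account": "12-34-56", "phone": "0113 111"},
    {"location": "Luton", "bank_account": "12-34-56", "phone": "0158 222"},
    {"location": "Truro", "bank_account": "99-99-99", "phone": "0187 333"},
]
suspects = flag_suspicious(claims)  # first two share bank details
```

Nothing about this needs "AI": it is a hash-and-count pass, the same basic pattern recognition described above.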

12
1
Silver badge

Re: Still no laughing matter

"pass them for the wetware to examine with more care."

Pass them to the wetware for them to fulfil their required quota of rejected claims.

4
3
Silver badge

Re: Still no laughing matter

And the sticker seems to be similar to the dazzle pattern used to disguise ships in the early 19th century.

3
0
Silver badge

Re: Still no laughing matter

"And the sticker seems to be similar to the dazzle pattern used to disguise ships in the early 19th century."

1914-1918 is the 20th century.

Does the image here represent the actual sticker used, or did someone read the story and then photograph a toaster then go nuts with Instagram filters? Should I see a toaster when I look at the sticker?

If you paste a picture of a toaster into a photograph of a banana, should AI not see a toaster in the picture?

What about the "door security" scene in [The Fifth Element]?

1
0
Silver badge

Current AI development reminds me of Internet security

A- Throw something semi functional out there.

B- Somebody breaks it.

C- Patch what breaks, try again.

D- Go to step B.

E- Success! (Shame you can't get here)
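The steps above, written as the loop they really are (a joke sketch; `max_rounds` is just where we give up, since step E is unreachable by construction):

```python
import itertools

def develop_ai(max_rounds=5):
    """The A-D cycle from the comment: ship, break, patch, repeat.
    Returns the round-by-round history; 'Success' never appears in it."""
    history = ["A: ship something semi-functional"]
    for round_no in itertools.count(1):
        if round_no > max_rounds:  # exhaustion, not success, ends the loop
            break
        history.append(f"B: somebody breaks it (round {round_no})")
        history.append(f"C: patch what breaks, try again (round {round_no})")
    return history

log = develop_ai()
```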

23
0
Silver badge

Re: Current AI development reminds me of Internet security

If only it were possible to program some sort of artificial brain to perform this set of steps for itself.

6
0
Bronze badge

"The researchers conclude that those designing defenses against attacks on machine learning models need to consider not just imperceptible pixel-based alterations that mess with the math but also additive data that confuses classification code."

Which pushes the Butlerian Jihad further off by another century, maybe more. Species traitors will go to the melt at the same time as the machines.
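The quoted distinction, as a toy sketch in pure Python (the "image" and all values are invented): an imperceptible perturbation nudges every pixel slightly, while an additive patch overwrites one region wholesale.

```python
def perturb(image, eps=1):
    """Imperceptible-style attack: shift every pixel value by a tiny amount."""
    return [[px + eps for px in row] for row in image]

def paste_patch(image, patch, top, left):
    """Additive-data attack: overwrite a region with the patch (the 'sticker')."""
    out = [row[:] for row in image]
    for i, prow in enumerate(patch):
        for j, pval in enumerate(prow):
            out[top + i][left + j] = pval
    return out

image = [[10] * 4 for _ in range(4)]        # toy 4x4 greyscale "photo"
noisy = perturb(image)                       # every pixel changed, slightly
stickered = paste_patch(image, [[255, 255],
                                [255, 255]], 1, 1)  # four pixels changed, a lot
```

A defence tuned to spot tiny whole-image shifts says nothing about the second case, which is the researchers' point.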

6
0

Great

Where can I get a few of these, for when the machine uprising kicks in and I can be safely identified as a toaster and carry on my daily business.

20
0

Re: Great

You might not be so happy when another machine comes along and tries to insert the bread!

24
0
Anonymous Coward

Re: Great

"I'm sorry Sir, it says here that electrical gear must be stowed in the hold"

12
0
TRT
Silver badge

Re: Great

Aah, so you're a waffle man.

27
0
Silver badge

Re: Great

You might not be so happy when another machine comes along and tries to insert the bread!

Ah, the future, where an added peril of being an anti-cyber activist is a toasted muffin jammed somewhere uncomfortable.

Almost a harmless fetish, considering the tribes of roving lethally armed convenience devices with learning difficulties zipping about.

10
0
Anonymous Coward

Re: Great

It's not the bread you have to worry about if they are unsure what is a banana.

9
0
Silver badge

Re: "Where can I get..."

Taking you terribly, terribly seriously: https://arxiv.org/pdf/1712.09665.pdf

1
0
Silver badge

Re: Great

You just need your bog standard psychedelic t-shirt and you're sorted. Machines don't have good trips when they're confronted with LSD.

2
0

Re: Great

Clearly it's a female aardvark.

0
0
Silver badge
Pint

To be fair...

To be fair, the "psychedelic graphic" does (sort-of) look a bit like a highly-reflective, chrome-plated toaster in a colourful environment.

First thing that the AI creators might wish to consider is the possibility that the image frame contains more than one object. Assuming there is exactly one object is precisely daft.

I've noticed BS claims being made (on the various tech news shows) about facial recognition again. BBC WS mentioned a system that seemed to claim accurate recognition from only a few pixels. They should make it a capital offense to over-hype such things.

18
0
Silver badge

Re: To be fair...

Do that and you can still confuse the system by making things that look like more or fewer items than they really are, or that should really be seen as a collective rather than as individual items. Worse still, this kind of trickery works on humans too (think of the old attached-by-transparent-thread prank), so good luck getting a machine to see through it.

2
0
Silver badge

Re: To be fair...

Yes, this is typical of narrow minded and daft "AI" attempts.

What is in the picture is an open-ended question; however, they are attempting to train it on "what single object is in the picture", which, when presented with a picture containing multiple objects, even if one happens to be represented by a sticker, naturally fails.
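A toy illustration of that failure mode, with made-up logits: a single-label softmax head must split one unit of probability between banana and toaster, so strong toaster evidence drowns the banana, whereas independent per-class sigmoids can happily report both.

```python
import math

def softmax(logits):
    """Single-label head: scores compete and must sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(x):
    """Multi-label head: each class is scored independently."""
    return 1 / (1 + math.exp(-x))

# Hypothetical logits for a photo containing a banana AND a toaster sticker.
logits = {"banana": 2.0, "toaster": 4.0}

single_label = dict(zip(logits, softmax(list(logits.values()))))
multi_label = {k: sigmoid(v) for k, v in logits.items()}
```

Under softmax the banana is "gone"; under sigmoids both objects clear a 0.5 threshold.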

1
0
Anonymous Coward

Ahah!

So this is why people in the future are shown wearing shiny silver suits.

To ward off the rise of the machines...

19
0
Silver badge
Big Brother

Re: Ahah!

You know too much, Citizen...

3
0
Bronze badge

Just in case

I for one welcome our new robot overlords

I'm the one hiding behind the banana with a glittery sticker in my hand

3
1
Silver badge
Joke

Re: Just in case

But what happens if ED-209 mistakes your glittery stickered banana for a gun?

0
0
Silver badge

I was taught that bananas make excellent weapons

But then I had a very silly teacher.

3
0
Bronze badge

The procedure

to follow is to subtract the "recognized" object from the image until no object is found above a certain threshold. That way, the banana will be found after the toaster.
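The procedure sketched in Python, with a stand-in detector over a hard-coded "scene" (a real system would rescan the masked image on each pass, which is the part this toy skips):

```python
def best_detection(scene):
    """Stand-in for one classifier pass: return the highest-scoring object."""
    return max(scene, key=lambda obj: obj["score"], default=None)

def detect_all(scene, threshold=0.5):
    """Report the top object, 'subtract' it, and rescan until nothing
    remains above the threshold - so the banana turns up after the toaster."""
    scene = list(scene)
    found = []
    while True:
        obj = best_detection(scene)
        if obj is None or obj["score"] < threshold:
            break
        found.append(obj["label"])
        scene.remove(obj)  # subtract the recognized object from the image
    return found

scene = [{"label": "banana", "score": 0.7},
         {"label": "toaster", "score": 0.9},  # the sticker wins round one
         {"label": "smudge", "score": 0.2}]   # noise, stays below threshold
order = detect_all(scene)
```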

0
0
Silver badge

Re: The procedure

Not if the sticker is put ON the banana, thus making the end result NO banana found.

0
0
TRT
Silver badge

Re: The procedure

Multiple object detection... reports top banana, bottom toaster.

0
0
Silver badge

Re: The procedure

Put the sticker ON the banana. Now it reports a toaster and NO banana because it's tricky enough for humans to recognize two separate items on top of each other (they could easily be a combined item where the pieces are stuck together), let alone a machine.

How about this for a challenge. Can a visual recognition system identify something without even seeing it (such as the ball of a paddle ball that you can guess is there because the paddle is not sitting flat, meaning it's probably on top of and covering its ball)?

0
0
Gold badge
Coat

Philosophically intriguing.

This is (in a sense) optical malware, designed to disrupt the normal functioning of a NN.

If multiple stickers were presented to it in a sequence could each disrupt the NN in a specific way?

Could that sequence make the NN do "useful" work for the creators of the image sequence?

And since humans are NNs too, could the same process be applied to us?

At the very least it's a nice little meme to seed a few SF stories.

0
0
Silver badge
Childcatcher

Re: Philosophically intriguing.

Aha- BLIT

By David Langford. Quite Creepy.

3
0
Gold badge
Coat

Aha- BLIT. By David Langford.

After I suggested it I remembered "Snow Crash" loosely hinges around a similar idea.

2
0
Anonymous Coward

Is that a toaster in your pocket, or are you just pleased to see me? Said the AI to the man with psychedelic pants.

2
0
Silver badge

The new SWATing

Create "This is really a gun" sticker.

Stick it on someone's back.

Wait for police to zoom in and shoot them.

8
0
Silver badge

This is why you can't have an automated adult-image filter of any worth.

The second someone can just put something small onto an image and radically change its categorisation without actually changing the overall nature of the image, you know it's going to end up in things like that to stop unwanted categorisation.

And vice-versa... some poor guy with a hacker's conference sticker on his backpack gets scanned by an automated system as having a rifle as he transits an airport, for example.

Until we understand what the "AI" (pfft) is actually doing to categorise, which criteria it's using, we can't make any comment on its accuracy or otherwise. Train a human to recognise something like a banana and they can tell you they are looking for a particular shape, size, colouration, orientation and apply those criteria using their learned knowledge of the object to identify zipped, unzipped, facing the camera or away, broken, twisted, ripe, unripe, etc. bananas. Train an AI and you literally have no idea whether or not it's just decided "if the center pixel is yellow, call it a banana" or some other random criteria that happens to fit "most" images of bananas but also a huge variety of other images and which can be turned to false detection by anyone willing to experiment.

This kind of "throw data at something AI" stuff is really doomed to failure, except where it really doesn't matter at all and where a human would be cheaper to employ anyway (e.g. a banana factory).
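The "centre pixel is yellow" worry, made concrete (toy images as grids of colour names; everything here is invented for illustration): a rule that perfectly fits the training photos can still have nothing to do with banana-ness, and misfires in both directions on anything new.

```python
def centre_pixel_is_yellow(image):
    """A degenerate 'learned' rule: call anything with a yellow
    centre pixel a banana."""
    return image[len(image) // 2][len(image[0]) // 2] == "yellow"

banana_photo = [["green", "yellow", "green"],
                ["yellow", "yellow", "yellow"],
                ["green", "yellow", "green"]]
rubber_duck = [["white", "white", "white"],
               ["white", "yellow", "white"],
               ["white", "white", "white"]]
unripe_banana = [["green", "green", "green"],
                 ["green", "green", "green"],
                 ["green", "green", "green"]]

results = {
    "banana_photo": centre_pixel_is_yellow(banana_photo),    # correct
    "rubber_duck": centre_pixel_is_yellow(rubber_duck),      # false positive
    "unripe_banana": centre_pixel_is_yellow(unripe_banana),  # false negative
}
```

Without inspecting the model, "fits the training set" and "uses this rule" are indistinguishable from the outside, which is the commenter's point.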

6
0
Silver badge

"Train a human to recognise something like a banana and they can tell you they are looking for a particular shape, size, colouration, orientation and apply those criteria using their learned knowledge of the object to identify zipped, unzipped, facing the camera or away, broken, twisted, ripe, unripe, etc. bananas."

And then you trick them with a plantain... or a carefully stuck-back-together banana with something else inside. We can fool humans. Machines don't stand a chance.

"This kind of "throw data at something AI" stuff is really doomed to failure, except where it really doesn't matter at all and where a human would be cheaper to employ anyway (e.g. a banana factory)."

Not necessarily. Remember that humans have continual costs and limited working hours. Why else do you think machines are replacing humans elsewhere?

4
0
Silver badge

There's a difference there...

That's quite a reasonable mistake to make. Thinking any silver blob next to a banana turns the banana into a toaster is not.

The human will apply the categories learned, and adjust if you say "no it's not". The AI can't without expensive retraining from scratch, and such retraining is liable to taint existing detection too. The human learns, the machine doesn't (despite the moniker "machine learning").

Everywhere I see computers replacing humans, they are incredibly dumbed down and not applying intelligence at all. Supermarket checkouts... are they "guessing" users' ages like humans do? No. They need a human. You use computers and machines where you can describe the required task exactly. If you can't, you get unreliable and unpredictable results. Anywhere it matters, you have a human. Anywhere it doesn't matter (e.g. a banana factory), well, it doesn't matter: human and computer are on a par, because the computer might be quicker but it's dumber too.

The car park wouldn't let me out last night as it read my number plate (beginning with LL) as something else for the ticket (beginning with CL). I had to actually put the ticket into the machine.

Pretty much this is what AI / ML / recognition has always been... works okay, but is far from infallible, and is only used where being wrong doesn't matter. Voice recognition literally cannot understand my voice, but all humans who speak my language can. Image recognition is essentially atrocious and easy to mislead without extra controls. The difficulty of text recognition is the entire basis of CAPTCHAs... computers are so bad at it and always have been (who actually OCRs nowadays?). Anything requiring interpretation of complex data... don't give it to a machine unless the machine is told exactly what to do.

This is precisely why you don't want a "self-driving" car, by the way. Not that you can't make a self-driving car. But one that tries to be human to self-drive is a dangerous and unreliable beast.

We are literally DECADES, at least, from any decent amount of AI; I would actually posit that we DON'T have it, in any substantial form, today. Precisely because you cannot tell what it's doing, you cannot control it sufficiently, and therefore cannot fix it when it's wrong.

3
0
Silver badge

"This is precisely why you don't want a "self-driving" car, by the way. Not that you can't make a self-driving car. But one that tries to be human to self-drive is a dangerous and unreliable beast."

The problem with this example is that the HUMAN is a PROVEN dangerous and unreliable beast, given the spate of traffic accidents reported in the news every day. Add in the human fallibilities of fatigue, drug inducement, anger, etc., and you've just set a very low bar.

2
0
Anonymous Coward

Surely a sticker with a picture of a toaster on it would work just as well?

1
0

I'm wondering why we need AI to distinguish toasters from bananas in the first place.

9
0
Silver badge

Am I the only one wondering....

... what happens if they put that sticker next to a toaster?

6
1
Silver badge

Malice not necessary

"The eerie competency of machine-learning systems performing narrow tasks under controlled conditions can easily become buffoonery when such software faces input designed to deceive."

That last part should read "...when such software faces input it wasn't trained on.". The fundamental problem is that machine learning relies on a very limited learning set with the assumption that it will be representative of everything it will ever encounter in the real world. Since there are effectively infinite possible images, that assumption is never true. Deliberately designing input to fall outside the trained area obviously finds issues more easily, but once we start rolling out these systems to analyse billions of daily images from security cameras, social media, and so on, such issues are going to pop up constantly even with no design involved at all.
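The closed-world failure in miniature (a nearest-centroid classifier over an invented one-dimensional "feature space"): without an explicit reject threshold, the model must assign some trained label to every input, however far outside its training set it falls.

```python
def nearest_centroid(x, centroids, reject_distance=None):
    """Assign x to the closest class centroid. With no reject_distance,
    the model answers 'confidently' even for inputs nothing like its
    training data - there is no way to say 'I don't know'."""
    label, centroid = min(centroids.items(), key=lambda kv: abs(x - kv[1]))
    if reject_distance is not None and abs(x - centroid) > reject_distance:
        return "unknown"
    return label

centroids = {"banana": 1.0, "toaster": 9.0}  # toy trained classes

in_distribution = nearest_centroid(1.2, centroids)          # near training data
forced_answer = nearest_centroid(500.0, centroids)          # wildly OOD, still labelled
with_reject = nearest_centroid(500.0, centroids, reject_distance=5)
```

The reject threshold is the crude fix; real deployments analysing billions of uncurated images mostly ship without one.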

1
0
Silver badge

Re: Malice not necessary

So what's missing in machines that makes humans better able to work outside the box?

0
0
Silver badge

Re: Malice not necessary

"So what's missing in machines that makes humans better able to work outside the box?"

If I knew the answer to that, I'd be far too rich to post it here.

0
0
Bronze badge

Only a few pixels

"I've noticed BS claims being made (on the various tech news shows) about facial recognition again. BBC WS mentioned a system that seemed to claim accurate recognition from only a few pixels. They should make it a capital offense to over-hype such things."

You'd be surprised how much data can be obtained by just ONE single pixel:

https://www.facebook.com/impression.php/f2441a81bf8bca8/?lid=115&payload={%22source%22%3A%22jssdk%22}

1
0

To be fair...

that "psychedelic graphic" does look like a toaster to me, so is it really so shocking that an AI thinks so too? If anything it's showing human-like thinking.

1
0
Anonymous Coward

AKA

BUY OUR GOOGLE AI INSTEAD!

1
0
