Reply to post: Re: Tainting of Training Datasets etc

Explain yourself, mister: Fresh efforts at Google to understand why an AI system says yes or no

Martin an gof Silver badge

Re: Tainting of Training Datasets etc

The tanks one has been around for ages. I'd almost swear I remember being told that story back in the 1990s, when computers started to become powerful enough and cheap enough to dream that they might one day be usefully employed on this sort of task (think automated defences).

The way I remember it, the system was supposedly trained to distinguish between 'friendly' and 'enemy' tanks. When it failed in real life, they discovered that all the images of friendly tanks had been well-lit, uncluttered shots, while the 'enemy' images were grainy and often taken on dull or wet days, so the model had essentially boiled it down to sunny = friend, rainy = enemy. Of course, back then it wasn't called 'AI', it was an 'expert system' or somesuch.
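Purely as a sketch of my own (the data below is made up to mimic the alleged bias, bright 'friendly' shots versus dark 'enemy' ones), that kind of shortcut is easy enough to expose: feed a classifier nothing but each image's average brightness and see whether that alone predicts the label.

```python
# Minimal illustration: synthetic 'biased' dataset where friendly
# images are bright and enemy images are dark, then a classifier
# trained on mean brightness alone.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 200 fake 32x32 greyscale "photos": friendly ones bright, enemy ones dark.
friendly = rng.normal(0.7, 0.1, size=(100, 32, 32))
enemy = rng.normal(0.3, 0.1, size=(100, 32, 32))
images = np.concatenate([friendly, enemy])
labels = np.array([1] * 100 + [0] * 100)

# Reduce every image to a single feature: its average pixel value.
brightness = images.reshape(len(images), -1).mean(axis=1, keepdims=True)

shortcut = LogisticRegression().fit(brightness, labels)
print("Brightness-only accuracy:", shortcut.score(brightness, labels))
# Near-perfect accuracy from brightness alone means the dataset, not
# the tanks, is doing the work: sunny = friend, rainy = enemy.
```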

I wondered at the time whether getting a system to recognise a 'whole' was really the right way to do it, when recognising 'parts' might be easier, and the recognition of the whole could then be based on the parts found and their physical relationships.
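Something along these lines, purely as illustration of what I mean by 'parts first' (the part detector here is imaginary):

```python
# Rough sketch of the 'parts first' idea. detect_parts() is a
# hypothetical detector returning (label, centre_y) pairs for each
# part it finds, with y increasing down the image. The 'whole' is
# accepted only if the right parts appear in a plausible layout.

def looks_like_tank(image, detect_parts):
    parts = dict(detect_parts(image))

    required = {"turret", "hull", "tracks"}
    if not required.issubset(parts):
        return False

    # Physical relationship check: turret above the hull, tracks below it.
    return parts["turret"] < parts["hull"] < parts["tracks"]
```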

Maybe it needs additional inputs as you suggest - IR is a good start, and radar. The military already have tracking systems for missiles that use these sensors in 'intelligent' ways. Combining them with depth information would also provide additional data points.
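At its crudest, combining them need be no more than stacking the extra channels onto the camera image before classification. A minimal sketch, assuming the readings have already been aligned and calibrated to the same grid (which is the hard part in practice):

```python
import numpy as np

def fuse_inputs(rgb, infrared, depth):
    # rgb: (H, W, 3), infrared: (H, W), depth: (H, W), all co-registered.
    # Returns one (H, W, 5) array so a downstream model sees every sensor.
    return np.dstack([rgb, infrared, depth])
```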

Judging by what I see in cars, though, the goal seems to be to perform recognition on the least amount of information possible - often images from a single simple camera. The speed sign recognition system in my wife's car is proof positive that this approach doesn't work, even in exceptionally simple and limited use cases!

M.
