Re: The Nasty Little Truth About Deep Learning
A network's learning tends to follow a logarithmic pattern: it learns quickly at first, then plateaus, and after that it takes an enormous amount of effort to "untrain" it onto something else.
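As a toy sketch of that plateau (my own illustration, not the author's): fit a one-weight logistic model to separate x < 0 from x > 0 by gradient descent, and watch how little each extra order of magnitude of training actually buys.

```python
import math, random

# Hypothetical toy data: 200 points in (-1, 1), labelled by sign.
random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(200)]
data = [(x, 1.0 if x > 0 else 0.0) for x in xs]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w):
    eps = 1e-12  # clamp probabilities so log() stays finite once the fit is very good
    total = 0.0
    for x, y in data:
        p = min(max(sigmoid(w * x), eps), 1 - eps)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(data)

w, lr = 0.0, 0.5
history = {}
for step in range(1, 10001):
    # average gradient of the cross-entropy loss over the data
    grad = sum((sigmoid(w * x) - y) * x for x, y in data) / len(data)
    w -= lr * grad
    if step in (1, 10, 100, 1000, 10000):
        history[step] = loss(w)
        print(f"step {step:>5}: loss {history[step]:.4f}")

# The loss falls fast in the first handful of steps, then each extra
# 10x of training buys less and less: the plateau described above.
```

The numbers and learning rate are arbitrary; the shape of the curve is the point.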
This is why most of these "AI" things peak with basic functionality: after 100 training passes it might get the idea, but between 100,000 and 1,000,000 passes it improves very little indeed. It also becomes MUCH harder at that point to change what it was trained on, because it may have reinforced the wrong parameters a million times and you can only feed it a handful of corrections.
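The arithmetic behind "a handful of corrections against a million reinforcements" can be sketched like this (my own simplification, treating a learned weight as a plain running average):

```python
# Hypothetical numbers: a weight reinforced a million times with the
# "wrong" signal (1.0), then nudged with ten corrections (0.0).
reinforcements = 1_000_000
corrections = 10

total = reinforcements + corrections
estimate = (reinforcements * 1.0 + corrections * 0.0) / total

print(f"estimate after corrections: {estimate:.6f}")
# Still ~0.99999 -- the ten corrections are statistically invisible.
```

Real networks aren't literal running averages, but the proportion problem is the same: a handful of counter-examples barely moves parameters that a million examples put in place.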
This is why Google doesn't have just one massive AI doing all their AI jobs (a "Viki", if you like). They start fresh each time and train only on what they want it to know, because once the plateau strikes, it's no fun beating your head against a brick wall. Their Go robot loses at poker; their poker robot loses at Go.
It's also how PhD students in the area operate: train a model, get it to do something interesting, realise you can't make it do any more, write up the paper, flee to some high-paid job.
Anything sold to you as "AI" today is lying. It's not even close. It's just a huge statistical model with heuristics to tune it towards what you want it to do. It's not intelligent in any way; it's just seeking statistical similarities with its training material. And the broader the training material, the slower, harder and less reliable any particular result will be (e.g. train it to recognise both bananas and apples and it will start putting things in the wrong group, whereas training it on bananas alone, yes/no, is far more tractable).

And the best bit: being "AI", you have absolutely no idea what criteria it's judging on. You train it, sure, but is it just looking for "image is mostly yellow", or "image has mostly yellow in the middle", or "image has a curve", or what? You have no idea what hidden criteria it's associating with the banana images you're training it on. Which means you have no idea how it will react to any one image, you have to counter-train it (i.e. feed it lots of things that are not bananas), and you will find it very difficult to modify its behaviour later on if it turns out not to be looking for what you think.
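To make the hidden-criteria problem concrete, here's a deliberately dumb "banana detector" of my own invention. All the names, pixel values and the threshold are made up; the point is that a rule like "mostly yellow" passes every banana you trained on, and you'd never know that's all it learned until something else yellow turns up.

```python
# Hypothetical sketch: images are tiny lists of (R, G, B) pixels, and the
# "model" secretly learned nothing but "mostly yellow".
YELLOW = (255, 220, 0)

def fraction_yellowish(pixels):
    def close(p):
        return all(abs(a - b) < 60 for a, b in zip(p, YELLOW))
    return sum(close(p) for p in pixels) / len(pixels)

def looks_like_banana(pixels, threshold=0.5):
    return fraction_yellowish(pixels) > threshold

banana      = [(250, 215, 10), (240, 210, 5), (90, 70, 20)]    # yellow body + brown stem
tennis_ball = [(230, 235, 30), (225, 230, 25), (220, 225, 40)]  # also mostly yellow

print(looks_like_banana(banana))       # True -- looks like it works
print(looks_like_banana(tennis_ball))  # True -- confident false positive
```

Every banana in "training" satisfies the rule, so nothing in the training loop would ever flag it; only the tennis ball does.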
Pretty much, there's not much difference between what people are pushing as "AI" and a Bayesian spam filter. Sure, they're useful. But they're far from reliable or predictable. At the end of the day it takes a human feeding it enough data (not just the emails but "this was spam", "this wasn't spam") to get it anywhere close to useful, and even then it can be undone by the first thing it hasn't encountered before.
That's a worrying trait in a machine that's driving your car in the real world. If a UFO parked itself on the M25, people would still recognise it as a hazard and know how to stop their cars safely. "AI" like this won't necessarily recognise it as anything at all, and you have absolutely no way to tell what it'll do until the day it happens.