How many billions of dollars are being spent chasing this?
Wouldn't it be much more efficient to simply teach people how to drive properly in the first place?
Self-driving cars won’t learn to drive well if they only copy human behaviour, according to Waymo. Sure, people behind the wheel are more susceptible to things like road rage or the urge to run red lights but they’re pretty good drivers overall. When researchers from Waymo, the self-driving arm under Google’s parent company …
"In shock news, AI doesn't adequately map a solution space without being given sufficient training data representative of that solution space."
And this is why I'm not particularly worried about AI taking over anytime soon. A driver might not be trained for every situation but can generalise and extrapolate (ok, some not so much). A neural net only knows what it's been taught and can only extrapolate to a small extent. A better example is image recognition: a net has to be shown thousands, perhaps millions of pictures of dogs at all different angles before it can recognise a dog with reasonable accuracy. A baby only has to be shown a dog once or twice and it knows "dog". Until ANNs can do this, they're little more than useful but dumb statistical recognisers.
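The "shown a dog once" setting isn't entirely out of reach, though: few-shot methods label a new example by comparing it against one stored prototype per class. A toy Python sketch, with completely invented feature vectors (nothing like a real vision model):

```python
# Toy one-shot classifier in the nearest-neighbour spirit: label a new
# example by its closest stored prototype. Feature vectors are invented
# (say [leg_count, ear_floppiness, shoulder_height_m]).

def one_shot_classify(example, labelled):
    """Return the label of the nearest single stored prototype."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labelled, key=lambda item: dist(example, item[1]))[0]

# One labelled example per concept -- the "shown a dog once" setting.
prototypes = [("dog", [4, 0.9, 0.3]), ("goat", [4, 0.2, 0.5])]

print(one_shot_classify([4, 0.8, 0.35], prototypes))  # nearest the dog prototype
```

The catch, of course, is that this only works once you already have good features, which is exactly the part that takes the millions of examples.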
millions of pictures of dogs at all different angles before it can recognise a dog with reasonable accuracy. A baby only has to be shown a dog once or twice and it knows "dog"
Unfortunately I only seem to get quizzed about buses, store fronts, traffic lights, "crosswalks" and fire hydrants whenever I have to prove I'm not a robot. I assume they'll get around to "people", "dogs" and "babies" at some stage - until then, stay off the streets!
A baby only has to be shown a dog once or twice and it knows "dog".
At what age? It takes months for babies to be able to do anything non-instinctive. There do seem to be some hardwired behaviours, such as identifying and watching faces, but cognitive processing takes years. Oh, and there are lots of examples of how easy it is to fool human cognitive processing, precisely because it depends upon some of the shortcuts we see in some machine learning.
> At what age?
My 2-year-old saw a deer for the first time ever, while we were all looking the other way, and immediately said "goat!"
Which, considering she'd only ever seen goats once, about 3 months before, was a pretty good extrapolation, and one that no computer model I'm aware of could currently match.
So I guess "baby" is an exaggeration, but "infant" or "toddler" would be more accurate.
I just typed 'sheep' into my google photos app. I have never asked it to search my photos for that before. Not only did it find pictures of sheep, some were in the far background, others were cartoon characters on a mug.
Pretty good for a device that has never been asked about one before.
Ahh... but they will have machine learning so will know what a sheep looks like from millions of captioned photos!
Yes, exactly. The good thing about the rise of the machines is that they can instantly learn from each other and acquire new knowledge at the same time. They are simultaneously learning from thousands or millions of hours of experience every day. Every person has to relearn it all from scratch. Every person has to go through a driving test, get instructed, read the Highway Code and learn from experience, getting very different end results in the process. When they go abroad they need to learn new rules (and driving styles) as they go.
A machine can do it instantly, new road regulations could (in theory) come into force with a few days notice - if it was a simple rule, rather than relying on a big publicity campaign.
Assuming limited failures, once learnt they will never be influenced by tiredness, music, a bad day, stress, etc.
While all this "cognitive processing takes years" sounds good in theory, you have to remember, your child is learning from day 1.
First day they are learning how to see. Like, literally what is up, down, left, right of the visual feed, and how to map it to movements of the eyes and/or rest of the body. It may mean the other learning and processing things look to kick in later, but in reality it's all being built on and processed from the beginning.
It is just that a child only gets the linguistic ability and the reward/risk feedback later on to actually do something about most of the more complex stuff. The training data for a self-driving car, "road signs, road markings", is child's play as a data set compared to what a toddler or newborn baby learns.
"t takes months for babies to be able to do anything non-instinctive."
Don't think that nothing's happening during that time. For one thing it's correlating what it can see with what it can touch and coming to understand the concept of solid objects. At that point it's achieved something that AI doesn't do. It might be one reason why the AI crashes into things as reported in TFA; it doesn't know that the car in front is solid because it doesn't understand solid (or anything else for that matter).
'shock news' heh.
from article: "Neural networks are notoriously data hungry; it takes them millions of demonstrations in order to learn a specific task."
Well, in theory, 'once learned' the concepts can be copied. But I suspect that using raw neural network learning is grossly inefficient.
Some things are intuitively obvious, like staying in the lane, stopping at a stop sign, and so on. Being able to recognize what a "lane" is and what a "stop sign" is should be solvable as separate problems.
But of course, there do not appear to be enough details as to how they're really going about this.
I see this, instead, as an opportunity to just "hard code" some basic rules in there, to avoid having to run a million simulations that come up with the same "conclusion" in the AI [and it'll probably RUN faster on the hardware]. So "nice try" to the AI people, who, like the proverbial hammer, probably see everything as a nail...
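The "hard code the basics" idea can be sketched as a hybrid controller: hand-coded rules fire first, and the learned policy only runs when none of them apply. All names and thresholds below are hypothetical, not anything Waymo has described:

```python
# Sketch of a hybrid controller: hand-coded rules take precedence, and the
# learned policy is only consulted when no rule fires.

def rule_based_action(perception):
    """Return an action if a hard-coded rule applies, else None."""
    if perception.get("stop_sign_ahead") and perception.get("speed", 0) > 0:
        return "brake"
    if perception.get("lane_offset", 0.0) > 0.5:    # drifting right (metres)
        return "steer_left"
    if perception.get("lane_offset", 0.0) < -0.5:   # drifting left
        return "steer_right"
    return None

def choose_action(perception, learned_policy):
    action = rule_based_action(perception)
    if action is not None:
        return action                   # cheap, predictable, auditable
    return learned_policy(perception)   # fall back to the neural net

# The "learned policy" here is just a stand-in stub.
print(choose_action({"stop_sign_ahead": True, "speed": 30}, lambda p: "cruise"))
print(choose_action({"lane_offset": 0.1}, lambda p: "cruise"))
```

The attraction is that the rule layer is auditable and runs in constant time; the obvious drawback is that somebody has to anticipate every rule, which is the problem learning was supposed to solve.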
It's arguable that for at least the last 70 years most development around cars has been about improving safety and avoiding the problems caused by the meatware drivers. The costs, both financial and in lives, have been staggering.
While I don't think that autonomous vehicles will be suitable for everything, I do think that they will learn faster from their mistakes than each new generation of meatware.
I'd like to invite them to come up and test their shiny neural nets here in Canada, in winter. Come on, guys, you want to replace human drivers? Show us your programming skills! Oh, and while you are at it, consider developing an algorithm for snow shovelling and windshield ice scraping, the most popular winter sports around here.
Thinking about winter weather...
Being blown off an icy road by strong crosswinds, especially for high-profile vehicles, might be a nice "anomaly" to add to their list.
Then there are hydroplaning conditions when raining, which might require you NOT to make any sudden adjustments, even if you're outside of a lane. Or let's say you end up spinning anyway and need to recover from it.
It appears to me they're still working on 'fair weather' problems like a child running in front of the car, or someone drifting into your lane.
What would a self-driving car do when the traffic light is red and a police officer gives you a quick signal to go ahead, overriding the traffic light? Or the other way around, telling you to hold while the traffic light turns green?
How would a self-driving car guess whether a pedestrian intends to cross the street while he/she is still on the sidewalk? What if the pedestrian on the sidewalk is in reality waiting for the bus?
I got a telling-off from the police for running a red light (at 5 MPH, in the middle of nowhere with no other traffic about, and I was already stationary) because there was a police car sat behind me with blues and twos roaring. I said to him something along the lines of "I thought you had something important to be getting to, so I moved out of the way". At which the copper went red-faced and frothing before jumping back in his car and speeding off.
It was an odd encounter
"they'd just decide among themselves, taking into account area policies and emergency requirements, who gets to cross the junction next, and what, if anything, other vehicles need to do in order to make that happen."
Ahh, I can guess what the BMW and Audi algorithm will be for that one!
"In the UK, at least from a legal perspective, the red light overrules the desire of the emergency services to get past you. There have been cases of people getting nicked for running a red to allow this."
That's because in this case UK law is unfortunately an ass. Getting out of the way of an ambulance could literally mean the difference between life and death for a patient.
It's not about the emergency services. Around here the law says if there's simultaneously a working traffic light and a traffic agent actively directing traffic, the latter wins no matter what the lights say (and this happens all the time, whenever cops have nothing better to do I suppose). So yeah, it's quite on point asking whether an AI driver would recognize the often relatively subtle gestures those guys use to signal "your turn, get moving..."
Self-driving cars are just the next money-making opportunity for the Silicon Valley sociopaths, both in selling the tech and in running services with the wages saved by having no drivers. They don't care if it works 100%; they just care whether they can make money out of it. Almost no one in the general public is asking for this tech, and it's almost certainly a long way from being ready, but that won't stop the sociopaths from persuading governments to license it and people to buy it.
It doesn't need to work 100%
As soon as it's at the same level as your average human, it becomes safer for 50%* of drivers to have it drive them.
We also don't have any data at this point on the number of accidents avoided by self-driving cars, thanks to the things they are much better than humans at, which, generally speaking, cover the majority of situations. The point of this article is that the situations where you need a human are rare enough that we just don't have sufficient data, which is telling in itself.
*50% defined by ability, not by how good people think they are
"As soon as it's at the same level as your average human, it becomes safer for 50%* of drivers to have it drive them."
So by "average" you mean median?
Your population from which you're taking your average includes a lot of young, inexperienced drivers. They pull the average down. With experience they'll get beyond average. What you're saying is that an autonomous car is good enough if it's at the same level as a driver with some experience but less good than a driver with a few years of safe driving behind them. No thanks.
Your population from which you're taking your average includes a lot of young, inexperienced drivers. They pull the average down. With experience they'll get beyond average.
But they'll be replaced by equally inexperienced drivers, so the average is more or less constant. As for getting better… I think that depends upon the routine: we get skilled at the journeys we most regularly make.
I was always told that, statistically, you're more likely to have an accident closer to home or somewhere you are familiar with. It certainly holds for the 2 accidents I've had and most of the near misses in 40-odd years of driving in the UK and Europe. It definitely holds if you've been a long way from home and drive tired. Familiarity breeds carelessness. PP
What would a self-driving car do when the traffic light is red and a police officer makes you a quick sign to go ahead overriding the traffic light ?
I just saw this not happening, and it nearly led to an accident as a result. In many countries the emergency services are allowed to run a red light, but they do not have the right of way when doing so. More importantly, the new vehicles do not use such rule-based approaches but can be trained fairly easily using examples. I think the point of the research is that training for everything becomes exponentially more difficult, so different approaches are required.
There is no reason why a self-driving car can't understand the hand signals of a police officer, and then it's a simple case of giving hand signals priority over other rules like red/green lights. Waymo reported a while back that it can interpret the hand signals of cyclists better than humans can. Generally, spotting patterns better than humans should be one thing they do well.
If it really was a problem, then police could easily adapt by using something to signal to self-driving cars: a fluorescent or light-up band on their arm, like those already used in some countries.
Also, humans are not very good at spotting pedestrians running out in front of vehicles; self-driving cars are much better. They have full attention at all times when working, 360-degree visibility, and can spot and track movement and intent much quicker than human reflexes allow. A human doesn't know whether someone waiting at the side of the road is going to suddenly jump into the road or not, but a machine could use similar cues to the ones a human uses, just a lot quicker and all of the time.
I think there are much more difficult problems for self-driving cars to solve than those, however.
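The "priority of hand signals over other rules" idea amounts to a fixed precedence order over conflicting inputs. A minimal sketch, with an entirely hypothetical structure, nothing from any real system:

```python
# Resolve conflicting traffic inputs by a fixed precedence order:
# an officer's gesture beats a temporary sign, which beats the light.

PRECEDENCE = ["officer_gesture", "temporary_sign", "traffic_light", "default_rule"]

def resolve(signals):
    """signals maps source -> instruction; highest-precedence source wins."""
    for source in PRECEDENCE:
        if source in signals:
            return signals[source]
    return "proceed_with_caution"

# A red light, but an officer waving traffic through:
print(resolve({"traffic_light": "stop", "officer_gesture": "go"}))
```

The hard part, of course, is not this lookup but reliably populating `officer_gesture` from a camera feed in the first place.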
Am I the only one thinking this self-driving malarkey has too many variables? The only way it's going to work, in my humble opinion, is with some sort of track in or at the side of the road, with meatbags responsible if they get in the way. Add to that self-contained cars, not connected to the "net", every car run by computer, and you might have a winner. I think we have more chance of teleportation being invented first.
I'll just paste here (part of) a comment I posted a few days ago on a similar topic :
The only way to make safe driverless vehicles would be to put them on special lanes, perhaps specifically designed to avoid sharp angles; possibly with a system to keep them on trajectory at all times, like, some manner of metal railing? We could even mitigate the risk of collisions by having a bunch of them physically attached to each other. Oh, and then we could cut costs by devoting the propulsion function to a specialized unit. I think I'm on to something there, I'd better patent the idea before the Internet steals it!
> Self-driving cars won’t learn to drive well if they only copy human behaviour, according to Waymo
I hope it didn't take a PhD for someone there to figure that out. Meatsacks too often drive without reference to prevailing conditions, without anticipating what other meatsacks might be about to do, without a good night's sleep, with screaming kids in the back, paying attention to the radio/GPS/SMS/air conditioning knobs rather than the task at hand, with their seating position and mirrors just wrong, with boredom and wandering minds, without indicating, at inconsistent speeds, in the wrong lanes, towing too much for the rating of the vehicle, without maintaining their vehicles properly, often trained by other incompetent meatsacks who propagate the same bad habits.
As good as a human driver most definitely should not be considered the high watermark.
In other words, people generally don’t crash or veer off track enough for machines to learn from such mistakes. Neural networks are notoriously data hungry; it takes them millions of demonstrations in order to learn a specific task.
It depends on who your driving instructor was and where you learned to drive. If you learned to drive in Northern or Eastern Europe, in winter, there is a 50%+ chance that your instructor demonstrated to you both a spin and how to regain control. In fact, that is what people teach their kids to do in places in the USA too. Just not in California.
Reading the article, it is a major step forward, but I still would not trust a Waymo car to be anywhere near me with as little as 2 inches of snow on the road.
Yeah, anyplace near 'the grapevine' or Donner Pass is ripe for weather-related bad road conditions. Black ice in August? Ew.
A couple of times I had to sleep in my car waiting for I-5 South to open over 'the grapevine' (driving at night to get home by Monday), and one time the CHP escorted everyone behind the snow plows with black ice still on the road. There was stop/go traffic going up the hill with ice and mush underneath our tires. I saw cars pulled off to the side of the road like mine, their drivers apparently unable to get moving again on the icy-slick road after having to stop while pointing up a steep hill. So I wonder if the bots could be taught "the trick" of partially applying the brakes to keep one tire from spinning really fast, and thereby get some traction on the other tire...
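"The trick" is roughly what brake-based traction control already does on an open differential: brake the wheel that is spinning up, and torque is forced across to the wheel with grip. A toy control loop, with made-up thresholds and no relation to any real ECU code:

```python
# Toy brake-based traction control: if one driven wheel spins much
# faster than the other, brake the spinning one so the open diff
# sends torque to the wheel with grip. Thresholds are invented.

def traction_brake(left_rpm, right_rpm, ratio=1.5, floor=5):
    """Return which wheel (if any) to brake, based on wheel-speed ratio."""
    if left_rpm > ratio * max(right_rpm, floor):
        return "brake_left"     # left wheel spinning up on ice
    if right_rpm > ratio * max(left_rpm, floor):
        return "brake_right"
    return "no_action"

print(traction_brake(600, 50))   # left wheel spinning freely
print(traction_brake(100, 105))  # normal cornering difference: leave alone
```

The `floor` term is just a guard so a stationary wheel doesn't make the ratio blow up; a real system would filter the sensor signals and modulate brake pressure continuously rather than emit a one-shot decision.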
Bullshit. I learned in California.
Based on observing the Californians on Interstate 101 in a torrential downpour (I know, that's like the Second Coming nowadays), I can say that you are probably an exception. 99% of them have serious trouble recovering even from a minimal tail-wag. Not that they are alone in that. The UK in the snow is an even bigger clown show.
When Waymo's cars are unsure of what to do, they will resort to what a human would do in that uncertain situation. That can't possibly end badly.
Sounds like they've become stumped by the number of situations where their cars don't know what to do, or act incorrectly, but they are looking for the wrong fix. If you have a situation like a confusing construction detour human drivers will often drive hesitantly because we can't always figure out where the stupid construction workers are trying to get us to drive. Or someone has knocked over some of the cones and the path is no longer clear.
I'm sure everyone has seen those amusing pictures of a car that went the wrong way in a construction zone and ended up stuck in freshly poured concrete. Training an AI on humans who successfully avoided that isn't going to prevent it from happening if the AI is "bold" and doesn't act meekly, meekness being exactly what they're trying to train out. Because the people who did that had no doubt successfully driven through similar areas many times before the one time they got it wrong. And they wouldn't have got it wrong had they driven more haltingly, in a stop-and-start fashion! Sometimes you've got to do that, if there isn't a car ahead you can follow.
On my drive this morning (in the dark).
A country road, no pavements, grass verge in some areas (not all), no street lighting on lots of it, pedestrians on it walking to work (nearby factory) - some with Hi Viz gear, some less easy to see. All walking in the road.
So lots of use of hazard lights if a car is behind me, indicating and moving out to give pedestrians room (or slowing if there's oncoming traffic as the roads narrow; in some cases "flashing" communication with an oncoming driver who is slowing to let you pass a pedestrian with plenty of room).
Can't imagine much AI training on this
Next part of route, housing area , lots of parked cars, so a slalom of pulling in and out, again lots of "flashing" with other drivers to negotiate movements.
.. A thing they mentioned the AI had problems with was crashing into parked cars, so obviously it was not trained on what is (for many people) a very common driving scenario..
Goes without saying no lane divider markings on these roads
A simple drive where a human has no hassles, other than having to allow plenty of time for the delays incurred by the slowing and pulling in required, and (on the no-pavement stretch) knowing you need to drive well below the speed limit in the darkness, with some pedestrians (bizarrely) not in hi-viz.
.. And let's not even get onto the many horses you encounter on the country roads at later times of day: a whole lot of interaction with the riders that needs interpretation of facial expressions and hand gestures, as you need to go past them wide and slow.
I would like an AI car (I could do something productive or fun as a passenger) but have no confidence they will be up to dealing with "out in the sticks" driving for a very long time.
"On my drive this morning (in the dark). etc etc"
Which is why step 1 is to get cars to behave autonomously on motorways. There are still hazards, detours etc., but (there should be) no pedestrians or cyclists and very limited junctions. It would already be a huge win if I could drive my car to the motorway myself and then switch on self-driving. Have it linked to GPS with the planned destination so it alerts the driver before the exit and makes sure they are awake and ready to take control as soon as the exit is taken. And add a fuel sensor, combined with GPS data on petrol stations along the way, to stop when refuelling is necessary.
Off-motorway the conditions are much more complex and infinitely variable, but at least we can automate the simple parts that also tend to be the longest / most monotonous / boring.
..Agreed, no hi-viz, so it doesn't surprise me that those numpties don't know they should be walking on the side facing oncoming traffic. You should never have to creep up behind them and overtake. In an electric car they may never hear you, and they might step out right in front of you to avoid the horse dung recently dropped by the animal being escorted by the little girl from the stables up the road. Not from personal experience much.... PP
They trained it on 60 days of data from a previously trained driver and it failed. Where is the revelation there? Take anyone off the street who has never driven or been in a car, let them ride as a passenger for 60 days, then give them the car. I suspect equal failure rates. If they can train it with 10 or 20 years of driving data, they may get a little closer to success. Even after driving for around 40 years in all types of terrain and conditions, I still get caught off guard now and then.
They are asking too much of a single AI car, expecting it to make every decision on its own.
It would be beneficial to let AI cars share information with each other, to coordinate actions and give heads-up alerts to hazards.
There is safety in the herd. But they must be smart enough to avoid the lemming cliffs.
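The herd idea is essentially publish/subscribe: one car broadcasts a hazard and every subscribed car knows about it immediately. A minimal, entirely hypothetical sketch (no real vehicle-to-vehicle protocol looks like this):

```python
# Minimal pub/sub sketch of cars sharing hazard reports: one broadcast
# updates every subscriber at once, unlike human drivers who each have
# to discover the hazard for themselves.

class HazardBus:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, car):
        self.subscribers.append(car)

    def broadcast(self, hazard):
        for car in self.subscribers:
            car.known_hazards.add(hazard)

class Car:
    def __init__(self, name):
        self.name = name
        self.known_hazards = set()

bus = HazardBus()
cars = [Car(f"car{i}") for i in range(3)]
for c in cars:
    bus.subscribe(c)

# One car spots black ice; every car on the bus now knows about it.
bus.broadcast("black ice, I-5 mile 204")
print(all("black ice, I-5 mile 204" in c.known_hazards for c in cars))
```

The lemming-cliff worry maps directly onto this design: a single bogus broadcast reaches the whole herd just as fast as a genuine one, so trust and validation of reports become the real problem.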
Biting the hand that feeds IT © 1998–2019