Warning flags
I think it's time to recall the early-days requirement for a jogging attendant running ahead of the vehicle, waving a red flag on a pole to warn of the approaching danger.
The governor of Washington has green-lit the testing of self-driving cars on the US state's public roads, with or without human operators, calling the technology "foolproof." Gov. Jay Inslee this week signed an executive order (PDF) that called for new rules on autonomous car testing and, for the first time, provisions to test …
"Listen, car. Did you run over that doggo there and then carry on nonchalantly as if nothing had happened?"
"Let me put it this way, Mr. Driver: No 9000 car has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error."
"The problem with your thesis is that code is ultimately written by humans."
Of course, just as there are huge differences in how safely people drive, there are huge differences in how safely people code. The self-driving car industry (or what little of it already exists) has the great advantage of a concept that still appeals to people, so it still attracts good programmers who are sick of recognising product images to place ads more efficiently.
However, in the coming years, as more and more companies enter that market, that advantage might fade, leaving only average or even sub-average programmers in such companies. Programmers will also realise that, although self-driving cars sound cool, any car-based future is likely to fail, simply because we don't have the resources for everyone to have a car.
"Of course, just as there are huge differences in how safely people drive, there are huge differences in how safely people code."
Some of us have driven for many years over many, many thousands of miles without being involved in an accident while others are the IoT security of driving.
Have you ever heard of a politician... who wasn't technically illiterate
"Have you ever heard of a politician... who wasn't technically illiterate
Be careful what you wish for."
Ah, Thatcher. Chemist turned barrister turned politician.
The "scientist" who decreed that the price of a primary source of energy, gas, should be raised to make a secondary/tertiary form of energy, electricity, more competitive.
It only made sense once the privatisation of British Gas was announced.
-- between a blowing plastic bag in the road and a running child in the road?
"If you think self-driving cars can't get here soon enough, you're not alone. But programming computers to recognize objects is very technically challenging, especially since scientists don't fully understand how our own brains do it."
MedicalXPress.com article on perception.
"How does your brain perceive the difference --
-- between a blowing plastic bag in the road and a running child in the road?"
I understand your point, but applying the brakes seems like a reasonable response under both of these circumstances until they really can tell the difference.
Also, I suspect this is generally something that people are not very good at either. Maybe we can't tell. Plenty of children darting into roads get hit. Plastic-bag impacts aren't recorded...
Additionally, I'd hope the software can track 'unknown objects' while they are still on the sidewalk (or when they have just left the sidewalk, i.e. 100% awareness), so that the car could slow down in anticipation of the plastic bag / child entering the street. That means less emergency braking.
Also, automated cars might actually drive fairly slowly in an urban setting, e.g. 15-20 mph. In the UK we have signs that say "20 is plenty". While I wouldn't want to be run over at 20 mph (roughly a 5% chance of fatality), it's far more survivable than being hit at 30 mph (roughly 45%). And hopefully the car would be stamping on the brakes anyway.
Either way, automated cars erring on the side of caution in an urban setting seems reasonable, and who's to say that people are any better at not hitting children than plastic bags.
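The reasoning in that reply (early detection means braking starts sooner, so the car either stops short of the hazard or hits it at a much lower, more survivable speed) can be sketched with basic kinematics. This is a toy calculation, not how any real autonomous vehicle computes anything: the 1-second reaction delay and 7 m/s² deceleration are invented illustrative figures.

```python
import math

MPH_TO_MS = 0.44704  # miles per hour -> metres per second

def impact_speed_mph(speed_mph: float, hazard_m: float,
                     reaction_s: float = 1.0, decel: float = 7.0) -> float:
    """Speed (mph) at which the car reaches the hazard; 0 if it stops first.

    Assumes the car travels at constant speed for `reaction_s` seconds
    (detection/decision delay), then brakes at a constant `decel` m/s^2.
    All parameter values are illustrative assumptions, not real AV specs.
    """
    v0 = speed_mph * MPH_TO_MS
    braking_distance = hazard_m - v0 * reaction_s  # road left once braking starts
    if braking_distance <= 0:
        return speed_mph                 # hazard reached before braking even begins
    v_squared = v0 * v0 - 2.0 * decel * braking_distance
    if v_squared <= 0:
        return 0.0                       # comes to rest before the hazard
    return math.sqrt(v_squared) / MPH_TO_MS
```

With these made-up numbers, a car doing 20 mph stops well short of a hazard 15 m away, while at 30 mph it still arrives at roughly 28 mph; spotting the same hazard 30 m out (the "track it while it's still on the sidewalk" case) lets even the 30 mph car stop entirely, which is the commenter's point about anticipation beating emergency braking.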
"How does your brain perceive the difference --
-- between a blowing plastic bag in the road and a running child in the road?"
I understand your point, but applying the brakes seems like a reasonable response under both of these circumstances until they really can tell the difference.
Well, not necessarily. If a child steps out into the road in front of me I'll hit the brakes and pull whatever stunt is necessary to avoid them. If a plastic bag blows out into the road in front of me I'll look in the mirror first: if that bloody great truck with the driver on his phone behind me is too close to stop before totalling me, I'll be a little more circumspect about braking. I don't want to die, but I'll risk dying to avoid a dumb kid. I don't want to cause an accident for the sake of a bit of wind-blown garbage.
We've already had a discussion here about the ethics of a computer deciding between killing a pedestrian (a non-paying human) and killing its occupants (paying customers). Those choices will need to be made, and a human driver makes them in the heat of the moment, while a computer program will have had to be taught whom it should kill, in order of preference.
"We've already had a discussion here about the ethics of a computer deciding between killing a pedestrian (a non-paying human) and killing its occupants (paying customers), those choices will need to be made and a human driver will make them in the heat of the moment while a computer program will have had to be taught who it should kill in order of preference."
Ethics? Odds are that the choices will be made based on factors like who has a premium account with the company operating* the autonomous vehicle(s) involved and who doesn't.
* You're not assuming you'll be able to buy your own robocar, are you? It's going to be "Mobility as a Service" or something like that. Even if you are the sole person using one particular car.
If a child steps out into the road in front of me I'll hit the brakes and pull what ever stunt is necessary to avoid them.
An autonomous car is being tested on the roads of Washington when the Governor, Jay Inslee, for it is he, steps onto the road. The autonomous car decides the best course of action is to swerve to avoid him, when a child steps onto the path the vehicle would be taking moments later. What should it do?
If it detects that a collision is unavoidable, it should go into damage-minimisation mode and alert the emergency services before impact (if that impact is guaranteed).
Next, calculate the potential injuries: adult vs child vs driver. The driver has a protective box around them; can we put them into something without hurting anyone else?
What vehicle are we in? A child is going under a car, an adult more likely over it. What's following behind to hit the adult afterwards? Can we hit the adult at an angle that'll knock them to the side (pavement) rather than leave them in the road?
I'd work down those lines. While this is a worst case scenario, it'll still play out better than a meatsack who will instinctively avoid each obstacle as they arise without weighting them.
The more defined the environment the device operates in, the easier it is to understand the edge cases, and thus to write software that can handle them. Cars, however, do not operate in a well-defined environment, so there are likely situations that are either not considered or badly handled. The examples you noted are either fairly well defined or still have a human in the loop.
Even if all cars were automated, they still have to deal with pedestrians, cyclists, animals etc. that are not automated.
The only way to be 100% certain would be to only run these vehicles in a controlled environment where nothing else could interfere and there is no chance of unpredictable situations.
> Even if all cars were automated, they still have to deal with pedestrians
And if I know that an auto-car is going to stop then I'll just walk across the road. WGAF!
I'm not waiting for no silicon driver. If the resulting emergency stop causes the meat payload to spill their coffee WGAF! to that too.
If motor-vehicle driving software were subject to the combination of testing, checks and analysis, and the acceptance of responsibility, that I know (having worked on it) safety-critical medical device software is, I might feel more sympathy for self-driving cars. But my impression is that this software is not carefully planned and subject to FDA-style regulation the way all medical device software is.
Feel free to enlighten me: which IEC software standard do the AI systems of these self-driving cars comply with?
They don't need to be foolproof; they only need to be better than human drivers. And they definitely will be, massively better; we just don't know whether that's 1 or 10 or 50 years away.
When (not if) the risk becomes lower than that of a human driver, insurance will cost less. It's up to society to make insurance work, and to stop lawyers trying to make money by suing the programmer.
So, Mr. Governor is certain that the robocar is absolutely trustworthy. I wonder what will happen when a robocar inevitably runs someone over, because statistically the chance of this is 100% regardless of how good the software is.
Wouldn't it be better to just recognise that a robocar is successful not if it never causes an accident, but rather if it causes *fewer accidents than humans*?
Foolproof was probably not the word that best captures the sentiment he wanted to put across.
If you read the quote, he was trying to say that an automated car doesn't drive drunk, on drugs, on the phone, tired, distracted by dogs/cats/kids/significant others etc., or without due care and attention, all of which are 'foolish' things for a human driver to do.