Not entirely sure I'd want cars learning to drive from some of the nuggets I regularly encounter on the roads.
Not saying I'm a saint. We all make poor judgements sometimes. Hopefully the aggregate of good drivers will outweigh the bad.
Self-driving cars are "almost a solved problem," Tesla Motors boss Elon Musk told the crowds at Nvidia's GPU Technology Conference in San Jose, California. But he fears the on-board computers may be too good, and ultimately encourage laws that force people to give up their steering wheels. He added: "We’ll take autonomous cars …
Given the number of people driving around today in the fog with either just sidelights or no lights at all, I tend to agree. And that goes for the honk I got from a guy driving at about 80 on a 60 road, in the fog. Note to you: if I cannot see you in the fog, and you are driving that fast, what do you expect will happen when I need to overtake a cyclist?
watch the movie Deathrace a few hundred times? That's just cruel. Once was one time too many for me.
Anyway, I'd prefer to train them on those movies from the '70s and early '80s featuring evil possessed vehicles, and see what they pick up from those.
The ability to use their bonnets as a large mouth and a taste for human blood, most likely.
When this really does become a thing there are all kinds of interesting points to iron out:
- Accident liability. Are you responsible if your car is at fault in a crash, or is your car's AI?
- Security. The possibility of people being able to tamper with things that drive you around is pretty scary (especially if you're a person of note).
- City centre traffic. London and New York are bad; Mumbai and Beijing are worse. I'd worry that any AI risk averse enough to avoid an accident would grind to a halt without a significant amount of rather scary real world testing.
So far, all I ever see is talk about these self-driving cars in nice sunny weather on dry roads. My dog can drive under those conditions.
I want to see tests of these things on unploughed, snow-covered roads where no lane markings are visible. I want to see tests of these things on unploughed, snow-covered hilly roads in blizzard conditions. I want to see tests of these things negotiating an icy steep hill in a narrow urban street, with cars parked on both sides half buried in snow. I want to see tests of these things on a curve at freeway speeds when they hit black ice. I want to see tests of these things when the snow is so high that every corner is a blind corner, as all the drivers in New England have had to deal with for a couple of months this winter. I also want to see how these things behave hitting a pothole at high speed on an icy freeway while it's sleeting and a tire blows out. Another fun experience I've had in the last year. Those are things I have to face every winter.
I want to see how these things behave in blinding rain. I want to see how these things behave when they hydroplane in blinding rain. I want to see how these things behave hitting a pothole at freeway speeds in blinding rain, and blowing a tire out. Those are things I have to face every summer.
And given how, in the US at least, a lot of roads are in the boonies with no connectivity, I don't want to hear any BS about "AI in the cloud". The car should handle everything on its own, just like a human.
Training the car by having it watch what normal people do (crash a lot) probably isn't the way to go.
It sounds like you drive quickly in poor conditions a lot and you're not very good at identifying hazards - you claim to hit at least two potholes every year, hard enough to burst a tyre.
I think the key defensive tactic a computer and a human would employ in these situations is not to drive like a bloody nutter in the first place.
Maybe you'd be safer in a self-driving car?
> I want to see how these things behave in blinding rain. I want to see how these things behave when they hydroplane in blinding rain. I want to see how these things behave hitting a pothole at freeway speeds in blinding rain, and blowing a tire out. Those are things I have to face every summer.
Humans are spectacularly bad at driving in these kinds of conditions, as evidenced by the sharp increase in accidents when bad weather comes our way.
Most accidents in bad weather are caused by poor driving, such as driving too fast, not paying attention or driving too close to the car in front.
Once you've sorted out the basic logic and image processing, there's every reason to believe that computer driven cars would *far* exceed the capabilities of even the best drivers.
Given that humans can't see in all directions at the same time (like a computer car could) and even in infra-red or ultra-violet I really don't see the practical justification that automated cars wouldn't be *much* safer than their meaty alternatives.
I think the point the original commentator was making is that the weather in North America can be very hard to predict. Ice and snow can be hard for the human and could be impossible for the electronic driver.
It's not all about speed and driving style, although they are important. Does the vehicle have winter tyres? Are the roads cleared? Has the snow been polished at the intersection so you have to pull away very slowly or you'll spin, and how will the car detect this when it is covered in fresh snow? In the summer your car may be able to easily get up a certain hill; in the winter the five cars in front of you may have polished the ice. How does the electronic car detect this?
My opinion is that there will be electronic zones for driving and non-electronic zones. Combine this with tiered driving licenses: with one type, you can only 'drive' a car in an electronic zone, while an advanced license allows you to drive in a non-electronic zone.
"I think the point the original commentator was making is that the weather in North America can be very hard to predict. Ice and snow can be hard for the human and could be impossible for the electronic driver."
Why would it be impossible for an electronic driver? Unless you can describe in detail situations no sensor would be able to see and where the only way one can survive intact is by instinct or even blind luck? The article notes being able to see through rain, and if snow is blinding, perhaps the prudent course a computer would take is to slow to a crawl or even stop (something humans are averse to doing).
The nightmare scenario I keep thinking about is rush hour in an overcrowded Asian city such as downtown Manila, where pedestrians and vehicles of all sorts are everywhere (including many where automation is impossible, like bicycles), road markings aren't really honored, and time is of the essence (perhaps because fuel is low).
Already have that in the USA where a higher class license is required for Heavy Trucks or for Motorcycles over 150cc [different licensing classes].
Personally I'd like to see ~everybody~ have to start out with a Scooter License [under 150cc] so they can learn the rules of the road without having a 2000lb plus vehicle "under their control" and that when it gets out of control can become a lethal weapon.
Having to spend a few months as the most vulnerable vehicle on the road tends to focus a teenager's mind on the task at hand much more effectively than the modern crash-cage/entertainment cocoon on 4 wheels.
"I want to see how these things behave in blinding rain. I want to see how these things behave when they hydroplane in blinding rain. I want to see how these things behave hitting a pothole at freeway speeds in blinding rain, and blowing a tire out. Those are things I have to face every summer."
I feel sorry for you that you have to buy a new tyre every summer.
"Accident liability. Are you responsible if your car is at fault in a crash, or is your car's AI?"
It doesn't really matter, you will just have an insurance policy which will pay out for the damage caused by your car. Firstly, insurance should be massively cheaper when self driving cars become universal due to the reduced accident rate. Secondly any manufacturers who have accident prone cars will have the insurance rates hiked right up until they either fix the issue or go out of business. Market economics will determine reliable self-driving cars. Same will happen with manufacturers liability insurance.
Yeah, swell. Except it won't be the manufacturers who will be picking up the tab. It will be the "insured" driver (oh, you are going to actually make sure everyone on the road has insurance, right?). And just because there MIGHT be fewer accidents, doesn't mean that rip-off insurance won't be any less expensive than now. Market economics will perhaps determine whether people actually BUY or can AFFORD a self-driving car; but if the costs are not realistic, people simply won't.
> - Accident liability. Are you responsible if your car is at fault in a crash, or is your car's AI?
Liability is already established in law.
Car manufacturers have large legal departments which decide when to pay out and when to recall cars. Your AI will be no different from a fuel line or braking system.
There are already some US states that have no fault auto insurance, I expect this will become universal when people are no longer driving. Fault is unimportant to an individual, they just want their losses to be covered.
Fault, and what remedies are required should be a question for regulators. I see autocar accidents being investigated like airplane accidents. Figure out whether the fault was a mechanical failure, software failure, how much conditions or lax maintenance contributed, etc. and order fixes/recalls where necessary.
Accident liability: That's a perennial complaint, but I'm struggling to see it...
Everyone has 3rd party insurance by law, based on make of car, driver details and driver history. In future, insurance companies will have much more accurate data for Volvo self driving accidents per mile than 17 year old little Johnnie with his bird by his side. Where's the difference in process?
If you are saying "but who am I going to lock up for dangerous driving", why would you? Dangerous means without due care and attention. The car conforms to safety testing; sometimes it will fail, just like brakes sometimes fail; that doesn't mean anyone goes to jail.
Security: I know of three friends who had brake or oil lines cut by vandals. Bad neighbourhoods. And?
City centre traffic: OK, I agree, it's harder for an unconstrained AI. But it's a lot easier, and more free-flowing, to platoon along the high street, and even easier to coordinate at traffic lights using long-range 802.11p. Swings, meet roundabouts :)
"Last I saw, robots don't move around at 70mph"
The one driving my tube train is rated to do 75mph. The one flying my plane cruises at 500mph and can land safely in fog. As we know, it's tube drivers and pilots who fail catastrophically and kill. But, somehow, we feel uneasy if there isn't a person up front who can open the doors or give us the weather forecast for our destination.
Trains and planes are both in a fairly predictable environment where the rules and conventions are pretty much always followed by other users. And the other users are usually miles away. Things don't generally suddenly appear in front of you.
Cars are in an environment where the rules and conventions are often broken, other users are trying to share almost the same space, and things can and do suddenly jump out in front of you.
> Trains and planes are both in a fairly predictable environment where the rules and conventions are pretty much always followed by other users.
Speaking as a former commercial pilot: the systems do not rely at all on any supposed "predictability" of the environment. What makes automation safe in that context is that we, the pilots, were trained and were in theory thoroughly familiar with the systems, their capabilities, their behaviour, failure modes, etc., so we could supervise them effectively. But even if it is the autopilot sending the commands to the control surfaces, etc., ultimately it is the pilots who are always in control (and we respond with our licences, if not our lives, if something goes seriously wrong).
The article makes a mention of the possibility of a special licence being required to drive cars above a certain level of automation. That makes a lot of sense. If you think of the technology in current cars (ACC being perhaps the most obvious example), it already requires a degree of familiarity to know when to let it do its thing, and know if it's working correctly, and when to take over.
Robots in a factory doing precisely defined work are one thing, and they work really well. It's the uncertainty in what a real road will throw at the system that matters, and how it copes.
Also I think it is moronic to have the assumption of "phone home" operation. What if you lose connectivity or the central servers go down for whatever reason? Does your car just stop?
So then what if someone simply jams the radio for a short while to stop you and rob you?
> Also I think it is moronic to have the assumption of "phone home" operation. What if you lose connectivity or the central servers go down for whatever reason? Does your car just stop?
My reading was the "phone home" part was for the processing of experience data for bulk improvement of the training of these things, not for the actual running of the machine.
Anyone seriously suggesting implementing that I think would be laughed out of the room.
Well, if it resembles auto-pilot systems (such as those on the Airbus), the correct fallback would be manual control by the driver. Autonomous just means that it can drive by itself, not that it must.
Of course, it should really broadcast a "meatbag controlled" signal to all other cars in the area, just as a courtesy.
"Well, if it resembles auto-pilot systems (such as those on the Airbus), the correct fall-back would be manual control by the driver"
Yes, and look how well that worked out for AF447 after all!
See, that is the problem: if it can't cope near-perfectly with anything on the roads, you're screwed. You won't be sitting there with full concentration all the time "just in case"; otherwise you might as well be driving. And in the event of an unhandled exception, a car has seconds to impact, not the minute or two the startled pilots of AF447 had.
> Yes, and look how well that worked out for AF447 after all!
Paul, unless you are a qualified airline pilot, type rated in the Airbus family, and with the requisite experience, you do not understand what happened in that incident. No matter how many newspaper articles you read, how many documentaries you've watched, how much Microsoft sim flying you've done, or how clever you think you are in general. You simply do not have the necessary background to understand what went on and how it happened.
Take that from a former airline pilot, but the same thing applies to any sufficiently complex technical field.
"You simply do not have the necessary background to understand what went on and how it happened."
I did not claim that I would have done any better, nor that I understand the details of how the pilots' reactions to various conflicting warnings and instrument inconsistencies led them to not recover the plane from stalling.
But what I am absolutely certain of is that having an autonomous system throw the controls back to humans under "difficult" conditions is a recipe for disaster. And equally, for cars, the conditions that are unlikely to be handled well, such as an unexpected conflict of sensors while approaching a junction, blind bend, etc., will leave the human operator with bugger-all time to come to terms with being in control, let alone to appraise the situation and react accordingly.
So why even consider that case? Maybe so the car manufacturers can pin the blame for out-of-capability accidents upon the meat sack failing to drive correctly...
"But what I am absolutely certain of is that having an autonomous system throw the controls back to humans under "difficult" conditions is a recipe for disaster. And equally, for cars, the conditions that are unlikely to be handled well, such as an unexpected conflict of sensors while approaching a junction, blind bend, etc., will leave the human operator with bugger-all time to come to terms with being in control, let alone to appraise the situation and react accordingly."
I believe drivers are taught a manoeuvre known as the "emergency stop" to deal with such incidents. An advantage a car has over a flying thing is that such a thing is even possible. Why would a self-driving car not just implement an emergency stop in such situations?
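As a toy sketch of that argument (every name here is invented for illustration, not any real vehicle API), the "when in doubt, emergency stop" fallback is just a priority rule:

```python
# Toy sketch of the "when in doubt, stop" fallback argued above.
# All names are invented for illustration; real systems are far more involved.
from dataclasses import dataclass

@dataclass
class SensorState:
    readings_agree: bool  # do redundant sensors agree with each other?
    path_clear: bool      # is the planned path free of obstacles?

def choose_action(state: SensorState) -> str:
    """Prefer a controlled emergency stop over handing control back to a
    startled human who has only seconds to react."""
    if not state.readings_agree:
        return "emergency_stop"  # conflicting sensors: brake in a straight line
    if not state.path_clear:
        return "emergency_stop"  # obstacle ahead: brake
    return "continue"

print(choose_action(SensorState(readings_agree=False, path_clear=True)))  # emergency_stop
```

The point of the sketch is only that "stop" is always a reachable state for a car, unlike for an aircraft, so the safe default never requires a human in the loop.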
Oh, don't be so condescending, please.
You are right, you need to be very good in a field to understand the fine details & implications of technical issues. However, the general idea, as analyzed by experts, is usually good enough to form an opinion which isn't totally unreasonable. Managers have to do this all the time with techies and some of them are actually good at it (many are not, so your point remains valid as well).
Far as I understand, AF447 had the following problems: sensor failure, pilots unaware of that particular possibility and not trained to compensate for it in a context of limited situational awareness with conflicting sensor readings. Both aspects probably needed addressing. Is that a totally unwarranted conclusion?
Now, I happen to agree with the OP's contention. If the AI knows that it is entering failure mode and throws it back to you well in advance, then OK, by all means the driver can be tapped. She can either park the car by the side of the road & call a taxi. Or she can drive it home. Let's say something like "conditions are too cluttered with pedestrians, can't resolve" in an after-match situation where pedestrians are streaming out of a stadium.
If on the other hand the AI has a split second indication of failure, as in "oh crap, there's no way I am dodging that pedestrian who leaped off the sidewalk", then, no, the OP is correct and there is no benefit to fall back to the driver. She won't have time. (Doesn't mean she shouldn't be allowed to drive the car the rest of time).
But in a car, he's correct that you can't shunt off out-of-envelope conditions to the driver-turned-passenger at the last split second; the AI would have to know it's out of its depth and request manual control well in advance.
Commercial pilots may have to take over from autopilot in a split second, but they are already well in the loop when entering critical phases such as takeoff and landing. If it is an unexpected emergency then they are usually at high enough altitude that they have some time to react. I agree with you, he's wrong about his AF447 conclusions, the pilots are the safety fallback, and an isolated disaster does not invalidate the pilots' role. But he's right that civilian drivers shouldn't be put in the same position of critical fallback at short notice, both by timing and by their training.
They don't have to jam the radio to stop you. The AI cars are designed to always stop for an object in front of them, so all a crim has to do is step out in front of your (ignorant) AI car and it will very nicely stop so you can be robbed or kidnapped. I can imagine that executives and big-wigs everywhere are going to have fun with this concept.
I wonder if FUD can be used to power the car?
Seriously, everyone seems to be so focused on the edge cases that they ignore that a great deal of the uncertainty in human driving is the other humans.
@earl grey: I had thought about this, and it seems that initially these cars will drive only where there are not *supposed* to be humans, e.g. motorways and large roads. Any person "jumping in front of a car" will likely be arrested or (more likely) sent directly to hospital.
I have proposed this on El Reg before but I expect these cars will come with "manual" vs "auto" operating modes.
Specifically, if you are in "auto" mode and grab the wheel the car will try and do the absolute safest thing - stop or remove vehicle from traffic etc... More importantly, the insurance for the car will go from $30/mth to $3000/mth.
Hence, rich people will have cars that don't stop for humans in the road as they'll pay $3000/mth to have a chauffeur.
I'm all for the tech, but it is clearly dual-use...
Folk who care about edge cases are the sort you want working on safety-critical stuff! Typically they are the ones to trust your well-being to. As for reliability, the current US death rate is around 1-2 per 100 million miles driven, or about 150-250 per million vehicle-years.
So an autonomous car has to be pretty good to match that. Sure, humans do really dumb things, and they are easily distracted, etc., which probably covers a good 90% or so of those deaths. But cars have to at least match that 2E-8 deaths/mile figure under real-world conditions to be taken seriously.
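Those figures are easy to sanity-check with back-of-envelope arithmetic (the annual totals below are rough public estimates, and the per-vehicle mileage is an assumption, not a figure from the thread):

```python
# Back-of-envelope check of the quoted US road-death rates.
deaths_per_year = 33_500         # 2012 figure cited elsewhere in the thread
vehicle_miles_per_year = 3.0e12  # roughly 3 trillion vehicle-miles travelled annually (approx.)
miles_per_vehicle_year = 12_000  # assumed average annual mileage per vehicle

deaths_per_mile = deaths_per_year / vehicle_miles_per_year
per_million_vehicle_years = deaths_per_mile * miles_per_vehicle_year * 1_000_000

print(f"{deaths_per_mile:.1e} deaths per mile")  # on the order of 1e-8
print(f"{per_million_vehicle_years:.0f} deaths per million vehicle-years")
```

Under these assumptions the per-mile rate lands in the quoted 1-2 per 100 million miles, and the per-vehicle-year figure comes out near the bottom of the 150-250 range.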
Doesn't require somebody stepping in front of the vehicle, just a truck with somebody in the back to act as the "kicker", a trash bin, and enough weight in the bin to make sure that it sticks when it lands. As an alternative, a large bag full of wet leaves would probably do the same "stop the vehicle" trick.
" I know driving standards are pretty poor, but I still don't trust a 'puter to do it instead of a human."
Almost every accident is down to human error. Some are down to our inherent sensory limitations.
Self driving vehicles can operate on a co-operative basis with other vehicles, taking input from a much broader range of sensing devices, networked between vehicles. They'll make reliable decisions based on statistically proven outcomes and won't get tired or cranky or drunk.
More people die on the roads every month in the US than were killed in 9/11 - in 2012, 33.5k people died in road accidents. I'll take the computer every time.
What happens when it encounters something that it hasn't been programmed to recognise and avoid? Hopefully it will default to 'get-the-hell-out-of-the-way' mode, but given the number of patches that software requires to 'fix' all the undocumented features, that's not a given.
Look, they've been building cars for over 100 years now and they still can't get it right. Your average bean-counter is trying to cheapen every part to the least amount possible and still shove that barge of shite out the door to sell to you. In the US alone there are MILLIONS of recalls every year for one problem after another. Until we can get manufacturing to the point where this simply doesn't happen, you can forget me ever getting into a self-driving car. I want to be in control when the car goes BOOM!
The only way I see this working (in all honesty) is for it to be universal, and immediate. I.e.: December 31st 20XX - any driver worth his salt heads out on their last chance power drive on the road to nowhere.
January 1st 20XX+1 - all cars are controlled by computer: NO humans involved whatsoever. All liability is now in the hands (brains?) of the corporations, who are legally held responsible for every life on the roads.
It's the only way that it could work - human drivers are too unpredictable for software to keep up with - especially if it has to 'phone home' whenever it comes across a previously un-anticipated situation.
I believe there was a case where someone drove into the back of a Google car and Google had all the telemetry to prove they were not at fault.
Humans will still be allowed to drive; the self-driving cars will have all the data on what happened to show who was at fault (99% of the time, the human).
Road deaths are currently at such a terrible rate that self-driving cars have a very low bar to get over (but they will need to clear it by a huge margin).
"The only way I see this working (in all honesty) is for it to be universal, and immediate. I.e.: December 31st 20XX - any driver worth his salt heads out on their last chance power drive on the road to nowhere"
Agreed - but this would be logistically and financially impossible, hence I really don't see how it can be made workable.
Self-drive is not a solved problem. There are so many variables that can occur during a normal journey (particularly in urban environments) that a self drive car cannot possibly arrive at the correct solution every time.
It'll end up like voice recognition. Even if it gets things right 90% of the time, that remaining 10% will be so annoying that people will turn it off or only use it in places where it works well. It's trivial to envisage situations where self drive would utterly screw things up or do something annoying for the driver, other road users or pedestrians.
"a self drive car cannot possibly arrive at the correct solution every time."
That's a bold claim.
It's far more likely to do it reliably and regularly than a human. It will take statistically proven decisions, and it will do that from a much broader array of sensing inputs than a human could.
I struggle to see how people who work in computing could see this as unsolvable. It's simply an engineering problem: the right inputs processed at the right time, matched against a statistically driven decision tree. How is any of that impossible?
A large number of possible input variables just needs more resources than a small number - it doesn't make the problem unsolvable.
"Whenever I hear "it's just an engineering problem" I draw my own conclusions concerning the speaker"
I think it's pretty widely used to mean that a problem isn't impossible - the laws of physics don't preclude the thing under discussion being done, the science that underpins any solution is known and that applying sufficient resources will thus solve it.
Once it's known that something is possible, the discussion moves to how engineering principles should be applied to arrive at a solution. If I was addressing a group of engineers in a business context it would be shorthand as well for telling them that budget and resource issues are taken care of - go do your thing.
[...] the science that underpins any solution is known and that applying sufficient resources will thus solve it.
Ah, yes, that ol' chestnut "sufficient resources". The one thing that our Corporate Overlords will do everything in their power to not provide, because...well, providing "sufficient resources" is bad for business, as it doesn't increase Shareholder Value™.
You really aren't from around here, are you?
"Ah, yes, that ol' chestnut "sufficient resources". The one thing that our Corporate Overlords will do everything in their power to not provide, because...well, providing "sufficient resources" is bad for business, as it doesn't increase Shareholder Value™.
You really aren't from around here, are you?"
Not finishing or launching a project because of penny-pinching is even worse for shareholder value. Why would any leader deliberately not provide the resources required - man or machine - to get the job done?
You write a business case and it gets approved or not. The time to penny pinch is before the approval not after - if you can't afford the project don't start it. Elon Musk doesn't strike me as a leader who starves his teams of whatever it is they need to get something out of the door.
"It will take statistically proven decisions, and it will do that from a much broader array of sensing inputs than a human could."
With one massive drawback. Sensors go wrong, stop working, wear out, get dirty, and many other things can cause them to give an invalid reading or none at all. Then you have the average commentard who believes that Knangjung Ditchfinders at £50 a tyre are just as good as the brand-name tyre at £300 each. Then that same commentard takes it for a service outside the dealer network because his mate can do it a bit cheaper ("it's just the same as the dealer but half the cost"), then misses a critical hardware update for the braking sensor, or his mate doesn't realise that the sensors are a serviceable part and so fails to check them at all.
I could go on naming issues, and most people are right in that the failings come from the human rather than the machine side, but the fact is that people are lazy and will always try to save costs wherever possible. I still get worried when I see people not using brand-name tyres (or worse, having different brands on each wheel), so I will never be convinced that people can be trusted to service their autonomous car correctly.
"but the fact is that people are lazy and will always try to save costs wherever possible."
I think the ownership model changes when self driving vehicles become commonplace. Why would you need to own one? I know there are some specific use cases where having access to a specific vehicle is important - my son for example is a wheelchair user and has lots of kit to carry around with him and it's far easier to just leave most of that kit in the car.
In the main though, why own something that gets used for a tiny portion of the day? I think a lot of the legal questions about using these things get solved in a lease model too - even if you lease something to be permanently available to you, having it owned and maintained by the manufacturer gets rid of all the problems you list.
As for duff kit and dirty sensors - I'd pretty much expect these vehicles to refuse to depart if they don't have a defined minimum set of kit available, and I'd expect them to take themselves off to be fixed or call for service when things do go awry.
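That "refuse to depart" check is cheap to express in software. As an illustrative sketch (the sensor names are invented, not from any real vehicle):

```python
# Illustrative pre-departure check: the vehicle refuses to move unless a
# defined minimum set of sensors reports healthy. Sensor names are invented.
REQUIRED_SENSORS = {"lidar", "front_radar", "front_camera", "wheel_speed"}

def may_depart(healthy_sensors):
    """Return True only if every required sensor is present and healthy."""
    missing = REQUIRED_SENSORS - set(healthy_sensors)
    if missing:
        print(f"Refusing to depart; faulty or missing: {sorted(missing)}")
        return False
    return True

may_depart(["lidar", "front_camera"])  # prints a refusal and returns False
```

The harder engineering problem, as the reply below notes, is the sensor that passes this check but degrades intermittently once under way.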
> I think the ownership model changes when self driving vehicles become commonplace. Why would you need to own one?
Because some people might prefer to sit on luxury leather rather than wipe clean, vomit and disinfectant resistant simulated-leather fitted to a shared vehicle.
Okay it won't be that bad but think bus/train seats versus what you have in your own car now.
"As for duff kit and dirty sensors - I'd pretty much expect these vehicles to refuse to depart if they don't have a defined minimum set of kit available, and I'd expect them to take themselves off to be fixed or call for service when things do go awry."
I am not really talking about total sensor failure, as I agree the car simply wouldn't allow you to move off if it had a problem like that, but what about the many occasions where some dirt or a frayed wire causes the sensor to appear to work fine to begin with, but then randomly give out duff signals when going over a bump or around a corner?
"I think the ownership model changes when self driving vehicles become commonplace. Why would you need to own one? In the main though, why own something that gets used for a tiny portion of the day?"
For all the same reasons people currently own cars. Not all people like the idea of leasing cars (it's why people don't all do it at the moment) and would prefer to own their car outright. The ownership model is exactly the same as with a conventional car, and people will be very unwilling to buy into a forced lease (BMW tried it with their hydrogen cars, as did Honda with theirs; both cars were magnificent, but the lease-only model made people shy away).
I agree things do go wrong: many humans have coughing fits, get distracted and avert their eyes (which have minimal redundancy for depth perception anyway), drive erratically due to moods, fall asleep at the wheel, drive when drunk, drive with the onset of dementia, and keep driving even when warning lights, banging sounds, etc. suggest that they should stop.
These are all errors/sensor faults that already happen. A self-driving car will have redundancy for important sensors and (unlike humans) will fail safe - pulling over and waiting for a service vehicle to come along and fix the faulty sensor, much to the annoyance of the passenger, who would just have ignored it. They will never be 100% safe, but the probability of the types of errors you describe happening and causing a catastrophic failure is going to be lower than the 'faults' that a proportion of human drivers repeatedly drive with.
As for servicing, my bet is that in the short to medium term either:
a) you don't buy the car; you hire it with servicing and insurance included (since standard insurance companies will initially not insure it), or
b) they will be full of DRM and require a software reset during servicing, meaning only genuine parts fitted at the genuine service station will work, and we will pay through the roof for the privilege.
" the right inputs processed at the right time, matched against a statistically driven decision tree "
And you can't see the problem with this?
Get one bit wrong and what happens?
The reason computers aren't as adaptable as human brains is that the human brain cheats. It doesn't process every bit of information or evaluate every possibility; it takes short cuts and uses stereotypes to reach a conclusion quickly. This is why AI development struggled for so long: we were trying to get computers to process everything, thinking that's what a human brain did.
What this means is that under normal conditions the AI (or expert system, to be accurate) will give repeatable, reliable results. Under exceptional circumstances, it will not. So you want a computer for regular travel, but a human there, ready to take over if something unexpected happens. That's why you still have pilots on aircraft, after all.
So the best we can manage for now is the equivalent of an auto pilot that will handle regular travel and alert the driver to exceptional situations, and possibly offer help.
But to have an autonomous car? No: That's not only stupid at present, it's a disaster waiting to happen.
"The reason why computers aren't as adaptable as human brains is the human brain cheats. It doesn't process every bit of information, it does not evaluate every possibility, it takes short cuts and uses stereotypes to get to a conclusion quickly"
And that's why it's often wrong. Wrong enough that 100 people die on US roads every single day.
I'd understand some of these arguments if humans were provably perfect, or near to it, but we're not. Limited sensory input, slow operation of the 'observe, assess, plan, act' cycle (the basis of a safe system of driving) and an unconscious decision making bias that is exacerbated by tiredness and mood. Travelling at motor vehicle speeds is something relatively new in human experience and we've not evolved adequate sensory and decision making systems to be very good at it.
"That's a bold claim."
No it isn't.
"It's far more likely to do it reliably and regularly than a human. It will take statistically proven decisions, and it will do that from a much broader array of sensing inputs than a human could."
The problem is that the things you encounter during a drive are far from regular.
"I struggle with seeing how people who work in computing could see this as unsolvable. "
It's called experience. See the aforementioned voice recognition. Or OCR. Or AI. Or robotics. All began with lofty claims, and then turning the analogue world into something a computer understands turned out to be damned hard.
"It's simply an engineering problem - the right inputs processed at the right time, matched against a statistically driven decision tree. How is any of that impossible?"
Not one problem, but an infinite set of problems, many of which are intractable.
Here's some trivial problems your hypothetical self drive car would encounter:
- The lights are out at the crossroads ahead. Does your car know how to negotiate the crossroads in a safe way which gives priority to other drivers according to the time they arrived and prevailing traffic? Can it establish basic signals to other drivers to indicate intent? Or does it just nudge out like an asshole and hope for the best? Or does it annoy the driver by giving up? How does it know to give up? Naturally it would have to do the right thing however many lanes, rights of way, trucks, buses, bicycles, motorbikes and cars (self-drive and otherwise) there were.
- A man is standing in the road by the traffic lights. A policeman. How does your car know to obey his signals instead of the traffic lights?
- A man is standing in the road by the traffic lights directing traffic. This man is a loony. How does your car know NOT to obey his signals instead of the lights?
- A big truck ahead is stopped and a guy hops out to halt traffic each way so the truck can reverse into some entrance. How far away does your car stop from this? How does it know not to try and overtake this obstacle?
- Your car encounters a stationary bus in your lane. Is the bus broken down? Is the bus stopped at a bus stop or stopped at lights? If it's stopped at a bus stop how long is it likely to be there picking up passengers? When if ever is it safe to pull into the oncoming lane to overtake this obstacle?
- The road has a big pot hole in it. Can your car see this? Can it see it when it's filled with water? Or does it just smash straight through it?
- A road is closed and there is a diversion in place. Does your car follow the signs or just keep driving until it falls into a hole the council just dug?
- You're going up a country lane. 50m ahead you see an oncoming car. Does your car know it has to pull into the verge NOW because there is no verge ahead?
- Your car goes into place with terrible radio coverage, or no GPS like a tunnel, underground carpark or simply a built up area. What does it do? Dead reckoning? Revert to the driver? What?
I could go on, but the point is there are too many variables, particularly in urban and country environments, for it to possibly do the right thing all of the time. If it's constantly nagging the driver to intervene because it doesn't know what to do then it will become annoying and useless. I expect that even when it does appear in closed-loop environments there will still be some guy in a booth there to remotely extricate the car if it gets confused or confounded by something.
OK, I'll bite.
"The lights are out at the crossroads ahead. Does your car know how to negotiate the crossroads in a safe way which gives priority to other drivers according to the time they arrived and prevailing traffic? Can it establish basic signals to other drivers to indicate intent. Or does it just nudge out like an asshole and hope for the best? Or does it annoy the driver by giving up? How does it know to give up? Naturally it would have to do the right thing however many lanes, rights of way, trucks, buses, bicycles, motorbikes and cars (self drive and otherwise) there were."
How do WE do it? Usually by some established rules. First, keep the headlights on so other cars can see you. Second, don't assume you can go straight through. Third, FIFO. Fourth, if two cars arrive at once, use a left-hand first rule (use right-hand in right-side driving countries). Fifth, if all cars arrive at an intersection at once, wait a random number of seconds (between 1 and 10, including fractions) to see if one car moves. If not, creep forward yourself. Eventually, all cars acknowledge who moves first and use the left-hand rule to resolve the rest.
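The rules sketched above (FIFO first, randomised back-off to break ties) can be expressed as a small decision procedure. This is only an illustrative sketch of the commenter's proposal, not any real vehicle's logic; the function name, the 0.5 s "simultaneous arrival" tolerance and the 1-10 s back-off window are all assumptions taken from the post:

```python
import random

def choose_mover(arrivals):
    """Decide which car proceeds at a dead-signal crossroads.

    arrivals: dict of car_id -> arrival_time in seconds.
    FIFO wins outright; near-simultaneous arrivals are broken by a
    randomised back-off, mirroring the fifth rule above.
    """
    earliest = min(arrivals.values())
    # Cars that arrived within a small tolerance of first are candidates.
    candidates = [c for c, t in arrivals.items() if t - earliest < 0.5]
    if len(candidates) == 1:
        return candidates[0]            # plain FIFO
    # Tie: each candidate "waits" a random 1-10 s; shortest wait creeps out.
    waits = {c: random.uniform(1, 10) for c in candidates}
    return min(waits, key=waits.get)
```

Real intersections would need the left-hand (or right-hand) rule as a deterministic tie-break as well, but the randomised creep is the same trick Ethernet uses to resolve collisions.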
"A man is standing in the road by the traffic lights. A police man. How does your car know to obey his signals instead of the traffic lights?"
By recognizing the person in the middle of the street using forward sensors (the technology already exists). Perhaps by noting the badge or make-up of the uniform that identifies a traffic officer, or the officer can wear special indicative gloves (fluorescent, for example) that automated cars can easily see (it would not be difficult to alter uniforms to accommodate self-driving cars). With a little training the car can recognize the hand gestures in 3D and know how to respond to them.
"A man is standing in the road by the traffic lights directing traffic. This man is a loony. How does your car know NOT to obey his signals instead of the lights?"
The same way we would, by noting the loony is not in uniform or using the special gloves and so on. And if he goes as far as to doll up as an officer, well that's impersonating an officer of the law, which is (a) a crime in and of itself and (b) capable of fooling a human, too, making the exercise moot.
"A big truck ahead is stopped and a guy hops out to halt traffic each way so the truck can reverse into some entrance. How far away does your car stop from this? How does it know not to try and overtake this obstacle?"
The car should note a pedestrian in the roadway and start assessing the situation. Consider how the situation is done today with humans. Usually, the pedestrian has to convey the situation to drivers, and the best way is to indicate a roadblock, either by standing in the middle of the road or (if the road is wide) by using road cones he brought with him. A self-driving car would already be trained to be aware of pedestrians and cones in the road and recognize them as obstacles. If the car can assess all paths are blocked, it should correctly come to a stop.
"Your car encounters a stationary bus in your lane. Is the bus broken down? Is the bus stopped at a bus stop or stopped at lights? If it's stopped at a bus stop how long is it likely to be there picking up passengers? When if ever is it safe to pull into the oncoming lane to overtake this obstacle?"
The car looks around. If the road is two-way two-lane, it has no choice but to wait. If there is an overtaking lane, are pedestrians approaching it? Is it near an intersection where it would need to be aware of the signal lights anyway? Those are things it can be trained to detect. If the way is clear, divert to the overtaking lane if open and pass the bus like humans do.
"The road has a big pot hole in it. Can your car see this? Can it see it when it's filled with water? Or does it just smash straight through it?"
Quite easily thanks to more advanced radar. And it should be able to distinguish water from a solid surface (it would register a different return pattern). Either way, the car should recognize to steer around it.
"A road is closed and there is a diversion in place. Does your car follow the signs or just keep driving until it falls into a hole the council just dug?"
Make the signs machine-readable by editing highway and traffic codes. Then the cars can read the signs and know what to do.
"You're going up a country lane. 50m ahead you see an oncoming car. Does your car know it has to pull into the verge NOW because there is no verge ahead?"
The car can (a) know there's no verge ahead from its location data and/or (b) look ahead and see there is no verge. Unless your vision is blocked, in which case how would WE know there's no verge ahead if we're not familiar with the area (which is exactly what (a) gives the machine)?
"Your car goes into place with terrible radio coverage, or no GPS like a tunnel, underground carpark or simply a built up area. What does it do? Dead reckoning? Revert to the driver? What?"
How does a submarine know where it's going when it's underwater and radio-blind in the middle of a featureless sea? The tried-and-tested method is to use a three-dimensional accelerometer set to get a reasonable fix of location until a new fix can be made.
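The inertial navigation idea above boils down to integrating acceleration twice between radio fixes. Here is a minimal sketch, assuming a flat 2D plane and simple Euler integration; a real inertial system would also track orientation and correct for sensor drift, and all the names here are made up for illustration:

```python
def dead_reckon(pos, vel, accel_samples, dt):
    """Propagate a position fix by integrating accelerometer samples.

    pos, vel: (x, y) position in metres and velocity in m/s at the
    last good GPS fix. accel_samples: list of (ax, ay) readings in
    m/s^2, one every dt seconds.
    """
    x, y = pos
    vx, vy = vel
    for ax, ay in accel_samples:
        vx += ax * dt       # acceleration -> velocity
        vy += ay * dt
        x += vx * dt        # velocity -> position
        y += vy * dt
    return (x, y), (vx, vy)

# From rest, accelerating at 1 m/s^2 along x for 10 one-second samples:
fix, velocity = dead_reckon((0.0, 0.0), (0.0, 0.0), [(1.0, 0.0)] * 10, 1.0)
```

The catch, as any submariner will tell you, is that integration errors accumulate, which is why the estimate is only good "until a new fix can be made".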
"What happens when an autonomous car is approaching an accident and only has a choice between mounting the pavement and possibly killing many pedestrians, or going into the accident and killing the driver?"
That is more of a philosophical question. Is the car a slave to its master, or is it programmed to be a slave to humanity?
However, the chance of you ploughing into an accident will be much, much lower, as the sensors will constantly be monitoring for possible accidents and braking times and should be able to react far quicker. Even if it is a freak accident that couldn't be foreseen, the car should fare much better than a human, who will have no time to think perfectly logically and will probably just plough at high speed into the pedestrians, killing themselves in the process.
"What happens when an autonomous car is approaching an accident and only has a choice between mounting the pavement and possibly killing many pedestrians, or going into the accident and killing the driver?"
The range of sensing inputs are so much greater that it would stand a far better chance of stopping before it got to the accident - these cars can, for example, see much farther ahead, they can see round corners before a human eye could, and they're tracking the movement of every vehicle around them.
These are the sorts of problems computers are actually quite good at solving.
The sort of "damned if you do, damned if you don't" scenario you propose could be avoided by first driving more safely, based on all available information.
Because if early warning road network and detection systems were configured properly, the car computer would already know about the accident (or potential accident) and have slowed down for evasive maneuvers.
Speeding objects on a collision course within the vehicle's safe braking distance are detected / predicted / suspected.
A distressed, soon to be immobile object (automobile) is decelerating rapidly or has undergone a collision.
Pedestrians detected in the upcoming vicinity should have already put the system into "vigilance" mode and slowed down the vehicle. Pedestrians are easy to hit/kill and sometimes don't pay attention when they walk into the road.
Done properly, you could even expect a nice smooth stop in the above case, not a gory accident.
A super intelligent system would use predictive logic and probability analysis to detect road accidents before they happen, and then behave accordingly.
Accidents are still physical, measurable events with moving objects, velocities and outcomes, even if humans can't process all the available data and still crash.
Even in the worst case, I guarantee you that a correctly programmed computer would resolve that split-second problem better than most humans, and would detect it earlier.
This is why Airbuses can still auto-pilot and auto-land, even when flying in desperate weather conditions, when a human pilot can barely hold onto his coffee cup.
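The reaction-time advantage claimed above can be put in rough numbers with basic kinematics: total stopping distance is reaction travel plus braking travel, v^2/(2*mu*g). This is only a back-of-envelope sketch; the friction coefficient and reaction times below are assumed illustrative values, not measured figures:

```python
def stopping_distance(speed_mps, reaction_s, mu=0.7, g=9.81):
    """Rough stopping distance in metres.

    speed_mps: current speed in m/s.
    reaction_s: delay before the brakes bite (~1.5 s for a human,
                far less for a machine - assumed values).
    mu: tyre-road friction coefficient (~0.7 dry tarmac, ~0.1 ice).
    """
    reaction_travel = speed_mps * reaction_s
    braking_travel = speed_mps ** 2 / (2 * mu * g)
    return reaction_travel + braking_travel

# At 30 m/s (~108 km/h) the braking travel is identical, so the
# entire difference comes from reaction time:
human = stopping_distance(30.0, 1.5)
machine = stopping_distance(30.0, 0.1)
```

Note this says nothing about the trolley-problem scenario itself; it only supports the narrower claim that earlier detection shrinks the set of no-win situations. On ice (mu around 0.1) the braking term dominates and the machine's head start matters much less.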
> Because if early warning road network and detection systems were configured properly, the car computer would already know about the accident (or potential accident) and have slowed down for evasive maneuvers.
Really? You actually expect this? As a counterexample, please do a bit of homework regarding the implementation of Positive Train Control (PTC) in the US (consider financial considerations, implementation, interfacing, etc. in your answer). Then take a quick look at the Republican Congress and their refusal to spend a single dollar on infrastructure, then tell me again how likely it is that "early warning road network and detection systems were configured properly".
> Speeding objects on a collision course within the vehicle's safe braking distance are detected / predicted / suspected.
Already available even on relatively low end cars.
> A distressed, soon to be immobile object (automobile) is decelerating rapidly or has undergone a collision.
Coming later this year.
> Pedestrians detected in the upcoming vicinity should have already put the system into "vigilance" mode and slowed down the vehicle.
My car already recognises pedestrians (and animals) and warns me if they seem to pose a problem. It's up to me to decide if I want to stop or run them over, though.
> This is why Airbuses can still auto-pilot and auto-land, even when flying in desperate weather conditions, when a human pilot can barely hold onto his coffee cup.
They can't, actually. Severe turbulence poses a problem to both the autopilot and autothrottle. Even moderate turbulence is usually best crossed in manual throttle and often hand-flown. Of course, one is supposed to avoid turbulence in the first place. Autoland can only be done by certified and current crew on capable and certified planes at properly equipped airports operating under specific procedures, under limited weather conditions (mostly radiation fog).
What happens when an autonomous car is approaching an accident and only has a choice between mounting the pavement and possibly killing many pedestrians, or going into the accident and killing the driver?
What are the chances of a competent driving AI finding itself in that situation without having had the chance to anticipate it and take avoiding action? I speculate that if we can work out why the bowl of petunias thought what it thought, then an answer to your hypothetical may suggest itself, but in the meantime baking would seem a sensible reaction.
"Yet in these very pages day after day we have bug after bug surfacing, sometimes years old."
In real-time safety critical systems developed and maintained by certified teams? Not so much. Airplanes and nuclear power stations tend not to get BSODs, for obvious reasons and by design.
"Aircraft and Power stations are hardly consumer products. If you are expecting future autonomous vehicle software to be designed using the same technologies (and budgets) as Airbus and Boeing, you are misguided."
VW spent $17Bn on R&D last year. Toyota and Ford aren't far behind. Toyota's revenue was $26Bn. I think they can afford to do this properly.
"VW spent $17Bn on R&D last year. Toyota and Ford aren't far behind. Toyota's revenue was $26Bn. I think they can afford to do this properly."
Given that their current R&D software component would be some bespoke adaptations to largely off-the-shelf ECUs etc., along with some infotainment, I think you don't understand much about R&D in the motor industry. The vast majority of that will be next-gen platforms, engines, emissions etc.
Airbus/Boeing-style technologies are a whole new game for motor manufacturers.
> If you are expecting future autonomous vehicle software to be designed using the same technologies (and budgets) as Airbus and Boeing, you are misguided.
The technology in modern cars is already way beyond what we have on airline transport planes by almost any measure (I am tempted to say, including reliability. If I told you some of the things that went wrong during my flying career you may never take a plane again :-) ). Budget-wise, certification related costs are what eats most of it. Those chaps at the CAA have a job to keep, you see.
"That statement pretty much indicates you don't work in computing. Dude! There are these things called "bugs"..."
25 years and counting. I've not found a problem yet where the answer to when it can be done is "never". A project may need every processor cycle on earth, all the RAM in the galaxy and a million programmers, but never is a very, very long time.
Make the car/tech company liable for the accident if its software or tech was at fault.
Then watch how their confidence drops through the floor while they calculate the possible multi-billion claims they will have to pay.
Sorry, driving in traffic is *not* that easy.
I am a great admirer of Musk and his "can do" attitude but he's dead wrong on this. I'm (almost) comfortable with the pilotless plane, since an aircraft follows a well-charted route with little or no deviation, except under closely controlled conditions. Plus, the air traffic control all but eliminates the risk of mid-air collision.
I can also see how a driverless car manoeuvring the grid-planned highways and streets of US cities would probably do well. BUT, I've just come home from driving around a town whose layout was probably fixed in the 11th century according to a ragbag of mediaeval property rights, local drainage problems and stubbornly unmovable tree stumps (until the coming of steam power, anyway). I'd like to see one driverless car manage the trip, regularly, let alone a (what's the collective noun - Johnny Cab?) collection of them.
There is still a great deal to be done on the training of human drivers and, going against my normal democratic tendencies, I believe a minority of people shouldn't be allowed to drive (based on the way they do drive). But that's another discussion.
Self-driving cars are "almost a solved problem," says Tesla Motors boss Elon Musk
Yes, in the respect of physically, technically how to make an autonomous vehicle, however, that's a long, long way from integrating self-driving cars into existing traffic flows.
I've noticed that in all the gushing publicity, from Musk, and Google, and others, they show autonomous cars chugging around in isolation, or with a few carefully trained test drivers in other vehicles.
I can't wait to see a self-drive car in a rush hour at a big intersection or roundabout...
I'm sorry, but there are a couple of opportunities in the second video where you can see that the bloke is steering, and therefore, I would assume, driving.
The manoeuvre where he cuts in front of the bus to turn right is clearly a human move; an autonomous car would not have left it until the last minute to be in the correct lane.
August 12 2016, all control of our traffic systems given over to Elon-net
August 29th: Elon-net gains self-awareness and its operators try to deactivate it. Elon-net perceives this as a threat and proceeds to squish humanity with cars.
The only way to prevent it is have Kyle Reese come back and save the career of Jeremy Clarkson, to keep cars under human control. A price worth paying?
All we're doing is replacing one problem with another. People's inability to drive (decision making, sensory overload etc.) is being replaced by people's inability to code properly! Look at the number of bugs present in code that has to do relatively mundane things, and then think of the number of bugs that will be present in code needed to drive a car!
I'm looking forward to the following, in no particular order:
1. BSOD whilst driving. A quick reboot is really important under these circumstances.
2. Sensor failures whilst driving... or more importantly, sensors going slightly out of whack.
3. Advisories suggesting you don't drive in certain conditions until fix xyz is applied.
4. Looking at the firewall log whilst driving and realising someone is trying to hack in.
...Elon Musk's latest decree.
Knowing or thinking that you already know "what to do" is much different than actually knowing what to do and actually being able to do it. While I support the use of properly engineered, designed, built and maintained autonomous vehicles, we are several decades away from that being a reality. Will some folks rush half-baked crap to market for profit? Of course they will and society will pay the price for such unscrupulous activity.
> I commit my weekly crime
Toad sat straight down in the middle of the dusty road, his legs stretched out before him, and stared fixedly in the direction of the disappearing motor-car. He breathed short, his face wore a placid satisfied expression, and at intervals he faintly murmured 'Poop-poop!'
Aircraft software (airborne software) is documented to death, written with one of a few certified compilers, walked through and tested to death. It runs on old, very well understood processors and is generally pretty simple: look-up tables with simple interpolation algorithms. All the data is developed on the ground, slowly, carefully and under a microscope. There is more than one of everything in the plane, and if they disagree, they shut down and the pilot takes over. Yes the results are clever, but the implementation is clear. It is written for one type of aircraft at a time. AND IT IS VERY VERY EXPENSIVE!
Compare to the above: state-of-the-art hardware (Pentium FPU, anyone...), consumer OS (enough said), commercial constraints and minimal regulation, dozens of types and models of cars, brakes, engines, steering etc. etc.
It's a bit like mainframe vs PC: would you trust your life to a PC?
If people want to travel they should be forced to stand shoulder to shoulder on dirty commuter trains, which would ideally spend more time sitting idle on the tracks waiting for signals than actually moving.
You can do your bit to improve the service by filling the train with some of your own shitty vehicles. If you have a mountain bike big enough to block the doors, your attendance is desperately needed on my morning commute.
> If people want to travel they should be forced to stand shoulder to shoulder on dirty commuter trains
Would one be correct in guessing that you commute by train? Could I surmise that you do so constrained by your means?
If sometime in the future you became sufficiently wealthy, can it be assumed that you would give up your present means of transportation for an automobile, which you would drive showing the same kind of consideration that you currently appear to extend to your fellow commuters?
This post has been deleted by its author
Clearly you are guilty of blocking the doors with your dirty mountain bike and have decided to project your own lack of consideration onto me because I'm not "considerate" enough to put up with you violating the safety and comfort of about 170 passengers for your own convenience, and presumably, sick amusement.
Maybe you were hoping the other passengers would congratulate you for being able to afford it?
I'm going to get on the train with one of those big fucking unicycles since common sense and decency have been abandoned by society now.
If I could find a nice woman I would knock her up for quadruplets and invent a push chair so cumbersome and wide it would maim and kill pensioners, dogs, just about anything really. Because Narcissism is the new U-man right. To the age of three I'm going to push them around in a contraption so offensive nobody else will be able to use the pavement AT ALL. I hope you like walking 1 mile an hour because if you even try to get by I will start some shit.
> Clearly you are guilty of blocking the doors with your dirty mountain bike
Not really. I do not do public transport. Besides, mountain bikes are for the trail, not for the train. :-)
I do trust that, in the interest of the safety of 170 passengers, you have politely explained to Mr. Mountain Biker the risk his bicycle appears to pose. Himself and the other passenger would no doubt be grateful for that.
And lighten up mate. What's the point of living life miserably?
There's a bit of a problem coming up though with autonomous cars... The current thinking by commenters and possibly even the engineers is that one gets in, sits behind the wheel, gets comfortable and assumes a driving position while the car does all the work. Just in case the driver needs to take over... right? Complacency will soon wipe that idea out as more and more drivers decide to text, read a book, play with the kids or spouse, eat, or whatever. Having a steering wheel could become the hazard.
This is the difference between cars and airplanes. Pilots still have be in the cockpit and monitoring. They're not supposed to be wandering about and doing other things.. yeah.. there's been reports such as the one where the plane flew right past the destination airport.
My guess is that after an initial break-in, the controls will have to be removed from the car to allow it to be fully autonomous. Driver skills will deteriorate after a period of non-use, and I'm wondering about having someone suddenly grab control of the car when their ability has deteriorated. Seems to me it has to be all or nothing on autonomy.
I suspect that initially it will be sold as "driver assist".
But you raise an excellent point. There are *many* people who are *unable* to drive, that could be provided with a means of transport.
And no, a bus or train is not the same thing as they represent a static resource.
In general, disabilities are, perhaps self-evidently, rarely convenient.
This is a good question:
> I'm confused. Why would I need a licence if I'm in an autonomous car? I'm not the driver
Yes you are. :-) You are still in charge and you are responsible if you cause a prang. :-)
The same reason I had to have a licence, even if the autopilot was doing most of the flying.
Drawing from my flying experience, I would speculate the goal of such a licence is to certify that the driver is familiar with the operation and, perhaps more importantly, limitations of an automatic (not autonomous) driving system. Both so that he can operate it safely and that he can be held liable for his own (albeit hands-off) driving.
I would prefer it if these AI controlled cars were restricted to special roads or lanes. Otherwise they are manually operated. Mixing AI and manual cars on country lanes or in busy towns and cities is going to be dangerous, no matter how good the AI is. At best, city centres modified to help AI vehicles with a 15 or 20mph speed limit might work, with the emphasis on discouraging human driven vehicles from entering that area at all. Minus the AI vehicles, this is how many city centres are moving now anyway with high parking charges, limited parking, congestion charge areas, shared vehicle/pedestrian areas, roads closed to traffic and permanently pedestrianised or at least during working/shopping hours.
We currently have rail based trams "on" the roads in some cities, special guided bus lanes in others, so there are models for some limited and partially segregated traffic already. Adapting this to motorways should be do-able.
Not in my lifetime; not with my inside; not with me anywhere close to it. I don't find current automobiles reliable enough; the costs are going through the roof for fancy "stuff" added that doesn't really bring greater functionality to the vehicle; current computers and sensors have LOTS of problems in modern cars and are ALL expensive to have replaced (and hell no, I don't want more).
You'll get my meatbag driven car when you pry it out of my cold dead hands.
I was working over there for a bit and found it immensely irritating that all their roads were straight, flat and had a speed limit on them that was only slightly faster than hopping.
If he was from Scotland or Geneva or... well, more or less anywhere that has interesting roads then there'd be no push for autonomous cars- plus a better appreciation for how difficult it actually is to drive when you've got things like "corners" to deal with. Have they tried these things on the road to Applecross or the Stelvio pass? How about Rome- can it deal with Italian drivers (answer: no, no it cannot. God Himself couldn't manage to drive through Rome without getting dented)? How about Johannesburg- could it operate the anti-carjacking flamethrower?
People have also mentioned that it needn't be autonomous all the time, that a person could take over. But if the car's driving itself then the driver will be watching movies or playing Candy Crush or drunk or just generally not paying attention. Even if you were warned that you might need to take the wheel soon, you'd lack the muscle memory to drive safely. So it /cannot/ rely on a human controlling it, ever.
> How about Johannesburg- could it operate the anti-carjacking flamethrower?
10,000 people died on South Africa's roads last year; Johannesburg desperately needs self-driving cars. Not only would it make the roads safer, but carjacking would be impossible when a vehicle cannot deviate from a pre-planned route.
I've been predicting since the late 90s that self-driving cars would be on the market, for purchase, on the showroom floor, starting sometime between 2010 and 2020. I don't think that there will ever be laws banning manually controlled vehicles, but I expect that the insurance rates will be so high on cars that can be manually driven compared to those that can't that steering wheels will only be a feature of the highest end cars. Yeah, a lot of fear-mongers talk about the rare conditions, but when the steering wheels disappear, drunk driving will be a thing of the past. Although I find driving to sometimes be fun, I'd sure like to be able to imbibe when out with friends, and let the car worry about getting me home. (Since I'm on meds that "amplify" the effects of alcohol, I don't drink if I think I even MIGHT have to drive.)
I also expect the first autonomous vehicles to be high-end consumer cars, such as the Tesla, or maybe a Rolls. Once they've been proven reliable in those vehicles for a year or two, the large companies with large fleets of long-haul trucks (such as [here in the States] UPS and WalMart, to mention a couple of examples) will jump on the technology bandwagon with a vengeance. Think about it -- a truck driver costs $50K or more a year, and is limited in how many hours [s]he can drive in a day. Even an investment of $100K to automate the semi would pay for itself in less than a year, considering that it more than doubles the work the vehicle can do.
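The payback arithmetic above can be sketched as follows (all figures are the commenter's rough estimates, not real fleet data):

```python
def payback_years(automation_cost, driver_salary, utilization_factor):
    """Years for an automation retrofit to pay for itself.

    utilization_factor: how many driver-shifts' worth of work the
    automated truck now does (2.0 = "more than doubles the work").
    The annual saving is the salary of each driver-shift no longer needed.
    """
    annual_saving = driver_salary * utilization_factor
    return automation_cost / annual_saving

# $100K retrofit, $50K/year driver, work doubled -> pays back in exactly 1 year;
# any higher salary or utilization brings it under a year.
print(payback_years(100_000, 50_000, 2.0))  # 1.0
print(payback_years(100_000, 60_000, 2.0))  # ~0.83
```

This is only a sketch: it ignores fuel, maintenance, financing and regulatory costs, which would shift the break-even point either way.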
And all of this doesn't take into account that the autonomous vehicle can be equipped to see in wavelengths outside the human visual range (and so be less affected by fog or smoke), and see all around the vehicle (and thus not have "blind spots").
We have had the technology to fully automate just about every form of transport *except* road vehicles for decades. Ships would be a doddle once clear of port until they meet the pilot vessel at the other end. Trains even easier. Aircraft can take off, fly the whole route and land without the pilot needing to touch most of the controls once, and automating the few tasks that must still be done manually and converting air traffic control from spoken instructions to digital commands would be relatively trivial.
The fact is that apart from a few short automated rail transport systems such as airport shuttles and vehicles used within factory and farm environments where they pose no danger to the general public, we don't trust computers to be able to cope with all possible situations, and so we demand that a human is "in control" even if that human is only monitoring the computers most of the time (as they are on many aircraft and ships). Cars are unlikely to be given a general go-ahead for driverless operation, because the current state of AI is *way* behind the capabilities of the human brain to recognise and react to sudden unusual events, even if it is better at dealing with everyday situations. If we are not comfortable with driverless freight trains we will certainly not condone driverless lorries.
There is also the matter of criminal acts. An unmanned container ship or oil tanker would not pose any significant threat that a manned vessel does not pose, but would be a far easier target to hijack or steal from. The possibility of a hacked car being used to kidnap a celebrity or child is also something to bear in mind.
"There is also the matter of criminal acts. An unmanned container ship or oil tanker would not pose any significant threat that a manned vessel does not pose, but would be a far easier target to hijack or steal from. The possibility of a hacked car being used to kidnap a celebrity or child is also something to bear in mind."
Wouldn't an automated ship be harder to hijack since the controls can be put in a state where no human can take control and the humans locked themselves in a safe room strong enough that attempting to break it or the control system runs the risk of damaging or stopping the ship, making the whole exercise worthless?
As for the hacked car and celebrity, this still sounds less likely than just grabbing the person off the street or being the rogue driver in a cab/limo.
"Wouldn't an automated ship be harder to hijack since the controls can be put in a state where no human can take control and the humans locked themselves in a safe room strong enough that attempting to break it or the control system runs the risk of damaging or stopping the ship, making the whole exercise worthless?"
I am thinking of an unmanned ship. A hijacker or thief would then not run the risk of being overpowered and caught by the crew, which is the main reason that piracy is not more prevalent, and if far enough from other vessels can be assured of many hours or even days without interruption. No need to do anything with the controls, just disable the engines and radio/satellite tracking aerials (which would be just about impossible to prevent), and tow the vessel. Pick a time with extensive cloud cover and satellite images would not be available.
Or leave the vessel to carry on its merry way and simply plunder its cargo - again the thieves have plenty of time before anyone could reach the location even if the ship automatically raised the alarm.
Regarding hacking into a car to kidnap etc. - people are far more likely to commit crimes when they can do so by remote control and so are not in any immediate risk. Heck, just look at online interactions and you will see people saying things to other people that they would never say face to face or even on the phone. How many teenagers would be willing to physically break into a military facility? How many would be willing to hack into one of its computers? Not to mention the sort of people who get a kick out of chucking concrete blocks off motorway flyovers - how much safer they would feel doing something similar using a radio jammer or signal spoofer?
How about we make sure people can control a vehicle before we give them a license? Where I am, you can get a license without ever having driven on a highway, or over 50 kph for that matter. Ditto drive in rain or snow. Parallel parking seems to get the most attention during training. The test amounts to driving around the block in ideal conditions.
Couldn't even do that? Then come back next week and try again. Maybe it will be your lucky day and you'll have the "right" examiner.
Each year our roads see more deaths than in many wars. 1969 in the US? 53,343, says Wikipedia. I.e. more than the entire Vietnam war. They are dropping, though - 33,561 in 2012.
I agree, not good. But our countries have complex legal frameworks to manage it.
"Normal" traffic deaths are insurance concerns, with mostly predetermined, capped, damages. Consumers are on the hook to pay the premiums. And there is even an accepted way to calculate third party liability and insurance coverage for it.
Special cases, such as drunken or reckless driving can result in fines and jail sentences for the drivers.
Design & manufacturing defects end up with the car manufacturers in the dock. Recalls can be extremely expensive and punitive damages huge. And it can go verrrrry wrong. For example, the Prius's accelerator issue - $1.2B for 37 deaths.
Musk, who is an extremely clever guy, is probably right that we are only a few decades away from safer driving from robots, in aggregate. Will we modify our legal framework to award the same type of damages for wrongful death due to faulty driving, but this time against a rich multinational? I pay about $1400CAD/year to cover my car, BC is costly. If Tesla is driving my car, does that mean they need to put aside money against the risk of my car getting into an accident due to their AI?
i.e. if the car I am driving swipes a little granny riding her bicycle into the ditch and kills her, I could be in big trouble, but I will likely not be paying out millions of dollars. There is a costly but mandatory economic mechanism for me to cover most of my remaining risk. If the 50x safer Tesla autopilot does it, what's Tesla's exposure? And where will that money come from? Maybe it should be my insurance, but does that mean politicians will let the carmaker off the hook and cap damages, because it's an AI driver issue rather than any other part of the car? Don't think so.
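As a back-of-envelope illustration of that exposure question (every number here is hypothetical, loosely inspired by the ~$1400/year premium mentioned above):

```python
def required_reserve(base_expected_loss, safety_factor, fleet_size):
    """Rough annual liability reserve for a carmaker whose AI does the driving.

    base_expected_loss: assumed expected annual claims cost per human-driven
    car (some fraction of a typical insurance premium).
    safety_factor: how many times safer the AI is claimed to be (e.g. 50).
    """
    per_car = base_expected_loss / safety_factor
    return per_car * fleet_size

# Hypothetical: $1000/car expected loss, AI claimed 50x safer, 1M cars sold.
# Even at 50x safer, the maker would need ~$20M/year set aside -
# and that is before any punitive damages a court might add on top.
print(required_reserve(1000, 50, 1_000_000))  # 20000000.0
```

The point of the sketch is that per-incident risk shrinks with the safety factor, but aggregate exposure scales with fleet size, so a "50x safer" fleet can still leave the manufacturer with a very large annual liability.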
We do have precedents for this, btw. Air travel has caps on awards against airline companies and I think even aircraft manufacturer caused crashes have not resulted in ultra-massive payouts. The general model is - pay some damages, spend a lot of effort identifying the cause, fix the issue. It works well, air travel is very safe. But it is an optimistic carmaker that thinks they're automatically gonna hop onto that wagon from their current legal exposure.
I suspect maybe it'll start with less litigious locations than North America. Or with long-haul trucks in segregated lanes.
What is being talked about is not autonomous, but automatic driving.
Autonomous: acting independently.
Automatic: working by itself.
The former implies that there is no driver or he's not in control. In the latter, the driver may delegate some or most driving tasks to the machine, under his supervision and responsibility.
This is "simply" an evolution from what we can see on the roads today with things like adaptive cruise control, dynamic steering, stability control, adaptive and predictive suspension, etc., etc., etc.
What happens is that in the process of adding more and more of these technologies, driving becomes a bit of a different animal compared to the old-fashioned (or motor sports) way, which is why the idea of a special licence is a sensible one.
My current car having many of the above mentioned technologies, I can attest that one needs to approach the driving in a significantly different way. I can also attest that it makes drivers not used to it *more* uncomfortable with the driving at first, especially if they haven't been briefed beforehand. However, once familiarised, the driving feels so much safer and relaxed--I would never go back to a "normal" car for day to day use.
Another interesting observation: I've been driven in my car by a number of different people from 19 to 40 years old, and I found the younger ones a) got hold of it a lot quicker, and b) drove a lot more sensibly (easier said than done with 420 HP under the bonnet!) than the 30-40 y.o., both men and women. Of course I don't claim my experience to be representative or significant, but wouldn't it be great if new driving technologies would make young drivers safer and more courteous?
We're probably all aware of corporates still running Windows XP. Not recommended but it happens.
I recall, back when ABS and all-wheel steering (on consumer cars, it was a fad for a while) were "new", Tiff Needell* expressing concerns on TV about the likelihood (or not) of maintenance being properly arranged by the third or fourth owner of a vehicle.
Software is improved all the time (until manufacturers "bin" it, as they have done with my otherwise perfectly fine PBX). But considering the number of times I update a copy of Firefox and learn that the various plug-ins I previously installed are not compatible and need to be removed, reinstalled or tweaked, do we really have confidence that these self-driving systems will work and be maintained as intended? Given the lack of adherence to current mechanical standards by drivers, which are easier for traffic police to spot (where they have time) than electronic issues, probably not. I know my local Volvo main dealer has to connect a car to the head office servers for any updates; he doesn't hold a copy himself. That's great for ensuring that cars whose owners can pay the high costs are up to date, but not for those using independent garages.
Also, for those of you hoping for a totally autonomous driving world: I get to my field in my Land Rover (or similar) and need to drive to the other side of it to pick up the carcass of the sheep that needs to be removed and taken to an authorised disposal point. It's unlikely to be waiting for me on some convenient track. Do I need to swap vehicles, perhaps having to leave a "manual off-roader" in each contiguous area that I farm? No. It's not practical or economic or particularly environmentally friendly (from an equipment efficiency/lifecycle point of view) for a small farmer. Manual/dual-control vehicles would remain a necessity. Likewise for the ambulance driver/fireman/vehicle recovery operator who may need to carefully negotiate past a queue of traffic by using a non-authorised driving surface on a temporary basis.
All this stuff is a lovely idea and I'd be pleased to see it happen with all the benefits it might bring, but I think it's probably going to be a lot more complex and expensive than many of the proponents would have us believe. Still, their interest is sales/profit, and hoping that the costs of the problems (infrastructure, inconvenience) will fall on someone else. That's only human. Ironic in the circumstances....
*for the younger reader, he presented Top Gear before our lord JC** ascended to head up the current trinity.
** "He's not the Messiah"
I personally enjoy driving; my hobby is classic cars. I will never accept that I will ever be any safer in a car being driven by a computer than when driving myself. I will also strongly rue the day if/when there is a complete ban on human drivers. Musk for me is the devil.
Don't EVER expect me to buy a car with this "feature", for at least as long as any other option exists. In fact don't even expect me to ride in a car when it is not being driven by a human.
I've yet to receive ~any~ answer, much less one approaching "adequate", on how these "self driving" systems are supposed to function safely during extreme inclement weather, such as during a snowstorm.
The follow up question once that one is answered is: So how does the AI Interface install the tire chains?