Robot cars probably won't happen, sniffs US transport chief

Fully autonomous cars may never reach public roads, according to the chairman of the US National Transportation Safety Board. Speaking in an interview with MIT Technology Review, Christopher Hart said: “I'm not confident that we will ever reach that point. I don’t see the ideal of complete automation coming any time soon.” …

I'm not so pessimistic

I'm generally a pretty cynical person but I reckon once autonomous cars are shown to be substantially safer than a human driver (which won't be long, if it hasn't happened already), insurance companies will be happy to insure them and governments will come under increasing pressure to allow them.


Re: I'm not so pessimistic

I'm one of the people who enjoys driving (I don't live in a city), so not planning to be an early adopter.

Anyone seen any stats on people who would want a driverless car? I mean other than Uber etc.


Re: I'm not so pessimistic

@AMBxx - that's a really interesting point. I don't specifically enjoy driving, but like most people I think I'm a better driver than average. So a computer would have to be a lot better than average to convince me it was better than me :-)


Re: I'm not so pessimistic

So do trains first.

" I reckon once autonomous cars are shown to be substantially safer than a human driver"

I'd say "IF" rather than "once"

I've been programming since before PCs and I'm sceptical that they will ever be robust. They rely too much on Lidar (easily jammed) and databases as well as human programming.

Also how will the safety testing be achieved?

Anonymous Coward

Re: I'm not so pessimistic

There are plenty of autonomous cars on the road in these parts.

They all have Audi on the front - at least, all the drivers seem to be asleep at the wheel so I assume they are autonomous ... or at least, the drivers think they are autonomous.


Re: I'm not so pessimistic

"Anyone seen any stats on people who would want a driverless car?"

Like a lot of big innovations, the real market doesn't appear until the technology is available; remember the IBM guy who thought the world market for computers was around five units?

For me the most obvious market is older people who want to maintain their independence and have the money to afford an autonomous vehicle. These people are being hit by increased insurance costs as they get older, and the question for insurers becomes "at what point is the computer safer than the average 75 year-old driver?"
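The insurer's question is really just a crossover point between two risk curves. A toy sketch of how an underwriter might frame it (every number here is a made-up placeholder, not actuarial data):

```python
# Back-of-the-envelope sketch of the insurer's crossover question.
# All rates below are illustrative placeholders, NOT real actuarial data.

def human_claim_rate(age):
    """Toy claims-per-vehicle-year curve that rises for older drivers."""
    return 0.01 + 0.0004 * max(0, age - 60) ** 1.5

AUTONOMOUS_RATE = 0.012  # hypothetical flat rate for an autonomous car

def crossover_age():
    """First age at which the toy human rate exceeds the autonomous rate."""
    return next(a for a in range(17, 100)
                if human_claim_rate(a) > AUTONOMOUS_RATE)
```

With these invented numbers the computer becomes the cheaper risk somewhere in the driver's early sixties; the real curves would come from claims data, but the shape of the question is the same.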

Anonymous Coward

Re: I'm not so pessimistic

Doesn't have to convince you. It just has to convince the government and insurance companies.


Re: I'm not so pessimistic

Well, as someone who lives alone, is just disabled enough that he can't get a driver's license, and lives in an area where the buses stop running about 8:30 PM and don't run at all on Sundays, I would probably be a prime candidate for a small autonomous "pod"-type car just large enough for me and a few sacks of groceries. It WOULD require a controller that could "learn" how to get into my driveway, say (a touchpad display to draw a path one time from the street to where I want it to park once it reaches the end of its GPSed street routing...?), or learn the layout of various parking lots, but I don't see any of that as insurmountable.


Re: I'm not so pessimistic

@AMBxx

Do petrolheads not want this at all? I would have thought the option would be nice for humdrum travels, a weekend in the country with the robot eating up the motorway miles to get there and then taking the reins yourself to experience the B-roads seems like the best of both worlds.

As for who would want one... me! Passed my test at 17 and driven less than 100mi since. While I can drive I find it stressful and a chore (probably due to a lack of experience). A trip to the arse-end of nowhere requires me to fortify myself and do it, but I'm not looking forward to it. As a hill-walker it would be nice to tell a robo-vehicle...

"Trail starts here, drop me off, refuel, then meet me on the other side of the ridge in three hours time"

*bleep, bloop!*

"Good car"


Re: I'm not so pessimistic

Anyone seen any stats on people who would want a driverless car? I mean other than Uber etc.

Not saying I'd use automated on every trip, but sure - stumble out of the pub at 11pm and hop in to my own waiting car that ferries me to my house via the takeaway? Driving to go on holiday, 8 hour kip on the back seat while Johnny Cab delivers me to the Alps overnight? Yes please, where do I sign?


It's not pessimism, it's informed consideration

In common with all these confident predictions of the inevitability of autonomous vehicles, thinking it's just a question of safety fails to fully consider the ramifications and changes the technology involves.

For example, you say that once the increased safety is established, the insurance companies will be "happy to insure them".

Insure **who** ?

At the moment, "car insurance" is actually "driver insurance". Some drivers are clearly safer than others. If you have an Advanced Drivers Test under your belt you can sometimes get a premium discount, and of course No Claims Discount also supposedly reflects a demonstrable "safety" record (actually, insurance liability record which isn't necessarily the same thing).

But in an autonomous vehicle with no driver at the controls, who exactly is the insurance company "happy to insure" ?

It cannot be you. As merely a passenger you are not a factor in any liability any more than your passengers are currently when you are driving under the cover of your insurance (unless it can be established that the passenger was actively interfering with your control of the vehicle).

Is it the specific installation of the car control and management software in your specific vehicle ? Good luck with that. Since that specific installation is identical to every other installation of that same control and management software you are looking at a ready-made class action pointing to the car manufacturer being liable (and/or the company that developed the software, if it wasn't the car manufacturer themselves).

But *is* the manufacturer of the car liable ? They only built the thing, they didn't sell it to you. That was the dealer, actively marketing and selling a machine where the control systems that determine its danger to the public are an intrinsic part of the product. In contrast, currently, they can sell a car to any meatbag they like, but the law then determines who is legally permitted to operate that machine on the road, via driver licensing etc.

Or is it you, having chosen to purchase such a vehicle and abdicate control - does that very decision render you solely liable for the consequences? Any lawyer worth their salt will easily have any such claim dismissed (you were only allowed to abdicate control because it was sufficiently established that this was the "safer" decision, and had you known it was not, you would not have agreed to abdicate that control - i.e. the responsibility falls back again on the technology or the industry).

The insurance companies would like to keep things pointing at the occupant of the vehicle, because if it falls back on the dealers or the manufacturers then at a stroke the entire market for driver insurance disappears and is subsumed into the public liability cover of those businesses.

I'm pretty sure the insurance companies would NOT be happy about that.

It is a much, MUCH more complex problem than simply establishing that driverless cars are "safer" than meatbag controlled cars. Those self same meatbags are what make identifying liability relatively simple and THAT is the real challenge of these things, not the technology.


Re: I'm not so pessimistic

Insurance companies may actually prefer hordes of bad drivers. It's not as if they provide a free service (and credit score alone is a perfect way to jack up prices beyond the levels justified by a driver's record).


Re: I'm not so pessimistic

"They all have Audi on the front"

Having been tail-gated by one on the motorway this morning in horrific rain, I find myself having to agree with you.... And when I moved out of the way, he shot up to tailgate the guy who was in front of me.


Re: I'm not so pessimistic

As long as they are better than my wife I'd be up for it. I risk my life with my wife at the wheel every day. I don't know if she is a good, bad or indifferent driver, but that's not the point. Whether any one individual is a good driver or not, we put our lives in the hands of others who may be less than ideal drivers every day.

The potential to do something useful while in a car will be welcome. Some like driving, and good for them. I drive to get some place, not to enjoy the journey. Being able to use that journey time more productively is something for me to rejoice in.


Re: I'm not so pessimistic

"Whether any one individual is a good driver or not, we put our lives in the hands of others who may be less than ideal drivers every day."

Interestingly enough, there are a whole bunch of other circumstances where people "put their lives in the hands of others" with a lot fewer qualms - such as going to a doctor, who might easily kill you without anyone actually realising he fucked up (at any rate, he has to demonstrate astonishing ineptitude to get blamed for anything - otherwise it's just "natural causes" and "complications"; no doctor ever got in trouble for not really giving much of a fuck about what actually happens to you...).

At the very least, robo-cars might exhibit wide awareness and caution, but not actual intelligence or a self-preservation instinct any time soon - something all but the stupidest drivers demonstrate some level of. I do believe that, much as with road accidents vs. plane crashes, it won't matter whether robo-cars turn out to be safer than human drivers (make no mistake, all cars could be self-driving and we'd still have fatal accidents daily, even if not nearly as many) - most people will still fear having to trust a black box more than taking their chances driving themselves or letting a trusted person drive (if you're willing to ride with a person you don't trust, well... good luck to you and congrats on the Darwin award).


Re: I'm not so pessimistic

Once there are autonomous vehicles there will be no need to own one - just set up a continuously operating fleet of vehicles and call one (think Hailo/Uber) whenever you need it.


Re: I'm not so pessimistic

The fully autonomous car will have a "manual mode", with a stupid tiny joystick and a 10 km/h speed limit. It's an obvious solution. Personally I'm perfectly fit and I would happily pay an extra 3K euro even for just an "autonomous mode". However, the "not quite autonomous" mode of the Tesla Autopilot scares me. I want the one that will safely stop and beep at me to resolve the problem.


Re: I'm not so pessimistic

I've been programming since before PCs and I'm sceptical that they will ever be robust.

On the basis of several decades as a driver, I'm not sure human drivers will ever be robust.

They rely too much on Lidar (easily jammed) and databases as well as human programming.

Hmm... human drivers rely entirely on perception/reaction mechanisms evolved to deal with much lower speeds and longer time scales - and also comparatively far less serious consequences in case of failure of said mechanisms.

Also how will the safety testing be achieved??

A driving test?


Re: I'm not so pessimistic

Money-money, Money Money [/song]


I'm not sure I understand

What he seems to be saying is "Show us it's thousands of times safer than a human at the wheel, or we won't allow it on the road". This is akin to saying "We don't need airbags, there have been a billion crashes with them, 1 million lives saved, but one person died from injuries caused by the airbag. Ban them, they are dangerous!"

I still think there's a way to go, but I expect automated cars to be tens of times safer than meatbag controlled ones at first. That should still mean 90% of fatalities gone... Surely that's worth the occasional screw-up!

We shouldn't expect perfection. We should expect them to be safer than the dickheads on the roads right now, but 2 crashes and a single fatality from a badly-named smart cruise control system, over a large number of miles driven, have everyone in a panic.


Re: I'm not sure I understand

Agreed. In the example is he saying that it's OK for humans to decide to mow down 15 people but it's not OK for a computer to decide it's better that only 1 person dies?


Re: I'm not sure I understand

It goes beyond that. The ethics of who to kill in an unavoidable accident has barely been discussed.

Child runs in front of car. Should car kill child or swerve and kill an 80 year old? If swerve, how many 80 year olds are equivalent to one child?

What about the disabled?

This is all stuff that human drivers can't process quickly enough. Computers can, so we need to make some decisions.

Remember the HP webcam that followed you around? Unless you were black, in which case it would follow any white person instead! What's going to happen when we find that a driverless car is more likely to kill certain races? It's not being racist, it just can't distinguish some skin colours so well.


Re: I'm not sure I understand

I am inferring he is implying that while courts may excuse a driver trying to save their own life by making a snap decision that kills others, higher standards can be applied to software engineers and lawyers making considered decisions in a comfortable office.

I predict that self-driving cars will prioritise the safety of their occupants - they are the ones explicitly or implicitly volunteering to trust their safety to the vehicle, so would probably deserve a higher standard of care. Pedestrians already accept that they run a (small) risk of dying because of drunk/crazy/unconscious drivers.

There will of course be many incidents that enrich the lawyers - probably a price worth paying for the overall reduction in road deaths.


Re: I'm not sure I understand

"I predict that self-driving cars will prioritise the safety of their occupants"

I predict the opposite, on the basis that the occupants have accepted the five-hundred-page EULA and it will be easy to sneak in a clause that in the event of an unavoidable accident they are willing to sacrifice themselves for the good of Google/Microsoft/Uber/Ford/GM etc.


Re: I'm not sure I understand

I think I'd want a car programmed with self-preservation.

A car that considers itself and its occupants expendable isn't much of a selling point.


Re: I'm not sure I understand

I would hope the computer is not dumb enough to get into that situation in the first place.

I would also like to point out that if a human gets into that position, the first reaction is almost always self preservation


Re: I'm not sure I understand

"I think I'd want a car programmed with self-preservation."

Sounds good - until you try to scrap it...


Re: I'm not sure I understand

People are also ignoring the IT security risk of a society of 1 million autonomous vehicles. 1 million cars driven by people will probably kill a few thousand humans a year. 1 million functional autonomous cars should kill a smaller number - EXCEPT that if somebody finds a way to brick the 1 million autonomous cars (and you know lots of people will try), then you would see disruptions in emergency services and food delivery, and even long-term economic damage and a reduced tax base, causing more deaths in the short and long term than the million human drivers ever could.

So a million drivers will be sloppy, but society won't grind (literally) to a halt.


Re: I'm not sure I understand

Unless you personally are the "occasional screw-up" - then I imagine the decisions would be a little different. Everyone is always "it's best for the majority", but when it's your personal ass on the line it always gets a little different.


Re: I'm not sure I understand

The computer can only make decisions it is pre-programmed to make. The human could decide that swerving into the oncoming lane (currently empty) and going into the median is better. If we can come up with real artificial intelligence then completely self-driving cars could be a reality - but as long as we are just using normal old computers that have to have all decisions pre-programmed, I don't think they are viable. Are these cars going to be visually scanning the side of the roadways for something like an animal or child? Something that could be a problem in a few seconds and that needs to be evaluated as requiring a possible future action? I doubt it. Granted, most of the drivers on the road are so lousy at actually driving that they don't do that either - but you do have the ones who actively drive their cars and do look for situations like that.


Re: I'm not sure I understand

So you're saying the computer can distinguish between a child and an 80 YO better than a person? On what basis? Size? Then what does it do when a child and an older person are the same size? And what are you going to program the computer to do? If it's to mow down the 80 YO instead of the child, I think there might be some old people out there who would want to take issue with your decision. And who's going to make these decisions? The software companies, the owner, the government?


Re: I'm not sure I understand @AMBxx

The problem with all those scenarios is this: Assuming they would even happen.

Take the example from the article. The likely reason for a situation in which a car is about to plow into a vehicle ahead of it is driver inattention or following too closely for the conditions, speed, etc. A computer doesn't take a "quick sec" to gaze at the phone nestled against their crotch. It doesn't have a BAC of .04 that slows reaction time to require a greater following distance, or causes someone to do stupid shit like tailgate.

Also in the real world, cars going 35 mph stop pretty damn quick once the brake is applied, which is the only scenario in which there is likely to be a group of kids on the side of the road. Once again, the limitation is the meat sack in the driver's seat who was too busy digging for the last fry in the McDonald's bag.

"Unavoidable accident" is just a phrase people use to reduce their liability in court or make their conscience shut up. Kids don't materialize in the road; they came from a yard or park 10 seconds earlier that an attentive driver would have seen and made the appropriate behavior modifications when approaching, like slowing down. The same kind of down-the-street evaluation can be done by a computer, and might even tag squirrels and bunnies if the resolution is good enough.

And as far as mowing down people of the wrong skin color, well, I don't know where you drive, but it's pretty rare for anyone to be in the middle of a lane where I am. With a tiny sample size, it's easy to get skewed numbers.


Re: I'm not sure I understand

No, it's that if a human driving a vehicle mows down 15 people then that human will find themselves in a court where a jury of their peers will examine the specific circumstances and capabilities of that human, taking all factors into account and reaching a decision as to whether the action was justified - or at least excusable. And if not, then that human has consequences to face. Otherwise, the family (or families) of any victims at least may be satisfied that justice has been applied (it is not uncommon for families in such cases to feel compassion and sympathy along with their grief).

But if a vehicle control system makes that decision then it is a simple question of whether the vehicle followed its programming or there was a defect in that programming.

If the program is shown to have a defect then the manufacturer is liable not just to the families involved but will likely face instant bankruptcy as their product becomes poison. Or at least face a massive recall exercise.

If the vehicle is demonstrated not to have a "defect" in the programming - that is, the program specification was followed precisely - then that programming is still responsible for having chosen the deaths of 15 people over the 1 life of the passenger (or the death of the 1 person over the 15). The argument then will be that the decision tree, formulated years in advance and in splendid isolation from the circumstances on the day in question, was not sufficiently adaptable to those circumstances and was thus inherently and dangerously flawed.

So even if there was no defect, the program was defective.

Either the families of the 15 people or the family of the 1 person will be lining up to claim massive damages as a result of the decision (or error) that concluded that the life of their loved one was the one - on balance - worth sacrificing.

Aha - comes the cry from the permanently not-pessimistic - but what if the program can be demonstrated not to have performed any such "weighing of the balance" at all !!?! Eh? Ha! Then the program can't be blamed for making a decision that it didn't actually take.

OK - so there was no "decision" to mow down 15 people; they were simply not a factor in the vehicle's action to save the occupant. In which case the open-and-shut argument is simply that such a system is not safe to permit on roads where such decisions are necessarily required.

It's not that either outcome is "OK".

It's that the legal questions arising from the scenario where a vehicle is "responsible" are just too complex and intractable, and once this is realised the car companies will quickly back-pedal from the idea, except as a development vehicle [sic] for driver-assistance technologies (as opposed to replacing the driver entirely).


Re: I'm not sure I understand

I'd definitely want a car programmed with self-preservation. However I'd want everyone else's car to be programmed to minimise the number of lives lost. It's possible a compromise may have to be made.


Re: I'm not sure I understand

" Everyone is always "it's best for the majority", but when it's your personal ass on the line it always gets a little different."

Which is why I much prefer selfish assholes like me who simply laugh at such fluff - none of us are actually better, but we're at least honest about it...


Re: I'm not sure I understand @AMBxx

"Unavoidable accident" is just a phrase

Hahahahaha.... wait, you're serious! Hold on, let me laugh harder: HAHAHAHAHAHA....

...I take it you never heard of pedestrian idiots who take sharp 90 degree turns into a crossing never slowing down to check whether you will / can avoid them, having never exhibited any intent to cross beforehand? Or just people suddenly emerging from between cars where they were equally undetectable to LIDARs and human eyes before...? Or ever heard of things like black ice...?

People who think they cannot possibly ever get in an accident simply because they're "cautious" are just as big of an idiot as those who think they cannot possibly ever get in an accident because "they can handle anything".


Re: Cars with self-preservation

Didn't you ever see that episode of Knight Rider with KITT's Evil Twin? That was the result of programming the car with self-preservation.


Re: I'm not sure I understand @AMBxx

Absolutely avoidable. And in court, the phrase unavoidable accident would have been used by the pedestrian or their next of kin as a way to shift liability to the driver.

Black ice is created under specific circumstances that can easily be discerned by checking the weather report or a couple of weather sensors. Knowing that, you slow down, increase following distance, and be well-versed in steering into the skid. Of course it might not always work, but you can decrease the chances of being caught out and crashing if you are prepared.

And yes, even the most attentive driver is going to lapse or otherwise take the wrong moment to check their mirrors and find bad things coming at speed when they get back to the road. But it's pretty telling when most insurance statistics show there are repeat offenders, be it due to excessive speeding, repeated instances of inattention, or just bad at driving. Most insurers (in the US) don't even ding you for the first accident anymore if it's been a long time since your last one. And since revoking a license or being uninsurable doesn't stop people from driving, it's safe to say that the best solution is to remove the mouth-breathing meat bag from behind the wheel. Self-driving cars are one way to do this.


Re: I'm not sure I understand @RealityisntReal

The computer can only make decisions it is pre-programmed to make. The human could decide that swerving into the oncoming lane (currently empty) and going into the median is better. If we can come up with real artificial intelligence then completely self-driving cars could be a reality - but as long as we are just using normal old computers that have to have all decisions pre-programmed then I don't think they are viable.

You seem to be arguing from the viewpoint that autonomous driving software needs to have every detailed contingency it may encounter explicitly hard-coded. This is patently absurd and a million miles away from how such systems are in fact programmed. I have a mate who has worked in the games industry for many years, who was recently recruited by an autonomous vehicle company. His speciality was developing AI for realistic interactions between on-screen agents. I think this tells you something about how driverless vehicle software is being developed.

And, btw, what is "real" artificial intelligence? It's artificial, innit? (I suspect you mean "like human intelligence", and no, I don't see that coming anytime soon.)


Dead wrong

I think Hart is wrong about this. Partly it's because he's bringing a public aviation mindset to the debate, which will actually mislead you, because there are key differences between aviation and automobiles. Fundamentally, we need 1-in-a-billion type reliability for planes because when things go wrong, there's a long way to fall and many people to die. With cars, an engine failure just means you coast to a stop, not that you risk killing 400 people. Even a series of cockups that might doom a plane may not kill anyone in a car accident.

So although it's right to say that safety is paramount, we're not actually talking about working to the standards of civil aviation.

And so far, all the (preliminary) evidence is that autopiloted cars make fewer mistakes and have fewer accidents than human-driven ones. Given what you see on roads infested by morons, that's hardly a surprise, and the disparity is only going to get greater. It'll be hard for any government to say "Robot cars will save 10,000 lives a year, but we won't legislate for them".

Yes, people enjoy driving. I do, myself. But what I enjoy doesn't trump the lives of thousands of others.


Re: Dead wrong

Even a series of cockups that might doom a plane may not kill anyone in a car accident.

In the case of engine failure usually not. But the key question is how good self driving cars are at situational awareness, hazard recognition, and how good their choices are. With aircraft you usually find that good design, good operators, and multiple redundancy mean you need a chain of events to cause a fatal accident. With a car, a single erroneous judgement can be enough to cause a death, automated or not.

Automation on the ground is much easier if you have constraints on movements (like rails). My guess is that the future of automated cars lies more in stopping the driver doing certain things, rather than doing everything for him. If you can create urban trackways (Minority Report style) without pedestrians, with common speeds and so on, then you could automate that fairly easily, but I don't see that happening any time soon for simple reasons of cost.


Re: Dead wrong

"If you can create urban trackways (Minority Report style) withour pedestrians, with common speeds et al, then you could automate that fairly easily, but I don't see that happening anytime soon for simple reasons of cost."

Yes, that's what I was thinking too. There's too much traffic and just too many variables to expect a current-tech JohnnyCab to work safely or reliably. It would require a complete change from manual to automatic. Sweden switched which side of the road it drives on from left to right many years ago. With the population increase and massive growth in car ownership, not to mention many more buses and lorries, would it consider an overnight switch now? I suspect not.

It may be that in certain cities or very large towns there is a case for banning traffic from the city/town centre and having JohnnyCabs available. Most city/town centres have pedestrian-only zones in the main shopping areas these days; these could be expanded, with only JohnnyCabs allowed inside. Maybe even surround those areas with another zone with a 15 or 20mph speed limit where automatic and manual traffic can mix. We have guided bus lanes too, so that's another option to give JohnnyCabs a larger roaming range, possibly even between otherwise separate pedestrian/JohnnyCab areas.

Note I keep using the term JohnnyCab. I think we are still many, many years away from privately owned autonomous car trips to the Alps. I suspect if we do eventually move to autonomous cars on any significant scale, only the wealthy or essential users will have or own their own cars. (essential may include rural users)

Anonymous Coward

"I can give you an example I've seen mentioned in several places. My automated car is confronted by an 80,000 pound truck in my lane. Now the car has to decide whether to run into this truck and kill me, the driver, or to go up on the sidewalk and kill 15 pedestrians. That would [have to] be put into the system,” Hart said.

That's a false dichotomy.

The correct answer is that an AI (or any human driver worthy of keeping her licence) faced with driving in a crowded urban environment will reduce the speed so that the car can come to a stop safely within the available space if a previously invisible, but reasonably foreseeable obstacle appears ahead.
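That "stop within the space you can see to be clear" rule reduces to simple kinematics: given the distance the sensors can currently see to be clear, there is a maximum speed from which the car can still stop inside it. A minimal sketch (the deceleration and reaction-time figures are illustrative assumptions, not any real vehicle's spec):

```python
import math

def max_safe_speed(clear_distance_m, decel=6.0, reaction_s=0.2):
    """Largest speed (m/s) from which the car can stop within the distance
    it can currently see to be clear.
    Solves v*t + v^2 / (2a) = D for v (reaction distance + braking distance).
    """
    a, t, D = decel, reaction_s, clear_distance_m
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * D)

def stopping_distance(v, decel=6.0, reaction_s=0.2):
    """Total distance covered before standstill from speed v (m/s)."""
    return v * reaction_s + v ** 2 / (2 * decel)
```

With 50 m of clear sight distance and a firm 6 m/s² of braking, that caps the car at roughly 23 m/s (about 84 km/h); halve the visible distance and the safe speed drops accordingly, which is exactly why the "truck vs. pedestrians" dilemma shouldn't arise for a vehicle obeying the rule.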


The question as stated always seems to assume that those are the ONLY two options -- hit the truck or hit pedestrians. In the real world, there are trees and lampposts bordering sidewalks that one might carom off to avoid both truck and pedestrians, there is the lane on the opposite side of the street that the truck SHOULD have been in, etc., etc. Reducing the argument to a binary problem, then claiming that binary thinking won't solve it seems rather simplistic to me.


I'd buy one

I wouldn't be an early adopter, as I'd want to see some data first, but while I enjoy driving sometimes, the times when I don't easily outweigh the times that I do, so I'd make the trade. Being able to do something else while going from A to B would be nice.

However, I think Tim 11 is being wildly optimistic if he thinks there's even a 0.000000001% chance that driverless cars are ALREADY safer than human drivers. He's really buying into the Google propaganda machine, that's for sure! They probably are safer than humans in the limited contexts in which Google drives. The roads in California are way better than in much of the US, let alone the rest of the world. Let's see how it does on roads where the lane markings are gone, on gravel or dirt roads, or in poorly marked construction areas (I've driven in the wrong place and had to back up before; if it confuses me, there's no way software will get that right every time). How about roads with several inches of fresh snow, no curbs, only the outlines of the ditches on either side and the occasional sign to give you a clue where the road is?

They will come in stages, and first be approved on interstates (freeways) in the US. Those are well controlled with defined entrances and exits, generally well maintained, have reflective markers on the sides of the road, and traffic is all moving in the same direction. It will prove itself there and progressively be approved in different areas. It will be at least a decade from the interstate approval before they get general nationwide approval to drive anywhere a human driver is allowed to drive.


Teleporting trucks

"My automated car is confronted by an 80,000 pound truck in my lane"

Trucks do not just appear out of thin air. The (single) rule is:

Always drive in a manner (allowing for the condition of the road, the vehicle and the driver) which allows you to stop the vehicle on your own side of the road in the distance you can see to be clear.

What would you do if you were a human in such a situation? You'd brake hard and hit the lorry as gently as you could. If you think that swerving, either into pedestrians, or into oncoming traffic, is even an option, I hope you won't be programming any car systems!


Re: Teleporting trucks

Trucks do not just appear out of thin air.

No, but they do back out of partially concealed driveways and alleys at unexpected times. Which can be functionally the same as appearing out of thin air.

JC_

Re: Teleporting trucks

That automated truck will be able to communicate with the automated vehicles around it, to let them know what it's planning on doing and take away the surprise.
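Real vehicle-to-vehicle standards already exist for this kind of announcement (SAE J2735, for instance, defines a Basic Safety Message broadcast several times a second); the sketch below is a deliberately simplified toy of the idea, not that format:

```python
from dataclasses import dataclass

@dataclass
class IntentMessage:
    """Toy stand-in for a V2V broadcast announcing a planned manoeuvre.
    Field names and units are illustrative assumptions only."""
    vehicle_id: str
    position: tuple        # (lat, lon) in degrees
    speed_mps: float
    heading_deg: float
    manoeuvre: str         # e.g. "reversing_from_driveway"
    starts_in_s: float     # warning lead time before the move begins

# The truck announces the reverse before it moves, so nearby cars can
# slow down instead of reacting to a "teleporting" obstacle.
warning = IntentMessage("truck-42", (51.5074, -0.1278), 0.0, 270.0,
                        "reversing_from_driveway", 2.0)
```

Even a couple of seconds of lead time turns a surprise into an ordinary braking event, which is the whole point of the broadcast.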


Re: Teleporting trucks

Autonomous vehicles will have to coexist on the roads with non-autonomous vehicles for many years. I doubt many Reg readers will live to see the day when human driven vehicles are banned from all public roads.

Anonymous Coward

Re: Teleporting trucks

Yep. Making everything autonomous would help to solve this, or at least have the meat-controlled vehicles fitted with location beacons and have the autonomous and non-autonomous vehicles communicating location, velocity, direction etc. That would remove, or at least reduce, the risk of vehicles 'suddenly' appearing when not expected.


Biting the hand that feeds IT © 1998–2018