Re: Teleporting trucks
How do you know there is oncoming traffic in the other lane? If you are a person you can see far enough into it to make a decision that the lane is currently empty - how is a computer going to know that?
Fully autonomous cars may never reach public roads, according to the chairman of the US National Transportation Safety Board. Speaking in an interview with MIT Technology Review, Christopher Hart said: “I'm not confident that we will ever reach that point. I don’t see the ideal of complete automation coming any time soon.” …
If that is the solution then you do realize that you have given up any pretense to privacy and anonymity you currently have when traveling in public? Any such communication system will by definition be totally controlled by the government, and you can place a sure bet that they will surround it with a database that tracks every data point they can think of. You better not be planning on making a quick trip to your mistress - because that database will be available to everyone, including divorce lawyers.
So who's going to pay to install all that "non autonomous vehicles communicating with regard to location, velocity, direction etc." hardware into existing vehicles? At least here in the US we still have cars that are over 20 years old on the road. Is the government going to pay to retrofit that stuff into existing vehicles? I sort of doubt it.
Sorry but "partially concealed driveways" just means that the visibility is too low for the road speed.
A sudden rockfall, a bridge collapsing in front of you M20 style, or a 400kg hay bale bursting through a fence because some idiots wondered if it would roll down the hill, yes that's an unavoidable problem, for human and robot alike, although the latter will always be able to react quicker.
Driveways, however, are not camouflaged. Even if the rare corner is completely blind, there is a speed at which it can be negotiated with near-zero lethality. Remember, metal is only metal: driving into the back, side or even front of a truck that has mysteriously teleported into your field of view at 20mph is probably not going to kill anyone. In fact, most pedestrians could survive a hit at that speed.
"If you are a person you can see far enough into it to make a decision that the lane is currently empty "
The number of places you can safely swerve is VERY limited: most drivers could only safely perform that manoeuvre if they were already preparing to safely overtake the vehicle in front.
Swerving to avoid hitting a pedestrian, cyclist or horse is probably acceptable. But you have to remember that most modern cars will keep you safe in a front-on impact at considerable speed. Even if your swerve is not endangering other road users, you may pose a greater risk to yourself by virtue of the fact that you are more likely to lose control of the car.
I swerved a 7.5 tonne horsebox to avoid hitting a lorry that had attempted to cross a main road in front of me and had stalled. This was firstly a defensive driving failure. I had seen the lorry stop at the junction to the road I was travelling down, and I assumed it would stay stopped. Then, when I saw it moving, I assumed it would safely cross in front of me. When it stalled, I braked as hard as I could given that there were valuable horses on board, and was at about 20mph when I would have crashed; I swerved round at low speed, mounting the verge, and we were all safe. A self-driving car would never have made this mistake, as it would have assumed (as I now do) that a vehicle on the side of the road may pull in front of you at any time. And without horses on board, I could easily have stopped the vehicle in the distance I had. And even if the distance had been a lot shorter, without horses on board, I wouldn't have swerved either: I would have just driven it into the side of the lorry at 20mph.
@John H Woods
You obviously haven't seen those dash cam Youtube clips of cars/trucks careering suddenly into the driver's lane, maybe having been hit by another truck
Yes, like how your phone seamlessly and reliably switches between wifi APs to get the best signal, or like how your computer always has no problem connecting to your printer. Only with much, much tighter timeframes.
"Now the car has to decide whether to run into this truck and kill me, the driver, or to go up on the sidewalk and kill 15 pedestrians"
A nice easy boolean flag for moral decisions, "driver"-definable: SaveDriversLifeHasTopPriority
A "driver" of the "needs of the many outweigh those of the few" ethos will set the flag to False
A "driver" of the "I'm the most important thing in the universe" mindset will set it to True
Code could then make "moral decisions" based on the flag.
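For illustration only, here is a toy sketch of that (satirical) flag idea. Every name here is hypothetical, not any vendor's real API:

```python
# Satirical toy sketch of a flag-driven "moral decision" -- all names invented.

SAVE_DRIVERS_LIFE_HAS_TOP_PRIORITY = False  # the contentious boolean flag


def choose_action(occupants: int, pedestrians_at_risk: int) -> str:
    """Decide between staying the course (risking the occupants) or
    swerving (risking the pedestrians), based on the flag."""
    if SAVE_DRIVERS_LIFE_HAS_TOP_PRIORITY:
        return "swerve"  # occupants first, whatever the cost to bystanders
    # "Needs of the many" mode: minimise total expected casualties.
    return "stay" if occupants < pedestrians_at_risk else "swerve"


print(choose_action(occupants=1, pedestrians_at_risk=15))  # -> stay
```

Of course, the whole point of the thread is that reducing ethics to one boolean is absurd; the sketch just makes the absurdity concrete.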
Though, in all seriousness, for something as complex as driving you would expect some AI-style code to be present, and as such it is often quite hard to know why AI software "makes a given choice".
And who sets the flag ?
The douchebag driver who always thinks he's the most important?
Or will it be set by the government according to whom the car is sold to ?
That way all those politicians, CEOs, lawyers and other people who belong on the B-Ark will surely have a better chance of survival...
I don't know why we worry about these moral quandaries. No one faces them while driving because of how slow human reaction times are.
The car could conceivably face them, but it should travel at a speed that allows it to always have a safe out. If the safety margin diminishes because it notices pedestrians along the part of the road it calculates as its 'safe out' - the spot it would swerve to if the oncoming human-driven truck unexpectedly turned in front of it - it should slow down until it is clear of them. Hopefully this will be compensated for by allowing automated cars to travel as fast as they safely can when no human-driven cars are present (a nice 120 mph cruise on the interstates drafting inches behind a long line of other automated cars would make short work of long trips).
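The "always keep a safe out" rule above could be sketched as a trivial speed policy. This is purely illustrative; the thresholds and names are assumptions, not a real vehicle controller:

```python
# Toy sketch of the "always keep a safe out" speed rule -- illustrative only.


def target_speed(current_mph: float, safe_out_clear: bool,
                 platoon_only_traffic: bool) -> float:
    """Pick a cruise speed: crawl while the escape path is blocked
    (e.g. pedestrians on the verge), open up when only automated
    cars are around."""
    if not safe_out_clear:
        return min(current_mph, 20.0)  # slow until the out is clear again
    if platoon_only_traffic:
        return 120.0                   # drafting in an all-automated platoon
    return min(current_mph, 70.0)      # ordinary mixed-traffic limit


print(target_speed(70.0, safe_out_clear=False, platoon_only_traffic=False))  # -> 20.0
```

The interesting engineering question is hidden in the `safe_out_clear` input: continuously re-computing whether an escape path exists is the hard part, not the speed arithmetic.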
"(a nice 120 mph cruise on the interstates drafting inches behind a long line of other automated cars would make short work of long trips)"
Car trains (platooning) have been mooted as a solution to motorway congestion since at least the 1970s. The problems start with needing a cross-manufacturer standard for the gubbins to make it work, and legislation to allow it to work on the motorways, either by defining a special lane or by having the lead and trailing vehicles fitted with some sort of indicator to show so that others know what is going on.
A dedicated lane is not likely here in the UK. We are busily converting 3-lane + hard shoulder (emergency lane) motorways into 4-lane "smart" motorways with no hard shoulder/emergency lane, due to the amount of traffic they have to cope with already.
The biggest problem with it, IMO, is the car being able to make sure the driver is ready to take over in plenty of time or to be able to autonomously take emergency action in case of accidents, which all comes back to the current Tesla "autopilot" problems and the whole fully autonomous car issue.
"a nice 120 mph cruise on the interstates drafting inches behind a long line of other automated cars would make short work of long trips"
...until the lead car suffers a spontaneous blowout. Moments later, you'll have a massive 20-car pileup on the motorway and probably more than a few fatalities. These sci-fi scenarios never take Murphy into account.
I'll be getting a robot car the minute it can drive me home from the pub after 10 pints of beer. Until then they're useless to me.
My grandfather had a horse & cart that he used to do that on. So, progress, only taken about 100 years to get to the same stage...
Yet people HAVE been prosecuted for being incapable / drunk etc with horse and cart.
Planes have good auto pilots. Automated take off and landing was demonstrated more than 40 years ago I think. Pilots can't fly if they are drunk. A plane is in many ways less difficult than a car.
Prove the technology with trains, then planes, then ships. Then trams. Cars and trucks should be last, not first. Exactly what is the motivation?
"Exactly what is the motivation?"
Cars are killing an estimated 1.3 million people per year worldwide. There's your motivation.
"Prove the technology with trains, then planes, then ships. Then trams. Cars and trucks should be last, not first."
You've got the priorities backwards. Planes, trains and ships are already incredibly safe. You're solving a problem that barely exists if you start there. If you want to save lives, start where you see a huge death-toll.
According to an old family story there was a coal man that had that arrangement in East Ham.
It worked well until one Christmas the lads down the pub decided it was unfair on the horse and clubbed together to buy it a bucket or two of beer. The following day the bloke found himself in the dock on a charge of "drunk in charge of a drunken horse".
Can you imagine what would have happened if the FAA guy had been around at the time the horse and cart was invented?
Remember the man with the Red Flag that had to walk in front of early cars?
That's his mindset.
Trains have had automatic control for years. The London Docklands Light Railway is driverless. Several other lines are automated but retain a driver.
If full ATC were implemented then we could dispense with the driver, but in my mind we do need a properly qualified member of train crew on board. Part of the current dispute with Southern/GTR is over changing the role of guards. The unions won't sit by and let driving roles be eliminated quietly. As one union leader has said before, 'you ain't seen nothing yet'.
To modify a pro-gun saying - cars don't kill anyone, it's the driver behind the wheel. That death statistic is made possible because governments worldwide will give pretty much any moron a driver's license, and then not really hold them accountable for how well they drive. Studies have shown that idiots who text while driving are every bit as dangerous as drunk drivers - but how many governments have set the penalties for texting and driving to be equivalent to drunk driving? To my knowledge, none. When the governments who are supposed to be monitoring the behavior of licensed drivers won't do their jobs, why should those who actually drive their cars give up their self-control to a computer? A computer that hasn't been proven to be any better than a human driver except under limited circumstances?
"My grandfather had a horse & cart that he used to do that on."
Back in the C18th a several times ggfather was killed falling from his horse. The same diary that records that also records a clergyman killed falling off his horse when drunk. The horse might be an autonomous transportation unit but it isn't safe.
Why do you think my grandfather used a horse and CART - a lot harder to fall off that!
"... several times ggfather was killed falling from his horse ..."
What finally did him in? A stake through the heart?
Still not as hard as you think. No seat belts, for example, so you can fall over in a drunken stupor. Also, the suspension is usually nonexistent, so one bad rock or pit and you can be thrown off.
When they flat out tell you that when the choice comes between your|your families death or that of a number of pedestrians, you're a dead man [not] driving.
My guess is that in that scenario, most would think "I'd be able to miss them and we'd all be OK" or "Me or them..." would win out and the auto auto would remain at the dealers.
But you can easily modify the Trolley Problem to make it personal. Suppose your automated car loses its brakes on a steep downhill. No other way to divert (downhill so there are guardrails) except into a driveway...where your spouse is standing. Anything else and you crash at the bottom of the hill. So who dies? You or the spouse?
All a self driving car has to do is be reasonably better than half the drivers out there - which, to be quite honest, shouldn't be that hard to do based on the people I see each morning. At that level we'd be FAR safer than letting anyone else drive themselves.
I can pretty much guarantee you that shortly after driverless cars hit the road and they are found to be in fewer accidents than meatbags you'll see groups such as MADD (mothers against drunk driving) teaming up with companies that own a fleet of these things doing everything they can to make it much harder to get a regular drivers license. Eventually they'll do what they can to have laws passed banning meatbags being in control of the vehicles entirely.
Driverless cars are not only coming, but you can be sure that not too long after they appear regular cars will be banned from public roads. The simple reason is that there's too much money at stake for any other outcome.
To add to your point, once autonomous cars exist the usual argument for the defence - if you take my client's licence away he'll lose his job and his family will be on the street - won't be true any more. One can imagine a future in which points on your licence mean you won't be able to engage manual while on the road, and drunk or dangerous driving will entail removal of driving licence altogether.
You can always tell the Porsche drivers with a lot of points because they drive like a Jehovah's Witness in a Nissan Micra. I imagine manual cars will become like private pilots; special license and tracker box. It'll become a status symbol.
"To add to your point, once autonomous cars exist the usual argument for the defence - if you take my client's licence away he'll lose his job and his family will be on the street - won't be true any more."
What if the driver is a trucker?
...according to the chairman of the US National Transportation Safety Board.
"Heavier-than-air flying machines are impossible." -- Lord Kelvin, President, Royal Society, 1895
And he was quite bright, they say.
I was just thinking "we may need almost 8 computers to service the whole world" and "no computer will ever need more than 8kb of RAM".
To the best of my knowledge, there is not one passenger train service in the world, at city street level or cross-country, that does not still require a driver. And all such a system would have to control is the speed of the vehicle, in a much more predictable environment than public roads.
There are driver-less metro trains such as the Docklands Light Railway in London but they are always elevated or in some other way physically secured to avoid the risk of unexpected obstacles like pedestrians, animals and other vehicles. In addition, most driver-less trains are centrally controlled rather than each train being independently autonomous.
If we don't have adaptable & safe enough properly autonomous tech for trains, we sure don't have it for cars, trucks & buses except at low speed in tightly restricted environments such as the shuttle pods at some airport car parks.
That leaves systems that are more akin to lane-keeping, collision avoiding cruise control. And the down side of these is that the driver still has to maintain constant vigilance. It's been shown that refocusing on controlling the car while performing other activity such as reading, phoning etc takes the average human on the order of five seconds. You'd be in an impact before you were able to decide what to do and make a maneuver on any motorway or free-way in Europe or North America. (Except the M25 on a Friday afternoon maybe!)
I'm sure the tech will come one day but I don't see it any time soon.
"If we don't have adaptable & safe enough properly autonomous tech for trains, we sure don't have it for cars, trucks & buses except at low speed in tightly restricted environments such as the shuttle pods at some airport car parks."
Pretty sure the tech is there, but having the tech is not the same thing as having the political and financial will to deploy it. You may have noticed that TfL staff are a tad... touchy when it comes to progress that might change the way they work, or (more understandably) if they work.
The hypothetical situation described in the article seems a bit of a stretch, and given the plethora of driver assistance devices being installed in modern cars (adaptive cruise control, lane keep, etc) it doesn't seem too far fetched to assume that autonomous vehicles will be better at dealing with danger than human drivers.
However, there is another significant impediment to their widespread adoption: liability. Who is at fault in a collision? The cars themselves will invariably have a great deal of logging data to assess what was done incorrectly, and in most cases this will likely provide evidence that the other (human) driver was to blame, but in the few cases the software proves to be the culprit who will be held responsible? Will it be the owner of the autonomous car or the manufacturer?
Some people just like to drive. Some people don't trust the automation so they're going to want to drive. [And] there’s no software designer in the world that's ever going to be the test subject when they let these loose for real
A typewriter would make a better driver than most human drivers around here, psychos on wheels the lot of 'em!
I'm looking forward to it.
If the car at least ensures that part of my commute I can do some other stuff, I will happily do the city driving. The software won't be perfect, accidents will happen, but less than what some idiots cause now.
I predict insurers will charge a premium to driving enjoyers if they drive by hand more than X% of the time
"The software won't be perfect" Very unlikely, I agree
"accidents will happen" A natural consequence of the above
"but less than what some idiots cause now." Evidence?
So let's presume that this really is a binary decision: stay the course and die, or mash 15 pedestrians and live.
Someone-- self driving car, or person-- is going to decide. Making a claim that this sort of decision makes machine control infeasible is preposterous, since given the scenario a decision will be made.
The only question is who is responsible for deciding. Apparently Secretary Hart believes only meatbags should be granted divine authority to decide... or perhaps that a soulless machine shouldn't be entrusted with life and death decisions (which would imply the right honorable Hart never flies, doesn't drive a car with an ECU or airbag, doesn't have a pacemaker, wears a tin foil hat to ward off soulless falling planes that made a decision to crash on his right honorable head instead of crushing the legendary bus full of orphans and nuns, ...).
None of your examples equate to a driverless car. What kind of decision is a pacemaker making? The heartbeat slowed this much so I'll help it. Not much logic required there - and not in any of your other examples. How are you going to code a driverless car to account for that child standing still on the side of the road? The one who could at any instant decide to run across that same road? Will that autonomous car even see that there is a child standing there? What about that ball that rolls across the street - the one a human driver could infer from that there might be a child running behind? What's the current poor excuse for an AI going to decide? Slam on the brakes and risk a rear-ender? Oh, wait. The human driver sees the child has stopped at the side of the road and is safe and doesn't need such drastic action. Let me know when they invent a real AI. Then they can stick one in a car and we'll go from there.
"The only question is who is responsible for deciding. Apparently Secretary Hart believes only meatbags should be granted divine authority to decide"
How does the machine decision get made? Ultimately, by a programmer or someone directing the programmer. So how do you characterise that programmer or other someone? Or maybe the programmer is directed by a committee so that responsibility for any decision, however bad, doesn't actually fall on any particular person. A committee decision - what could possibly go wrong?
Google autonomous cars exist, and currently are tootling around with a human ready to take over when the AI gives up.
The obvious way to compare human vs AI safety is to have the same type of car be driven around in the same areas ONLY under human control and compare the safety record of each.
There is no value in comparing the accident record of perfectly-maintained autonomous vehicles operating on near-empty brightly-lit roads at 25MPH with the accident record of meatbags operating much larger vehicles at higher speeds in reduced visibility/traction.
"Google autonomous cars exist, and currently are tootling around with a human ready to take over when the AI gives up."
Which, when you think about it, tells you a good deal about the confidence currently placed in the ability of the AI. When the situation is reversed we can maybe start thinking that autonomous cars might be a good idea.
So why not have the Google cars take a few runs up and down Donner Pass and back in the winter? Donner Pass isn't too far away and is notoriously difficult during a blizzard.
Given the truck scenario, which driver would make the worse decision: the autonomous car, or the asshat in the BMW M3 who cut me off twice in the space of two miles this morning?
How about this
I don't get it. The objections to car autonomy are always weird edge cases. When on deity's earth do you choose between an artic and 15 pedestrians... AFAIK the vast majority of accidents are down to lapses of concentration and/or over-confidence on the driver's part.
Computer cars suffer neither of these. As others have pointed out, the 'puters only need to be better than the average meatbag for it to reduce deaths. If the auto car gets confused, it'll just slow down and maybe stop, and so will all the other auto cars behind it. No more pile-ups.
Oh, and no road rage deaths/injuries. Ever.
"The objections to car autonomy are always weird edge cases."
Accidents, in case you haven't noticed, are not the norm. They are the weird edge cases.
They're also worst-case scenarios. Particularly no-win scenarios (Trolley Problems or Cold Equations) where you simply can't have a Happy Ending. It's a moral quandary so difficult WE haven't developed a universal solution to the problem of "Not everyone can be saved - who dies?" Yet an automated car can conceivably be put into such a problem, which raises even more moral problems. How can we trust to a computer what we can't reliably trust to ourselves?
I suppose we could evolve beyond the need for cars (with "evolve" and "beyond" being suitably elastic in meaning--for example, we could go extinct) before we ever achieve confidence enough in automation to allow a car to do the driving for us.
But I still think it's an especially bold statement.
Biting the hand that feeds IT © 1998–2018