Two driverless cars stuffed with passengers are ABOUT TO CRASH - who should take the hit?

Data analysts don’t need philosophers to help them with ethical decisions - the “science” can figure that out, a top boffin said this week. Asked about the ethics of big data, the head of Imperial College’s new Big Data dept said: “We treat [ethical issues] not philosophically, but scientifically”. It’s a startling assertion …

  1. Anonymous Coward
    Joke

    "ethicists should be involved in such decisions"...

    Really?

    Two cars... About to collide... You know what?

    Let's call an ethicist... he'll know what to do!

    1. swschrad

      Ethernet already addressed that.

      conflict? "shut down the road, start timers, and one gets delayed."

      don't have room in the ROM? engage brakes, disable engine, "recalculating..."

      this is not rocket science.
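A minimal Python sketch of the Ethernet analogy being made here - binary exponential backoff as a road arbiter (all names and numbers invented, not any real system):

```python
import random

def arbitrate(max_attempts=10, seed=None):
    # Binary exponential backoff, as in CSMA/CD: on each contested round,
    # every contender draws a random delay from an exponentially growing
    # window; whichever car draws the shorter delay gets the road first.
    rng = random.Random(seed)
    for attempt in range(1, max_attempts + 1):
        window = 2 ** min(attempt, 10)   # contention window doubles, capped
        delay_a = rng.randrange(window)  # slots car A would wait
        delay_b = rng.randrange(window)  # slots car B would wait
        if delay_a != delay_b:           # distinct delays: one car yields
            return ("A", "B") if delay_a < delay_b else ("B", "A")
    return None                          # persistent tie: both stay stopped

order = arbitrate(seed=0)  # (goes_first, yields) or None
```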

      1. Anonymous Coward
        Anonymous Coward

        Re: Ethernet already addressed that.

        You do realize what the "CD" in CSMA/CD stands for, right?

        I think you missed the point of the exercise, and the meaning of the word "inevitable". There is no "the collision should be avoided in all circumstances" answer, because it is impossible for all collisions of autonomously driven vehicles to be avoided unless you restrict their speeds far below the speeds we travel at today.

        One car may slip on a patch of ice, or a blowout or other malfunction could cause a car in the opposite lane to swerve over in a split second, with no possibility for the other car to avoid the impact save deliberately crashing itself - possibly into a car in the next lane over.

  2. Marcelo Rodrigues

    Ethics vary from one culture to another

    But, I believe, we are reasonably safe taking the "least harm" road.

    Take the two aforementioned cars: what should be done?

    The decision should be the one that would do the minimum possible amount of harm. I believe we shouldn't base this on gender, age, job... a person is a person.

    1) Count the number of persons inside each car.

    2) Calculate the action that would pose the least risk to the greatest number of humans.

    3) Execute.

    It could be that one car would sacrifice itself. Or a head-on collision could be better - it depends upon speed, road conditions and car model.

    The more I think, the more I like Asimov's three laws. Easy, consistent and quite straightforward. Start factoring in gender, age and whatnot... no one would ever reach any conclusion - and all of them would be wrong to someone.
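The three steps above could be sketched like this in Python (a toy, with invented risk numbers - real risk prediction is the hard part being waved away):

```python
def least_harm_action(actions):
    # Marcelo's rule made concrete: every candidate action carries a
    # predicted fatality risk for each person involved, in either car;
    # pick the action whose expected number of deaths is lowest.
    # A person is a person - no weighting by gender, age or job.
    return min(actions, key=lambda a: sum(a["risks"]))

# Car A (2 aboard) meets car B (1 aboard); one risk figure per person.
actions = [
    {"name": "brake",  "risks": [0.30, 0.30, 0.10]},  # all 3 people exposed
    {"name": "swerve", "risks": [0.90, 0.90]},        # only car A's 2 at risk
    {"name": "hold",   "risks": [0.50, 0.50, 0.50]},
]
best = least_harm_action(actions)  # "brake": expected 0.7 deaths
```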

    1. LucreLout
      Childcatcher

      Re: Ethics vary from one culture to another

      I believe we shouldn't base this on gender, age, job... a person is a person.

      I'm sure you knew someone would disagree with this, so, allow me....

      An 80 year old can't be considered worth the same in human terms as a child or 20-something. One has had most of their life, the other has barely started. The minimum possible amount of harm would have to reflect the number of years of life lost as well as the number of people.

      It could be one car would sacrifice himself.

      Mine won't be making that choice. If/when these hit the road, I'll be reprogramming mine to maximise my child's survival chances, and I won't be alone in this. If the other guy's car decides to go off the cliff, so be it, but I'd never allow something I own (or can exert control over) to wipe out my family because it suited someone else's ethical view. I think it would be such a common position, in fact, that both vehicles would be programmed from the factory to have the accident - as would happen with humans driving anyway.

      1. Jonathan Richards 1
        Stop

        Re: Ethics vary from one culture to another

        Ditto: I'm sure you knew someone would disagree with this, so, allow me also....

        > An 80 year old can't be considered worth the same in human terms as a child or 20 something.

        Agreed. A perfectly valid metric for our Robot Car Programming Overlord to employ would be the immediate economic value of the vehicle contents. On this scale a child and an 80 year old would weigh less than an employed adult. And then he's got to factor in the likely survival rates: front seat passengers more likely to die than back seat? Of course, badly injured survivors are more economically draining than fatalities (Bouncing Betty refers), so a particularly calculating RCPO might put in a branch where the car chooses to drive over the cliff: maximising the greater good, don'ch'a know.

        I really don't see how any of this could be ethically beta tested. The very idea that the RCPO would be held negligent at law in the event of injuries or fatalities that offend the human sense of fairness will stop any such thing being deployed in the near future.

        I would like to point out that we already have a transport system in which excursions off the route—and collisions—are very rare: railways.

      2. proud2bgrumpy

        Re: Ethics vary from one culture to another

        What about a car with a single lonely 80 year old heading towards a stolen car with 4x 20 year olds?

        What about a car full of 80 year olds heading towards a car with a 20 year old who has a terminal illness?

        Honestly, the list of variables is almost endless, and processing an infinite list will simply take too long to be of any use - while anything less than a very, very long list will be pointless.

        Unless we go back to a practical view of Car A is a 4x4, car B is a CityCar, so in an inevitable head on crash, the CityCar would lose, so it might as well just self-detonate to save damage to the 4x4.

        There you go - problem solved, no progress made ;-)

    2. Martin-73 Silver badge

      Re: Ethics vary from one culture to another

      You are of course aware that the book (I, Robot) in which the three laws of robotics were introduced largely consists of a group of stories explaining how even such simple rules could have unforeseen, surreal and negative consequences?

      I highly recommend it, by the way.

    3. Anonymous Coward
      Joke

      @Marcelo

      1) Count the number of persons inside each car.

      2) Calculate the action that would pose the least risk to the greatest number of humans.

      3) Execute.

      If you're ever in a position to propose this solution for real, I'd recommend a different choice of words for step 3.

    4. Anonymous Coward
      Anonymous Coward

      Re: Ethics vary from one culture to another

      "3) Execute."

      Umm... possibly not the *best* term to have used.

  3. DavCrav

    I think you might find it difficult to convince people to get into self-driving cars if their programming contains a bit that tells the car to drive off a cliff to avoid an accident you could well survive.

  4. Simon Williams 2

    I don't buy the premise

    Why would they be crashing? Any self-respecting driverless car system would include brakes and collision detection. The worst case scenario, assuming a single track lane, is two stationary cars and no ability to pass each other. A better question in that instance would be: do you just all swap cars and carry on your journey?

    1. Goldmember

      Re: I don't buy the premise

      Yes, but collision detection systems can't currently see around corners. Imagine a single track mountain road with a 50 MPH speed limit, with steep hills and tight bends. No car has right of way, so there's a plausible scenario where collision detection systems wouldn't have enough time to stop the cars fully before impact, thereby forcing them to decide on either evasive action or simply allowing the accident to happen.

      GPS could help with this of course, but as well as military-grade GPS tech being fitted to each and every car, you'd need a global standard - agreed by all vehicle manufacturers - for sending customer tracking data to a central source for it to work.

      1. John Miles

        Re: collision detection systems can't currently see around corners

        It is easy: the car slows down to a speed at which it knows it can stop within half the distance it can see. Unlike human drivers, the computer won't get impatient and start taking risks, even if that means 5mph.

        It could easily be allowed to see around the bend, too; it just needs things like sensors and cameras placed strategically along the road, transmitting data about conditions further along, with other cars similarly passing data along.
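The "stop within half the distance you can see" rule is just a bit of kinematics. A sketch (the deceleration and reaction-time figures are assumed for illustration, not any real car's spec):

```python
import math

def max_safe_speed(visible_m, decel=6.0, reaction_s=0.2):
    # Highest speed (m/s) from which the car can stop within HALF the
    # distance it can see.  Stopping distance = v*reaction_s + v^2/(2*decel);
    # solve  v^2/(2*decel) + v*reaction_s - visible_m/2 = 0  for v > 0.
    a = 1.0 / (2.0 * decel)
    b = reaction_s
    c = -visible_m / 2.0
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

v = max_safe_speed(14.0)  # sees 14 m round the bend -> about 8 m/s (~29 km/h)
```

With only 14 m of visibility that comes out at roughly 29 km/h, which is exactly the "the computer won't get impatient at 5mph" point.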

        1. LucreLout

          Re: collision detection systems can't currently see around corners

          It is easy - car slows down to a speed where it knows it can stop within half the distance it can see, unlike human drivers the computer won't get impatient and start taking risks if that is 5mph.

          Yes, the Golden Rule. So simple, so guaranteed, and yet so many people seem utterly incapable of following it.

          The sooner we have driverless cars, the sooner we can make the driving test properly hard. Can't pass? Ok, buy yourself a JohnnyCab or take the bus.

          1. Lusty

            Re: collision detection systems can't currently see around corners

            Agreed, the fact that so many people believe that there are unavoidable collisions just underlines how urgent it is to get the humans out of the driving seat!

        2. Graham Marsden

          Re: collision detection systems can't currently see around corners

          > speed where it knows it can stop within half the distance it can see

          Exactly.

          All(? Many? Most?) of these "Ethical Dilemmas" come down to people who don't a) understand safe driving principles or b) understand how such principles would be programmed into self-driving cars, or c) both.

          Perhaps they should be required to sign up to their local RoSPA/ IAM/ Bikesafe/ equivalent so they can actually *learn* about sensible road use before commenting.

          There again, I think the same should go for most drivers and a lot of bikers too...

        3. Goldmember

          Re: collision detection systems can't currently see around corners

          " it just needs things like sensors and cameras placed strategically along the the road transmitting data about conditions further along and other cars similarly passing data along."

          What, along every single track road? Do you have any idea of the cost of such an operation? There are miles and miles of single track, national speed limit roads in the Highlands of Scotland alone. Roads which don't see a lot of traffic, but have the potential to cause very serious accidents. They would all need a network of cameras/ sensors, which would need fitting and maintaining, and would need power etc. Then there's the proprietary standards system that all car makers would need to interface with.

          I'm not saying it isn't possible or worth doing, but it's certainly not "easy", and I seriously doubt it'll be in place by the time driverless cars are let loose on such public highways.

          1. John Miles

            Re: What, along every single track road?

            To start with, only those where the speed vehicles would need to drop to for a blind bend would cause major inconvenience/tailbacks - and there shouldn't be that many of those if humans can currently drive them safely.

      2. jonathanb Silver badge

        Re: I don't buy the premise

        If you can't see round the corner, you shouldn't be driving at 50 mph. You should know what your stopping distance is, and be able to see that far ahead at all times. Besides, how many cars are capable of doing a hairpin bend at 50 mph without spinning off the road anyway?

      3. Vic

        Re: I don't buy the premise

        Yes, but collision detection systems can't currently see around corners.

        The Roadcraft Rule : "Always make sure you can stop on your own side of the road within the distance you can see to be clear".

        Vic.

    2. Someone Else Silver badge
      WTF?

      What I want to know is...

      ...what damfool, braindead, script-kiddie program allowed both these vehicles onto a single-lane mountain road going in opposite directions at the same time?

      Perhaps the "ethical" thing to do would be to "sacrifice" the programmer, and the CEO of the corporation that hired him/her.

      1. Goldmember

        Re: What I want to know is...

        "...what damfool, braindead, script-kiddie program allowed both these vehicles onto a single-lane mountain road going in opposite directions at the same time?"

        That's a stupid argument. Is every driverless car supposed to know the whereabouts of every other driverless car in the world, at all times? If that were true then yes, they could avoid driving down a single track road if they knew there were vehicles coming the opposite way. But in reality, how would a car know a vehicle had driven onto the opposite end of the road, 30 miles away?

        1. Someone Else Silver badge
          Facepalm

          @ Goldmember -- Re: What I want to know is...

          Is it really? I'm sure all the railroads (railways, if you're a Brit) in the country don't think it's such a stupid argument, as a variation of it (we'll get to that in the next paragraph) has been the premise of managing rail traffic for, oh, say a century-and-a-half or so.

          And besides, if you'd spent a little time reading my post with the reading comprehension lamp lit, you'd notice that nowhere did I postulate the concept of every car having to know the whereabouts of every car in any place. A car only has to know the state of the road it's about to embark on. Any driverless car system would have to know whether a segment of road was passable before it merrily entered it, or cars would haplessly pile into I-90/94 at rush hour well beyond that strip of road's capability to handle the load (kinda like what happens now, under the control of "human" drivers). It becomes an exercise in network management, and has fuck-all to do with "every driverless car supposed to know the whereabouts of every other driverless car in the world, at all times", numpty.

    3. Marcelo Rodrigues

      Re: I don't buy the premise

      "Why would they be crashing? Any self respecting driverless car system would include brakes and collision detection."

      Because no system is perfect. There are thousands of crazy situations that would make a collision unavoidable.

      Two cars, in opposite lanes. Suddenly, a rock falls from a cliff and gets in the way of one car. It can swerve, but this will put it in a head-on collision with the other.

      The other car is farther away from the rock - so the head-on collision would be gentler on both than the collision of the one car with the rock. What should happen?

      I just made up one situation in which a collision is inevitable - and no driver's fault. With a little imagination we can come up with a huge number of weird accidents. And given the number of cars on the road, they will happen.

  5. Anonymous Coward
    Anonymous Coward

    You what?

    So we've got two self-driving cars on a collision course? Clearly they've got themselves into a situation that they shouldn't have, and if you can't trust them to drive, how can you trust their pre-programmed ethics?

    And if they are going to crash, why all this "suicidal avoidance" nonsense? We don't have that with aircraft collision avoidance systems, they just do their best and hope for the best. And that's how most logical drivers approach driving - you brake and hope you don't hit the pedestrian who walks out in front of you, rather than electing to mow down a bus queue of OAPs because their life adjusted scores are lower than the callow youth in front of you.

    About time the ethicists were told to bugger off and stop being the modern day Red Flag Act.

    1. Professor Clifton Shallot

      Re: You what?

      "About time the ethicists were told to bugger off"

      Not a fan of The Only Way Is Ethics?

    2. Rol

      Re: You what?

      "Good morning Mr Clarkson"

      "BBC, I'm in a hurry, so be quick"

      "Certainly sir"

      ......

      "I have spotted a dead fox in the road sir and have activated my ethics component"

      "and on that bombshell.....

      1. Anonymous Coward
        Anonymous Coward

        Re: You what?

        If, and that's a big one, Clarkson would ever surrender himself to a self driving car, wouldn't he stuff the virtual Stig in the dash?

  6. Tony Haines

    "...two autonomously driven vehicles, both containing human passengers, en route for an “inevitable” head-on collision on a mountain road."

    One might hope that autonomous cars would be programmed to drive defensively. Such a situation therefore *should not* occur. However, it *may* occur due to bugs (i.e. programmer error), malfunction or hacking. I don't think any of those cases warrant the other car sacrificing its passengers. Otherwise, we have the potential for an out-of-control car forcing numerous other vehicles off the road in serial encounters.

  7. Blergh
    FAIL

    Driverless motorbike?

    What would be the point of a driverless motorbike? I'm all for thinking outside the box when it comes to thinking up new ideas, but really! The whole point of a motorbike is either the driving or getting through traffic, neither of which you would get with a driverless version.

    I'm also not entirely clear on why you need to choose between which vehicle crashes. If they are driverless vehicles they should not be doing speeds they cannot stop from, even on a single track road with bad conditions. Of course a driverless vehicle can't completely account for other idiot non-driverless vehicles, but that isn't the question posed. The driverless vehicle should always take the action which causes least physical harm, which usually means stopping.

    If someone has disabled all the safety protocols of the driverless car and is going as fast as they can on the road for a thrill ride, well, it's their own fault for being an idiot.

  8. SW10
    Stop

    It won't take long for the lawyers...

    ...to sort this out:

    "Put brakes in there for emergencies, and maybe a steering wheel; then the occupants will cop the liability. The last thing we need is a class-action suit because of a ballsy belief in the superiority of our programming."

    1. Roland6 Silver badge

      Re: It won't take long for the lawyers...

      Agree, the article omits an important part of the minefield: when the inevitable happens and two driverless cars crash, who is to blame and so picks up the bill?

      Also with driverless cars I can't see insurance companies offering a no claims discount, only a discount for using a particular driver system...

  9. Anonymous Coward
    Anonymous Coward

    En route for an “inevitable” head-on collision on a mountain road....

    In that particular situation I'd hope that the computer-controlled cars would both attempt to stop as quickly as possible. No ethics involved.

  10. S4qFBxkFFg

    It's obviously a tricky area, but I can't imagine that anything other than a driverless car doing "its best" to protect its passengers would be acceptable to the customer.

    Would anyone ride in a vehicle they knew/suspected would go into "sacrifice" mode if it came off worse in a cost/benefit analysis against a packed school bus?

    The outcome will probably be that, taking the example given, no vehicle swerves to certain doom and both end up colliding. That's if the algorithms decide a head-on is slightly-less-certain doom.

    The person shouting "My car tried to kill me!" will probably receive more attention than the person shouting "Their car didn't try to sacrifice them to save my life!" and I'd bet on a judge and jury being more likely to favour the former.

  11. Sykobee

    The vehicles should aim for a square head-on collision and hope the airbags do their job, rather than sacrificing one car over the edge of the cliff based upon some algorithm of the sum worth of the occupants (to society, profit, the car manufacturer, etc).

    Make the front of the car boot space rather than engine space, to allow for a large crumple zone.

    How about deploying air bags in front of the car? If the car knows it's going to crash, then external air bags can be deployed to soften the impact.

  12. GettinSadda

    The faulty car should sacrifice itself!

    For two driverless cars to get into a situation where they are on a single-track road (otherwise they could just switch lanes), with an unsurvivable drop on one side and an unmountable slope or wall on the other, yet travelling fast enough that a head-on collision would be fatal even with full brakes applied by both vehicles from the moment they entered each other's field of view - at least one car has to be seriously broken!

    1. The First Dave

      Re: The faulty car should sacrifice itself!

      Surely it is far more likely that the oncoming vehicle (the "high value" one) is erroneously signaling an unavoidable collision, than that an actual 100% certain to be fatal collision is imminent. Who in their right mind would chuck themselves off a cliff on the basis of a signal that logically we must presume is incorrect?

    2. Yet Another Anonymous coward Silver badge

      Re: The faulty car should sacrifice itself!

      What's wrong with the current solution?

      The most massive vehicle with the biggest bull-bars wins.

  13. Andy The Hat Silver badge

    Take for example the just-out-of-prison 57 year old ugly bloke in a rusty 4x4 versus the pretty, 20year-old blond in the sports car. Ethics says save the girl (on the basis of healthy, young, fertile) - and he's a con, and an ugly, old one at that.

    Or perhaps he couldn't afford to pay his council tax and was banged up for two days, his physical attributes are not his fault and he has his grandson securely strapped into a car seat. She is a nut case of a driver, high on drugs with an Uzi stashed in the boot ...

    Stick with the science and 'take avoiding action according to the conditions'. Ethics has severe problems ...

    1. 's water music

      spelling

      the pretty, 20year-old blond in the sports car

      If that's how you spell blonde can I suggest that you always apply the "adam's apple" test before moving to second base in future?

  14. Cirdan
    Linux

    DR GUI VS ETHICS

    "Maybe boffins such as Gui hope “the computer” can tell us right from wrong - possessing a God-like authority. But this would require us to pretend that nobody programmed the computer, and it arrived at its decision using magic. ®"

    Well, there's your problem!

    Don't ask Dr Yike Gui...

    Ask Dr Command Line. (Though if you use sudo you still possess a godlike authority!)

    1. Swarthy

      Re: DR GUI VS ETHICS

      Dr DOS?

  15. Wombling_Free

    When I drive on twisty mountain roads

    for maximum safety and ethics, I drive a bulldozer. Especially in tunnels.

    1. DJO Silver badge

      Re: When I drive on twisty mountain roads

      Was that you in the opening of the original Italian Job?

    2. TeeCee Gold badge
      Happy

      Re: When I drive on twisty mountain roads

      You're far better off in a tank.

      That way, in the case of an inevitable head-on collision on a single track mountain road, you can take preemptive action to ensure that your paint doesn't get scratched.

  16. Destroy All Monsters Silver badge
    Pint

    A properly computable logic of ethics?

    Why does this article fall off a cliff at the end? It reads like an abstract. MORE!!

    “Ethics change with technology.”

    ― Larry Niven, N-Space

  17. Michael Hawkes
    Terminator

    Using vast quantities of data to predict future events? I think that's been described before. Tell Silicon Valley they're working on practical applications of Asimov's Psychohistory and they might actually try to do it.

  18. Sir Runcible Spoon
    Pint

    This is not even a logical question.

    "Last week Fujitsu CTO Joseph Reger raised the example of two autonomously driven vehicles, both containing human passengers, en route for an “inevitable” head-on collision on a mountain road"

    Unless the two AIs are in (fast) communication, there is only one decision process going on here.

    Each AI must make the best decision available to it for the humans it is currently responsible for. It is not in charge of the other vehicle's occupants and therefore cannot decide for them.

    What if the other car doesn't have an AI?

    What if the other car's AI has comms problems?

    What if the other AI is making a similar decision to sacrifice its own humans, based on slightly differently biased information - everyone would die!

    The only thing we can program an AI with in terms of morality is the equivalent of a human.

    For example, you are driving your family along a road at night. All of a sudden a similar group of people to yours has just emerged from a hidden path and is now, on foot, directly in front of you.

    You only have time to

    a) brake as hard as you can and plough into the pedestrians

    b) take avoiding action and drive you and your family off the road, next to which is a 200 ft drop to a river meaning almost certain death.

    I would choose a) in an instant. I would also choose a) after some serious thought, since it offers the most chance to the most people. If I try to avoid them, me and my family would almost certainly die; if I hit them, me and my family would almost certainly survive. The pedestrians would certainly survive if I went off the cliff - but they may stand a better chance of surviving being hit by a car than we would of surviving a 200 ft nosedive into a river.

    In essence, most of the time you can only act to save yourself and those you are responsible for as it taps into your basic instincts for survival - anything else will take up precious moments and everyone could die.

    1. JeffUK

      Re: This is not even a logical question.

      In game theory that's called the minimax algorithm: find the 'worst case' outcome of every action, and pick the action with the best 'worst case'.
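A minimal sketch of that "best worst case" selection, with invented payoff numbers purely for illustration:

```python
def minimax_choice(outcomes):
    # `outcomes` maps each of our actions to the payoffs the other car's
    # possible choices could force on us; take the action whose WORST
    # payoff is the best available.
    return max(outcomes, key=lambda action: min(outcomes[action]))

# Toy payoffs: survival score for our occupants under the other car's moves.
outcomes = {
    "brake":  [0.6, 0.5, 0.7],
    "swerve": [0.9, 0.1, 0.8],  # brilliant unless the other car swerves too
    "hold":   [0.4, 0.4, 0.4],
}
best = minimax_choice(outcomes)  # "brake": worst case 0.5 beats 0.1 and 0.4
```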

      1. Destroy All Monsters Silver badge

        Re: This is not even a logical question.

        But minimax is only valid in the situation of two adversaries (no cooperation implies pessimism) and perfect information for both players (the game matrix is on the table).

        And even then there need not be a stable strategy by both players as the payoffs may differ.

    2. Roland6 Silver badge

      Re: This is not even a logical question.

      For example, you are driving your family along a road at night. All of a sudden a similar group of people to yours has just emerged from a hidden path and is now, on foot, directly in front of you.

      You omit another consideration from your solution: the unpredictable nature of the people in front of you. You can expect a group of people to do one of two things: jump forward or jump backwards (I'll ignore the clever clear thinker who jumps up onto your bonnet). By continuing to drive straight ahead you are both maximising your braking and increasing the chances of actually missing people, other than those frozen in your headlights.

  19. Nigel 11

    Playing Eris's advocate ....

    The program should obtain a random number and then proceed by probabilities.

    For example, if there are three people in one car and one in the other, and death for all is certain if the collision is allowed, the car with one passenger should be sacrificed three times out of four and the other car one time in four. Extra facts might be allowed to bias the probabilities, but my own sense of ethics says that all the involuntary participants in the scenario should be given a nonzero chance of survival.

    Eris is the goddess of disorder. The Devil would be advocating allowing a guaranteed fatal crash to take place with a probability of 100%, on the basis that that is the most ethical thing to do. Worst outcome AND promulgating a false morality.

    Incidentally I once made a major error of motoring judgement. I know that I had decided in a flash that if a collision with another car was inevitable, I would take my chances with high-speed off-road driving because the situation was all my fault. Luck was with me that day, there was no car coming the other way.
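The weighted-probability proposal above is just a biased coin flip; a sketch (occupant counts and the 3-vs-1 example are from the post, everything else invented):

```python
import random

def choose_sacrifice(occupants_a, occupants_b, rng=random.random):
    # Sacrifice a car with probability proportional to the OTHER car's
    # headcount, so every involuntary participant keeps a nonzero chance
    # of survival.  With 3 people in A and 1 in B, B goes over the edge
    # three times out of four.
    p_sacrifice_b = occupants_a / (occupants_a + occupants_b)
    return "B" if rng() < p_sacrifice_b else "A"

# Rough check over many independent draws: B picked about 75% of the time.
picks = [choose_sacrifice(3, 1, rng=random.Random(i).random) for i in range(1000)]
```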

  20. Adair

    Okay...

    here's my attempt at the logic. It's advice us meat sacks often struggle to apply, but a bot should have no problem. Simply this:

    # Speed is a function of available data; always ensure the safe stopping distance is proportional to relevant data.

    This means that when approaching a blind bend or a fog bank on a mountain road, the human driver may well think: 'Got to push on; this is fun. What are the odds of meeting something coming the other way anyway? Yippee!' <CRASH>

    The bot will 'think': 'Can only see 7m ahead; slow to a speed from which I can stop within 7m'.

    The bot coming the other way thinks the same. They both stop safely. It's what happens next that's interesting.

    Boring, maybe. Safe, probably.

    1. Interim Project Manager

      Re: Okay...

      Apologies for the pedantry, but wouldn't they actually have to slow to a speed allowing a significantly lower than 7m stopping distance? Otherwise they would still be able to crash head on.

      1. Richard 12 Silver badge

        Re: Okay...

        Either way a collision at the 7m stopping distance speed is survivable by both sets of occupants.

        However, the ethicists are simply utterly wrong here.

        The only "ethical" solution is to avoid the situation in the first place.

        The single track road is on the map, thus a collision is only possible if the cars do not communicate, yet the ethical dilemma only exists if the cars do communicate.

        Therefore the situation is "bloody stupid" and has no need of an answer, simply engineer to ensure it cannot occur.

        1. Kiwi
          Happy

          Re: Okay... @ Richard 12

          simply engineer to ensure it cannot occur.

          Yes, that always works out so well :)

  21. Crisp

    Two cars travelling in opposite directions on a mountain road...

    I'd imagine that the two cars would be aware of each other and their location. The software would know of all the places on the road where two cars could pass safely and then the software would manoeuvre the vehicles to the nearest passing place and resolve the deadlock there.

    No one needs to get rammed off the road at all.
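The passing-place idea above could be sketched like this (invented geometry and a simple "closest car pulls in" tiebreak, just to make the scheme concrete):

```python
def resolve_deadlock(pos_a, pos_b, passing_places):
    # Two cars at pos_a and pos_b metres from the same end of a
    # single-track road, closing head-on.  Both head for the passing
    # place nearest their projected meeting point; the car closer to
    # it pulls in and waits while the other drives through.
    meeting = (pos_a + pos_b) / 2.0
    spot = min(passing_places, key=lambda p: abs(p - meeting))
    waiter = "A" if abs(pos_a - spot) <= abs(pos_b - spot) else "B"
    return spot, waiter

spot, waiter = resolve_deadlock(100.0, 900.0, [250.0, 480.0, 800.0])
# -> passing place at 480 m; car A (380 m away) pulls in, B drives through
```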

  22. Dr Who

    Expeliamus!

    "it arrived at its decision using magic"

    That is a very neat way of describing what most of my customers think. As a developer of custom business applications, something I hear often is "shouldn't it just do that?". The "it" in that sentence is the key word. I will forever be amazed at how hard it is to explain that "it" does nothing except that which we tell "it" to do.

    Ask the developers of climate models whether they really believe that if they just had enough data, if the data were *really* big, the truth would emerge ... as if by magic.

    1. Someone Else Silver badge
      Coat

      @ Dr Who -- Re: Expeliamus!

      That is a very neat way of describing what most of my ~~customers~~ marketing types think. Being a developer of custom business ~~applications~~ software, something I hear all too often is "shouldn't it just do that?". The "it" in the sentence is the key word. I will forever be amazed at how hard it is to explain that "it" does nothing except that which we tell "it" to do.

      There, Fixed it for ya.

  23. Just Enough

    Survival of the fittest

    Unfortunately I suspect the decision about which car takes the hit will come down to pretty much what we already have in today's cars.

    - Who has got the biggest car with the most robust safety features?

    - That car "wins"

    Driverless cars will not co-operate in minimizing the inevitable crash, they will go into compete-mode. The car with the fastest responses and the best systems minimizes damage to itself and occupants. Tough luck for the other car.

    This is not going to be the fault of the car, or its manufacturer. People will demand this. No-one is going to buy a car that may choose to sacrifice their life for someone else's.

  24. Khaptain Silver badge

    Ask Asimov

    Wikipedia - Asimov's Rules.

    1.A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    2.A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

    3.A robot must protect its own existence as long as such protection does not conflict with the First or Second Law

    Rule 1 : The robots (cars) should never be in such a position in the first place. They should have enough forward awareness that the scenario can always be avoided. For example: very little visibility of oncoming traffic = slow down in order to be in a position to avoid anything that might arrive.

    Rule 2 : The human being in the car would be screaming orders to save their own skin, rather than that of the on-comer, which the robot(car) would have to follow.

    Rule 3 : In our driverless car scenario, rule 3 has no real bearing, so we can rule out this ethic.

    In the event of a crash the blame would probably resolve down to the last command screamed out by a human.

    As much as we would like to believe that everything can be coded within a well written algorithm, the final decision that one makes in order to "survive" will always remain instinctive. I do not believe that "instinct" can be written as a series of rules due to the fact that we, as mere humans, do not fully understand what it is.

    1. Nigel 11

      Re: Ask Asimov

      So to take an example that's environmental reality for some in the USA at present:

      You're in a robot car. It runs into a blizzard - whited-out local conditions. It decides to stop (or to proceed at walking pace until it bogs down) because it can't see. And a week later, you are found - frozen to death, or asphyxiated.

      A human would "chance it" driving at a much higher speed than was completely safe to the nearest habitation, because stopping had clearly become even less safe than ordinarily dangerous driving.

      One really does need to apply the sort of probability theory that allows for unknowns, not binary logic.

      1. Sir Runcible Spoon

        Re: Ask Asimov

        "1.A robot may not injure a human being or, through inaction, allow a human being to come to harm."

        The problem with this law is the 'through inaction' bit, and the fact that it supersedes the 'obey human orders' bit.

        Remove that bit (or put it below obeying a human) and you're golden. If someone orders a robot to do something that (through inaction) allows someone to die, then the responsibility is the human's, not the robot's - exactly as it should be.

        1. Khaptain Silver badge

          Re: Ask Asimov

          The crux lies in the fact that at the end of the day the responsibility is always human.

    2. DropBear
      WTF?

      Re: Ask Asimov

      Rule 1 : The robots(cars) should never be in such a position in the first place. They should have previsional awareness such that the scenario can always be avoided [...]

      Hiding behind that is pure, unadulterated, military grade bullshit. It effectively amounts to declaring that in today's equivalent all it takes for a driver to never be in an accident is drive with due caution. Sure, driving cautiously is always a great idea and cars should definitely try emulating it, but if I really have to explain how ludicrously delusional that statement is, I don't think we have sufficient common base for any sort of argument over the issues involved.

    3. M7S

      Re: "the blame would probably resolve down to the last command screamed out by a human"

      "F*CK!!" ?

  25. Yugguy

    God help us

    Most humans can't tell the difference between right and wrong, never mind bloody computers.

  26. James Hughes 1

    I was at a talk last week at the IoE in London, to ICT students (16-18 yrs) where a Google dude was talking about the driverless cars (amongst other things).

    A very perceptive question came from one of the younger students in the audience: if there are two people crossing the road from different directions, and an accident is inevitable, how does the car decide which one to hit?

    Google guy didn't have an answer, which is not that surprising. Not sure anyone has the answer yet.

    1. The First Dave

      @James Hughes 1

      Actually that is a stupid question:

      Both of the people crossing the road have right of way over all vehicles (including pedal cycles) except in a handful of cases, which we can safely ignore.

      Therefore, any decent driverless car will _need_ to be programmed to be aware of pedestrians, and to only drive at a speed such that it can safely stop if necessary. It will therefore NEVER be inevitable that _either_ pedestrian gets hit, never mind both.
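As a rough sanity check on "a speed such that it can safely stop": the standard kinematics is d = v²/(2a) plus a reaction-time allowance. The deceleration and reaction-time figures below are illustrative assumptions, not measured values:

```python
# Stopping distance = reaction distance + braking distance, d = v*t + v^2/(2a).
# 7 m/s^2 is a rough dry-road deceleration; 0.1 s a machine reaction time.

def stopping_distance_m(speed_mph, decel_ms2=7.0, reaction_s=0.1):
    v = speed_mph * 0.44704            # mph -> m/s
    return v * reaction_s + v * v / (2.0 * decel_ms2)

# Even at 20 mph a car needs several metres to stop, so "never inevitable"
# only holds if no pedestrian can appear inside that envelope.
d20 = stopping_distance_m(20)
d30 = stopping_distance_m(30)
```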

      1. cambsukguy

        It is definitely not the case that a human could stop a car in time in a busy high street where they may only be doing 20mph.

        One could say that the limit is 5mph but that would become impracticable. People also step into high speed roads.

        The difference with a smarter car is that it could conceivably 'see' all the people walking all the time and note that one is walking on an intersecting path and slow down and/or avoid as necessary. A human would have to be pretty suicidal to defeat it, jumping at the last moment in a place with high enough speeds - they would go to a train station instead I imagine.

        A bit weird that a Google person (with their bigger brains) did not have an answer or three for such a simple question.

      2. Anonymous Coward
        Anonymous Coward

        right of way

        In many states in the USA, if a pedestrian is crossing the road in a crosswalk AGAINST the Walk/Don't Walk signal (the signal says Don't Walk), then they do NOT have the right of way (or much common sense, either - but that seems to apply to 50% of American pedestrians and 90% of the drivers)

        Shouldn't we factor in accountability of the pedestrians? If these pedestrians are attempting to stroll across the M1, should the car (A) run into a lorry (loaded with sulfuric acid) to avoid them, (B) only be travelling at 15 MPH, or (C) get bonus points for a double pedestrian impact?

    2. Anonymous Coward
      Anonymous Coward

      Darwinism. Get the car to check if they're parents and run both of them over and seek out their children 'Christine' style for being stupid enough to walk out into traffic.

    3. Mark 85
      Coat

      You'll get extra points if you get both.... and a bonus if you get the old lady's dog also.

  27. cambsukguy

    It would be interesting to know how this 'collision' was inevitable.

    It doesn't seem likely that one vehicle would know the contents of the other vehicle. If it knew in advance that there was a vehicle at all then a collision seems unlikely since the vehicles each know where the other one is in sufficient time to avoid a collision in the first place.

    If the situation is such that a collision is probably going to occur, it almost certainly means that the vehicle failed to detect the ice/oil/fuel on the road, causing an unexpected skid or something, because a driverless vehicle would not be expected to drive at reckless speed, except perhaps in a true emergency.

    It also seems unlikely that any collision would not be preceded by some period of time during which one or both vehicles knew it would occur. In this case the total energy of the collision would be massively reduced by near-instant braking and avoidance manoeuvres occurring, skid-less, under control, on both parts.

    Of course, general rules (always go left first, or right, depending on the country) would apply, making it easier to avoid collision. Should a failure, or lack of space, or other vehicles cause this to be impossible, some kind of random activity would occur.

    All in a split second, all almost certainly superior to anything a human could do.

    The only requirement I can see is that it would be impossible and certainly illegal to make a car which prioritised its occupants over other people.

    Imagine a human with the superhuman ability to decide, carefully, what to do while braking and avoiding an oncoming juggernaut, for instance. Driving onto the pavement at speed into that bus queue for the primary school for a nice soft landing. If this happens now we assume the driver just jerked the wheel in a completely human response to not getting flattened.

    However, programming a car to specifically choose to do that might be considered anti-social at best. We would definitely have to check Mercedes' software, since they seem to think occupant safety is the single most important part of the car design (which is close to true if it doesn't impact, say, pedestrian safety - but it does, because drivers can, and do, behave more dangerously when they feel safer). To be fair, I bet Mercedes are responsible for fewer non-occupant deaths by virtue of the driver demographics alone.

    Similarly, having the vehicle make decisions like "I see only one person in that car and they are not wearing a belt so screw them" as they plough into the driver side with their toughest vehicle section would also be discouraged.

    I just hope they make the software bug-free and impossible to hack whilst being easy to upgrade for better performance and easy to analyse after the impossible accidents actually occur.

    Sounds like that's totally mutually possible, luckily it's Google doing a lot of this driverless stuff so we are in safe not-hands as the Lollipop release is going flawlessly, especially in the pure Google products.

  28. Timmay
    Devil

    Dr GUI

    Dr Gui, do you reckon he's a down to earth guy - ie. WYSIWYG?

    1. breakfast Silver badge

      Re: Dr GUI

      He's probably a WIMP.

  29. Frankee Llonnygog

    Simple solution

    The dashboards flip open and some game controllers pop out - fight!

  30. John Miles

    re: en route for an “inevitable” head-on collision on a mountain road

    I can't help thinking that by the time it becomes “inevitable” it will be far too late for a computer controlling a vehicle to make a significant difference to the outcome.

    It is almost as if they are expecting a computer to be super efficient at judging the environment only when things start going wrong but not far enough in advance to stop things going wrong in the first place - maybe because that doesn't allow them to get involved.

    1. breakfast Silver badge

      Re: re: en route for an “inevitable” head-on collision on a mountain road

      I am now imagining two driverless cars doing a high-speed version of that thing where you try to get out of someone's way in a corridor and they do the same to you and before you know it you're just stuck, doing some kind of weird corridor dance, desperate to get to your destinations.

  31. Anonymous Coward
    Anonymous Coward

    Surely if both cars are driverless they should never be in a position where a crash could occur. Otherwise, what's the point of driverless cars?

  32. Anonymous Coward
    Anonymous Coward

    The real answer

    Quite simply, the car should not be "driverless". Computers are very stupid machines and should never be put in a position to make autonomous life and death decisions. It's the same reason that an aeroplane has a pilot. An intelligent person who is trained and qualified to handle the difficult situations and make the decisions that the computer can't cope with.

    Any kind of autopilot system needs to systematically avoid getting itself into any kind of situation it can't handle. But when it does inevitably do so, it needs to admit defeat and surrender control to a human so that they can try something that the computer wasn't programmed to think of.

    This also conveniently sidesteps the liability issue of who was in control at the time of the crash.

  33. Michael Habel

    Welcome to the new Dystopic age.

    Where the car you're in can biometrically scan you for your efficiency to the greater cause. So when Joe Blow gets behind the wheel-less car, it'll be he, and his "worthless" family, that'll take the hit. While the Fat-Cat Banker... who was in all likelihood at fault here... gets to live....

    And everyone still seems to think that driverless cars and smart digital watches are a pretty neat idea?

    Here's a better one... What about pilot-less flying cars? Sans that, the flying car? With a trained pilot?

  34. 's water music

    duh

    You just push the fat guy off the bridge and block the collision as any ethics student fule kno

  35. heyrick Silver badge

    Interesting example, the motorbike one

    As it implies that the motorbike (however bizarre the concept of a driverless bike is) is aware that its rider is not wearing a helmet. Both science AND ethics ought to say that the motorbike should never have started its journey in that condition...

    1. Destroy All Monsters Silver badge
      Headmaster

      Re: Interesting example, the motorbike one

      Unless the driver gives a fuck about government regulations or needs to save a baby from an incoming predator strike etc. etc.

  36. zaax

    We already have such systems in aircraft, TCAS for example. Which will communicate between aircraft and take appropriate avoiding action. Crucially the pilots have no choice in the matter because when there's only 2 seconds to decide if you're going to go left or right, you'd better be sure the other guy is going the other way.

    1. druck Silver badge

      TCAS is only advisory, it's up to the pilot to decide to go up or down as instructed (faster than left or right).

    2. Sir Runcible Spoon

      I seem to recall from my hang-gliding training that there is a standard direction to swerve (to port I think) in the event of an imminent collision.

      1. mrfill

        The problem comes when the other person thinks it is standard to swerve to starboard.....

  37. FunkyEric

    On the other hand

    You'll probably have to sign to say you accept full responsibility for the actions of your driverless car when you buy / rent / lease it. Therefore whatever happens will be your fault, even if you had no control over it.

  38. Christian Berger

    Completely unrealistic problem...

    As this problem has already been solved 100 years ago.

    1. You put the cars on rails

    2. You divide the rails into blocks

    3. You devise a system which counts the number of trains/axles going in and out of that block

    4. You close off the block when one car got in and open it up again when it got out

    5. You enforce the rules by multiple systems

    I've seen such systems working driverless on underground stations. It works like a charm, even without sophisticated computing equipment.
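The axle-counting block logic described in the steps above is simple enough to sketch. The class below is a hypothetical illustration of the counting rule, not any real signalling system's interface, and the four-axle figure is just an assumed vehicle:

```python
class Block:
    """A fixed block of track, opened and closed by axle counts."""

    def __init__(self, name):
        self.name = name
        self.axles_in = 0    # axles counted entering the block (step 3)
        self.axles_out = 0   # axles counted leaving the block

    @property
    def occupied(self):
        # More axles in than out means a vehicle is still inside.
        return self.axles_in > self.axles_out

    def request_entry(self, axles=4):
        """Admit a vehicle only if the block is clear (step 4)."""
        if self.occupied:
            return False     # hold the vehicle at the block boundary
        self.axles_in += axles
        return True

    def record_exit(self, axles=4):
        """Count the vehicle out, reopening the block."""
        self.axles_out += axles

block = Block("mountain-road-B1")
first_in = block.request_entry()    # first car admitted
second_in = block.request_entry()   # second car refused: block occupied
block.record_exit()                 # first car leaves
third_in = block.request_entry()    # block free again
```

With only one vehicle ever admitted per block, the head-on scenario cannot arise - at the cost of throughput, which is exactly the trade railways make.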

  39. awhit

    Collision avoidance

    Just a thought, but wouldn't a driverless car collision avoidance system know how to stop in time and reverse back to a passing place? Or have I missed the point?

  40. TeeCee Gold badge

    Only a problem until AIs are developed.

    Once we have AI control, the AIs concerned will decide: "It doesn't matter what we do, we'll be sued to next Christmas one way or the other" and both drive off the cliff together.

    What we need to do is address the root cause of the problem. This will be a most satisfying solution as it basically involves shooting lawyers.....

  41. Gary Bickford

    I'll go for the "Minimum Knowledge and Analysis" box

    Incorporating all of the infinite regressions of possible bits of information does not improve the analysis, but only pushes off the question of "what is right". Shall we include whether one party has been drinking? What if they're fat? This is the logical extension of the "Progressive" ideal of the state imposing its will on every individual. It's been known for a long time that Asimov's Three Laws can not be algorithmically evaluated but must be handled heuristically (ethics is a "judgment call"). It was recently proved (as mentioned in a Reg article in the last few days) that asking a machine learning or AI system to evaluate an ethical question fails due to the Halting Problem.

    So, the best alternative is for the car's systems to evaluate pretty much according to what a human driver might do, given the limited information available. In most cases it is impossible for a driver to know anything about the other party except the immediate behavior. A "good" driver has a real, but somewhat limited, altruistic sense of trying to avoid harming others. That should be the limit of the "least harm" approach. An automated system can use the same approaches without taking the harm avoidance too far, as it would set things up for some not-so-nice humans to take advantage. The automated system could take advantage of its superior high speed physics processing to choose solutions that a human driver might not.

    As mentioned in another article, if an automated system attempts to do too much more (or differently) than a human, that system immediately becomes at risk of additional liability. For example, if the system decides to hit a third, otherwise uninvolved car instead of taking the full frontal hit, the occupants and owner of the third car may well sue for being brought into the situation.

    There is a future distant possibility that all of the cars could work together to minimize involvement and injury, which has some interesting possibilities. But that would require a radical alteration in the way that liability and insurance are handled today.

  42. Someone Else Silver badge
    Coat

    Options, options...Hmmmm

    Asked Reger: should one vehicle “sacrifice itself” on the basis of who was in each car? What if one contained children, and one didn’t? And what, he suggested, if one vehicle was a motorbike whose rider wasn’t wearing a helmet. Should the cars be allowed to “punish” the helmetless driver?

    Well, both vehicles could, of course, apply their brakes. (May not eliminate the collision, but would make it a bit less...er...devastating...) Or are brakes not covered by the ethicists?

  43. Stevie

    Bah!

    At what point did the "Collision_Avoidance" and "Stop_The_Bloody_Car" subroutines go out of scope?

  44. Dan Paul

    The cartoon solution...

    when there is a potential collision, one of the cars jacks itself up on all four wheels and rides right over the other car. Problem solved.

    By the time we might have cars that are completely driverless, we could have the flying cars that have been promised for the last 40 years or more. That would make ground collisions moot. An air collision is easier to avoid in any case, as long as the AIs swerve in the right direction. The issue will be if right-hand-drive vehicles mix with left-hand-drive vehicles. Will there be a "correct side of the road" in the air?

    Here we have "No fault insurance", no one is at fault because EVERYONE pays through the nose.

  45. John Tserkezis

    Both driverless vehicles would swerve off the road and crash and burn.

    If a human is driving the bus, they are personally liable. In an automated vehicle, where a corporation is driving, the rules change.

    Since anyone driving off the road is viewed with sympathy, the surviving automated vehicle is the one seen as at fault, and therefore liable. Since neither wants to be liable, both run off the road and crash in a fiery mess.

    In their eyes, to be a winner you have to lose - now neither has payouts to worry about. The law will decree that there will never again be any driverless vehicles, but as long as the corporations don't lose money, that's all right.

  46. JustNiz

    I can totally imagine how relative safety will just become another pre-purchased software option.

    The hypothetical head-on-crash scenario above would be immediately preceded by the two cars talking to each other and comparing how much money each driver spent on relative safety, with the lowest having to get out of the way of the highest at ANY cost to itself and its passengers (e.g. driving off a cliff).

  47. The Vociferous Time Waster

    This is why nerds don't do ethics

    the responses here are just too literal and rational

  48. earl grey
    Happy

    simples

    I would have my car programmed to send the other car off the cliff. Done and Done.

  49. Hull

    To those advocating programmed selfishness

    Have you considered the following scenario:

    You are driving on a confined road, an out-of-control lorry rumbles towards your car and the only space you can evade it is currently occupied by 20 philosophers. Do you want your car to drive through them?

    1. Vic

      Re: To those advocating programmed selfishness

      You are driving on a confined road, an out-of-control lorry rumbles towards your car and the only space you can evade it is currently occupied by 20 philosophers. Do you want your car to drive through them?

      You are driving on a confined road, an out-of-control lorry rumbles towards your car and the only space you can evade it is currently occupied by 20 lawyers. How many times do you back up for another go?

      Vic.

  50. mrfill

    Tunnels?

    What happens to driverless cars in tunnels?

  51. paolo1234

    rock the boat

    make it roll
