Self-driving cars will be safe, we're testing them in a massive AI Sim

The British government this week unveiled plans for an ambitious AI simulator to be used to test self-driving cars. It's part of a stated mission to make the UK the world's leading destination for testing autonomous vehicles. The simulator, called OmniCAV, recreates a virtual version of 32km of Oxfordshire roads. "It's a …

  1. Anonymous Coward

    L5

    I'm 60. In 20 years' time or less I want to be in an L5 car. It would be safer for you too.

    My dad gave up driving at 86. Even with today's level of automation I would feel safer with an automated car approaching than one with him at the wheel.

    1. Anonymous Coward

      Re: L5

      Even with today's level of automation I would feel safer with an automated car approaching than one with him at the wheel.

      Just don't try crossing the road pushing a bicycle, in front of one...

      1. Anonymous Coward

        Re: L5

        "Just don't try crossing the road pushing a bicycle, in front of one..."

        Your chances would be better. Much better. With my dad behind the wheel he would have relied on my mum telling him that there was a person pushing a bike in the road... and my mum is partially sighted, and my dad is deaf.

        1. JohnFen Silver badge

          Re: L5

          "Your chances would be better. Much better."

          Indeed. In talking with people about autonomous vehicles, I frequently encounter the attitude that the car must drive perfectly or it's a failure not to be trusted. From a public safety point of view, this makes no sense. It doesn't have to be completely error-free, it just has to be better than people.

          1. Anonymous Coward

            Re: L5

            "Indeed. In talking with people about autonomous vehicles, I frequently encounter the attitude that the car must drive perfectly or it's a failure not to be trusted. From a public safety point of view, this makes no sense. It doesn't have to be completely error-free, it just has to be better than people"

            Well yes, but then people seem to think that human drivers are generally poor when actually they are remarkably good. A very low percentage of journeys end in an accident and a small proportion of these involve a significant injury. This is even more the case if you eliminate young male drivers from the statistics.

            There is a vast chasm between what is required for a fully autonomous vehicle and the state of current technology unless the environment is limited and highly controlled. At the moment humans are just much much better and will be for a long time to come.

            1. JohnFen Silver badge

              Re: L5

              "There is a vast chasm between what is required for a fully autonomous vehcile and the state of current technology"

              Oh, this is certainly true. We're a long way away from an autonomous vehicle that performs as well as, let alone better than, people do.

          2. Davidcrockett

            Re: L5

            True in theory but probably not in practice. People who drive dangerously and kill someone are sent to prison. If a million Google cars are on the road, drive dangerously infrequently and kill 500 people a year, will we all just shrug our shoulders and say that's necessary for their development, and hey, at least they're better than human drivers? Nope, there'll be a clamour to ban the cars and jail Google executives.

          3. Brian 18

            Re: L5

            "From a public safety point of view, this makes no sense. It doesn't have to be completely error-free, it just has to be better than people."

            From a legal perspective, automated cars have to be significantly better than people. Otherwise the liability costs will kill these cars faster than anything else.

        2. disgustedoftunbridgewells Silver badge

          Re: L5

          I've been a passenger with somebody like that, although the driver was only about 30.

          After watching him nearly plough straight through an old woman on a zebra crossing at 40mph, I said "never again".

      2. Justthefacts

        L5

        No. The Uber accident was not caused by “AI”, it was caused by Uber being an unscrupulous taxi operator, and spannering its safety system. Literally, that’s the beginning and end of it, any non-AI taxi operator acting similarly could have done this.

        1) They disabled the *standard* collision avoidance mechanism on that vehicle. Literally cut the wires.

        2) They explicitly wrote their algorithm so that if it could not classify an object it would do *nothing* and leave it to the collision avoidance. The one that they had cut the wires on. And not warn the safety driver. (See the sketch below.)

        3) The sensors *did* observe the object, at a distance of over 100m, which is well beyond what a human would have managed, and with absolutely plenty of time to do something. Even alerting the driver would have been OK, they would have had about 8 seconds.

        4) Instead, they decided precisely that it was *either* a pedestrian or a bike, but couldn’t work out which. So it actively took the decision to do nothing. That is just manslaughter, pure and simple.

        5) Uber actively put in place a process to ensure that their driver *couldn’t* look at the road. The safety driver had to fill out paperwork; Uber knew this meant that they could not watch the road, and initially had a copilot, but then cut out the copilot to save costs. That is also just corporate manslaughter.
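
        The designed behaviour in points 2) to 4) reduces to a few lines. A minimal Python sketch of that policy (every name here is invented for illustration; this is a reconstruction of what the NTSB report describes, not Uber's actual code):

```python
# Hypothetical sketch of the decision policy described in 2)-4).
# All names are invented; this is not Uber's code, just the
# behaviour the NTSB report attributes to the system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    classification: Optional[str]  # "pedestrian", "bicycle", or None
    time_to_collision: float       # seconds

def plan_response(d: Detection) -> str:
    if d.classification is None:
        # Seen but unclassified: do nothing, defer to the low-level
        # collision avoidance system (the one with the wires cut),
        # and don't warn the safety driver.
        return "continue"
    if d.time_to_collision < 1.3:
        # Emergency braking is not enabled under computer control,
        # and the system is not designed to alert the operator.
        return "continue"
    return "continue"

# A reasonable policy, by contrast: "don't know = brake".
def reasonable_response(d: Detection) -> str:
    if d.classification is None or d.time_to_collision < 3.0:
        return "emergency_brake"
    return "continue"

print(plan_response(Detection(None, 6.0)))        # continue
print(reasonable_response(Detection(None, 6.0)))  # emergency_brake
```

        Every branch of the described policy returns "continue". That is the point.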

        This is not a story about how bad L5 AI is, but how evil and careless Uber specifically were.

        But it is also useful to note: the AI sim in the article would have to be told the truth about whether the car itself had special safety systems. If the sim thinks they are there, and then the corporate bastards cut the wires in reality, we have the same problem. Will the sim simulate the taxi firm running on MOT-fail tyres? Or the wheels having had a knock and the tracking being off? Or the cameras not being cleaned as specified?

        1. werdsmith Silver badge

          Re: L5

          No. The Uber accident was not caused by “AI”, it was caused by Uber being an unscrupulous taxi operator, and spannering its safety system. Literally, that’s the beginning and end of it, any non-AI taxi operator acting similarly could have done this.

          Your description of events leaves me thinking that it could only have been sabotage.

        2. Anonymous Cow Herder

          Re: L5

          No. The Uber accident was not caused by “AI”

          It was bad AI, though the human contribution was not great either.

          Your point about Uber cutting wires is incorrect. The standard collision avoidance system is disabled in computer-controlled mode as it conflicts with the more sophisticated sensors. It is enabled in human control mode.

          The NTSB write in their preliminary report:

          "The report states data obtained from the self-driving system shows the system first registered radar and LIDAR observations of the pedestrian about six seconds before impact, when the vehicle was traveling 43 mph. As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path. At 1.3 seconds before impact, the self-driving system determined that emergency braking was needed to mitigate a collision. According to Uber emergency braking maneuvers are not enabled while the vehicle is under computer control to reduce the potential for erratic vehicle behaviour. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator."

          The cause of the accident seems to be that, despite tracking the object for five seconds, the AI failed to predict that it would still be on a collision course until it was too late.
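
          To put those timings into distances, a back-of-envelope check (the braking deceleration here is an assumed dry-road figure of about 7 m/s², not a number from the NTSB report):

```python
# Back-of-envelope kinematics for the NTSB timings quoted above.
# The braking deceleration is an assumption, not an NTSB figure.
MPH_TO_MS = 0.44704
v = 43 * MPH_TO_MS     # ~19.2 m/s at first detection
decel = 7.0            # assumed dry-road emergency braking, m/s^2

braking_distance = v**2 / (2 * decel)  # distance needed to stop
dist_at_6s = v * 6.0                   # available at first detection
dist_at_1_3s = v * 1.3                 # available at braking decision

print(f"needed to stop:     {braking_distance:5.1f} m")  # ~26.4 m
print(f"available at 6.0 s: {dist_at_6s:5.1f} m")        # ~115.3 m
print(f"available at 1.3 s: {dist_at_1_3s:5.1f} m")      # ~25.0 m
```

          In other words, by the time the system decided braking was needed, the distance remaining was already less than the distance required to stop, before counting any reaction time at all.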

          The human failing is in not implementing emergency maneuvers of any kind. It is not realistic to expect the human operator to intervene in an emergency - there simply isn't enough time to assess the situation and act (https://www.sciencedirect.com/science/article/pii/S1369847814001284). Google has already tested the assumption that the human supervisor will be constantly vigilant and concluded that the assumption is false (http://uk.businessinsider.com/larry-page-google-self-driving-car-autonomous-2016-9?r=US&IR=T).

          When it comes to autonomous vehicles, there is no halfway house. The vehicle has to manage come hell or high water.

          1. Justthefacts

            Re: L5

            You seem to conclude differently, even though we are basing our conclusions on the same NTSB report.....

            “The self-driving software classified.....”

            Let’s be clear: the “AI black-box deep learning” stuff is at the lower software layers.

            Above that, a human coder at Uber wrote lines of code that said “if don’t know, do nothing until have figured out where object will be”. Do you think these things aren’t code-reviewed?

            This wasn’t a “bug”, like a classification failure, or pointer exception. It was *the designed behaviour*.

            That *is manslaughter*. A reasonable approach, and what would be expected of a human driver, is “don’t know = take action that will avoid either possibility”; in this case, emergency braking.

            Uber have explained exactly why they didn’t do that, because they *already knew* the classification failure occurred too often in the field and would cause emergency braking too often. The problem is that Uber knew, and disabled the safe behaviour, because they wanted to test.

            *There would be no reason to disable the emergency brake while computer controlled unless they knew that the computer control often put it in dangerous circumstances. You would do it the other way round: low level emergency brake should disable computer control*

            This is *not* a failure of AI, any more than “Company handbook says that Uber drivers shall prioritise speed over human life” is a failure of paper.

            It was company policy to prioritise testing of the classification algorithm over human life. They wrote that down in black and white in their company handbook, which happens to be written in ‘C’ and handed to a computer for execution.

    2. Smooth Newt Silver badge
      Meh

      Re: L5

      My dad gave up driving at 86. Even with today's level of automation I would feel safer with an automated car approaching than one with him at the wheel.

      The market for autonomous level 5 vehicles isn't octogenarians. For the first decade or two it will be taxi and lorry drivers. The incredible cost of the vehicles and their sensors, and probably the insurance policies, will be offset by chucking paid human drivers onto the scrap heap. An autonomous taxi can "save"* the cost of four shifts of human taxi driver, since it can be used 24 hours a day, 365 days a year.

      * "save" in quotes because society as a whole will be paying for their dole and the social consequences of treating human beings as disposable commodities.

      1. Anonymous Coward

        Re: L5

        An autonomous taxi can "save"* the cost of four shifts of human taxi driver, since it can be used 24 hours a day, 365 days a year.

        Yep. The Robots will make 90% of us unemployed and therefore unable to afford the robo-taxi.

        Oh what a conundrum.

        Perhaps it is time for a new Ned Ludd?

    3. Steve Davies 3 Silver badge

      Re: L5

      Until any level of automation can handle ALL roads in all conditions then, sorry, they are a big fail.

      I was thinking about this last week while negotiating a narrow single-track road on the Isle of Mull.

      The road was very rough and had lots of grass growing out of the middle of it. It was also peppered with all sorts of animal poo in various states of decomposition.

      How would the L5 system handle crossing of a Ford?

      How would it decide which passing place to stop and give way at?

      Could it stop to let vehicles behind pass?

      Driving requires multi-level reasoning. Fuzzy logic, if you like. How does an AI 'expect the unexpected'?

      1. Hooda Thunkett

        Re: L5

        This is the big problem. Until you have Level 5 automation, it's all useless. It's like allowing a student driver to operate the car; you have to be paying close enough attention to take control when the AI doesn't recognize a situation is dangerous. If you're paying that close attention, then why aren't you driving the car?

        1. Davidcrockett

          Re: L5

          I disagree. If you could design an affordable vehicle that could drive itself along easy roads like motorways, sales reps, the freight industry and pretty much anyone who frequently drove long distances would be queueing up to buy one.

          1. Filippo

            Re: L5

            Indeed. If I could get a car that can drive itself unsupervised on a motorway, I'd buy it in a flash, even if it doesn't drive itself in a town centre or rural road. Unfortunately, I suspect that even that degree of automation is pretty far away.

            There are plenty of cars that can drive themselves on a motorway even now, but none of them can be trusted to do so without a human driver ready to take the wheel within a few seconds. That degree of autonomy is useless to me, because I can't safely do anything else with that time anyway, so I might as well be driving.

      2. Justthefacts

        Re: L5

        Why? Most human drivers can’t.....

        For example, I haven’t crossed a ford either, and neither have most drivers. If we are wise, we would probably turn back if we saw one. Likely, that is what an L5 would do, as it would simply categorise the road as blocked.

        I’m comfortable on single track country roads and passing places.....but most London drivers will likely cause an accident if given the passing place problem.

        Unless you have a very wide range of driving experience, you yourself would probably fail badly (possibly fatally) at one of: London right turns, driving on snow tyres in Scandi winters, or Bangalore traffic.

        1. Time Waster

          Re: L5

          Whilst no doubt this is true, any L5 system on sale cannot simply refuse to drive down particular roads or in certain conditions. What if I buy this vehicle and live down such a road? Or jump in a taxi and it starts snowing? Or live in Bangalore?

          1. Ken Hagan Gold badge

            Re: L5

            "What if I buy this vehicle and live down such a road? Or jump in a taxi and it starts snowing? Or live in Bangalore?"

            1) You are a muppet. You'll have to garage the vehicle somewhere else.

            2) Tough. You'll have to get out and walk, or stay put until a different taxi comes along to rescue you. (In practice, this is no different from a break-down in a normal taxi.)

            3) You are a muppet. Not for living in Bangalore, which I'm sure is lovely, but for buying an expensive toy that you can't use. It's about as smart as living in Abu Dhabi and building a ski resort.

            I take the general point that the thing has to be 100% safe, but that doesn't mean it has to be capable of handling anything you throw at it. It just needs to be able to recognise when it is out of its comfort zone and refuse to go any further.

          2. Justthefacts

            Re: L5

            If you live down such a road, then don’t buy it....

            You will be in the 1% for whom this is the wrong car. For the other 99%, it’s great.

            For example, my friend bought a BMW (rear wheel drive) and lives up a hill that is a snow hazard. She is a good driver, by the way, but finds that hill lethal in the snow. She decided she had bought the wrong car, and changed to a 4x4.

            Of *course* L5s can refuse to drive in dangerous circumstances. During the Beast from the East, half my friends stayed home. They *could* have made it, with a lot of frayed nerves, but really, would that have been a good idea?

            Yes, there are people who *must* drive in those conditions. They are in the minority.

        2. Ken Hagan Gold badge

          Re: L5

          "Why? Most human drivers can’t....."

          And every winter there are days when "Police advise drivers not to travel unless they really have to." which is a nice way of saying "Please don't add to the number of emergency call-outs that we have to deal with, you selfish muppet.".

        3. Smooth Newt Silver badge
          Happy

          Re: L5

          I’m comfortable on single track country roads and passing places.....but most London drivers will likely cause an accident if given the passing place problem.

          I don't know if you have visited London recently, but there are plenty of narrow streets with cars parked down both sides so that the space in the middle is far less than two car widths wide.

          But more to the point, human beings are very mentally adaptable. That isn't really true of AI, which can make very circumscribed judgements within tightly defined problem domains and based on previous training, but lacks any proper understanding of anything.

          1. werdsmith Silver badge

            Re: L5

            If you could design an affordable vehicle that could drive itself along easy roads like motorways

            That's my car. It does it very well, but I am required to monitor it, and I find concentrating on what it is doing much more tiring and boring than actually driving, so I switch off the lane system. I still have forward emergency braking enabled, though I've never allowed it to kick in, and the adaptive cruise control is an absolute dream; I would not like to go back to a car without it.

        4. Anonymous Coward

          Re: L5

          I've ridden in cars that were driven through a ford. (Shilton)

          There are things called maps and something called a "road sign" that tell you there's a ford ahead.

          The additional challenge is determining water depth, which is something that autonomous cars will have to handle anyway for many areas that can be flooded in heavy rain.

      3. Allan George Dyer Silver badge
        Headmaster

        Re: L5

        @Steve Davies 3 - "How would the L5 system handle crossing of a Ford?"

        I would hope it would handle it exactly the same way as crossing of a Chrysler.

        Icon - obviously.

      4. Ken 16 Silver badge
        Coat

        crossing of a Ford

        Cortina, Sierra or Transit?

      5. Anonymous Coward

        Re: L5

        Until any level of automation can handle ALL roads in all conditions then, sorry, they are a big fail.

        Well, in the USA, I would pay a bit extra for a vehicle that, once I got onto the interstate (I think you right-pondians call them restricted access roads?), would handle speed, staying in the correct lane, and slowing/stopping for traffic as needed to maintain safety, then beep me to say "your exit is approaching". Then I could disengage auto and take back control. It would make the long USA commutes much safer. I am not asking it to go "off-road", which your example seems very close to, even if you call it a road.

    4. Trigonoceps occipitalis

      Re: L5

      "Even with today's level of automation I would feel safer with an automated car approaching than one with him at the wheel."

      I want to die in my sleep like my Granddad - not screaming in terror like his passengers.

      1. Frumious Bandersnatch Silver badge

        Re: L5

        And I wonder if we'd be so quick to cut down trees if they screamed? I think we might if they screamed ALL. THE. TIME. FOR. NO. GOOD. REASON.

  2. Steve Medway

    Let's have a bit of honesty when it comes to discussing level 5 automation.

    Level 5 doesn't even come close to cutting it in the real world. It's not five or even ten years away; it is literally decades before any car can drive itself around Cairo or the Arc de Triomphe.

    Who really gives two hoots about a leafy Oxfordshire simulator?

    1. Khaptain Silver badge

      "but literally decades before any car can drive itself around Cairo or the arc de triomphe"

      A lot of people can't actually manage to do those things in a safe manner without putting others into danger. At least AVs would not be influenced by emotions, stress or fatigue, thereby creating a safer environment.

      1. Rich 11 Silver badge

        At least AVs would not be influenced by emotions, stress or fatigue, thereby creating a safer environment.

        You're assuming that AI doesn't also bring downsides equivalent to or greater than the more obvious human failings.

        1. Khaptain Silver badge

          "You're assuming that AI doesn't also bring downsides equivalent or greater to than the more obvious human failings."

          The difference being the AI will improve and retain its "intelligence".

          1. Roland6 Silver badge

            >The difference being the AI will improve and retain its "intelligence".

            Well, given we aren't using true AI in cars, it won't improve unless it gets regularly updated, like Win10. The only retained "intelligence", I suggest, will be owner-specific data such as routes used at particular times of day, driving style preferences, fuel/recharging stop preferences etc., which are probably covered by GDPR and are things an owner would like to transfer between vehicles.

      2. Alister Silver badge

        A lot of people can't actually manage to do those things in a safe manner without putting others into danger.

        The overwhelming majority of human drivers manage to drive safely most of the time.

        AV advocates seem to delight in painting human drivers as dangerous and unsafe, compared to their chosen deus in machina, but the evidence so far is that none of the current crop of AVs are as safe as the average human.

        1. Destroy All Monsters Silver badge

          AV advocates seem to delight in painting human drivers as dangerous and unsafe, compared to their chosen deus in machina, but the evidence so far is that none of the current crop of AVs are as safe as the average human.

          And won't be for some time.

          And when I hear self-driving, I want to see actual SELF-DRIVING. Like this: Terminator 2 Truck Chase Scene

          1. Allan George Dyer Silver badge
            Joke

            @ Destroy All Monsters - "I want to see actual SELF-DRIVING. Like this: Terminator 2 Truck Chase Scene"

            1) Driving in a restricted area without authorisation

            2) 00:07 collision with a stationary obstruction (car wreck)

            3) 00:22 not giving way when crossing a road resulting in near-collision, causing other road users to sound horn

            4) 00:27 collision with stationary obstruction (shopping trolley)

            5) 00:28 collision with stationary obstruction (wall)

            6) 00:33 collision with stationary obstruction (bridge)

            7) 00:39 throwing an object (broken windscreen) from a moving vehicle

            8) 00:44 collision with a moving vehicle (motorcycle)

            9) 00:51 driving without working brakes

            10) 01:08 collision with stationary obstruction (wall)

            11) 01:11 collision with stationary obstruction (other wall)

            12) 01:14 collision with stationary obstruction (first wall, again)

            13) 01:15 loading/unloading passenger when the vehicle is not stationary

            14) 01:18 collision with vehicle (motorcycle)

            15) 01:29 collision with stationary obstruction (bridge)

            16) failure to report an accident to the police

            I probably missed a few...

            1. Destroy All Monsters Silver badge
              Trollface

              I probably missed a few...

              As they say:

              "Problem, Officer?"

        2. Justthefacts

          Evidence?

          “None of the current crop of AVs”

          Google cars have currently driven 120 million miles with zero fatalities, zero serious injuries, and a handful of fender benders.

          That is definitely better than human average for fender benders, definitely better than human average for serious injuries (by a factor of several), and no worse than human average for fatalities.

          It is not yet as good as a *good, experienced* driver. And it will be as good, when it has sufficient miles under its wheels. Which is exactly the same as we say for the 17-year-olds when they pass their tests, and we all shut our eyes and wish them good luck. Remember, it takes *ten years* of development to get a 17-year-old to be as safe as a 27-year-old!

          I think it’s the “human driving advocates” who are cherry-picking AV incidents and painting them as unsafe.

          1. Alister Silver badge

            Re: Evidence?

            Google cars have currently driven 120 million miles with zero fatalities, zero serious injuries, and a handful of fender benders.

            Maybe collectively they've managed to accumulate that number of miles, although I doubt it, but each individual car can't possibly have accrued that much.

            That is definitely better than human average for fender benders, definitely better than human average for serious injuries (by a factor of several), and no worse than human average for fatalities.

            Again, average cumulative statistics make a nonsense of this argument.

            There are millions of drivers around the world who have each driven for years and years without ever being involved in an accident. The statistics are skewed by the small minority of drivers who are incompetent or reckless. In contrast, there are a vanishingly small number of Google AVs, and yet they have managed between them to accrue an impressive collection of bumps.

            Until an individual AV can match the record of an individual, competent human, then a fair comparison cannot be made. And this will obviously take a long time.

          2. T. F. M. Reader Silver badge

            Re: Evidence?

            @Justthefacts: "Google cars have currently driven 120 million miles..."

            Citation needed. I got interested and checked (took me a few minutes). Waymo (Alphabet's autonomous vehicle arm that grew out of the X Lab project) reported 5M miles driven on public roads by February 2018. This is since October 2015, which on average means 2.5M miles/yr (if we assume that in the first few months there was a ramp-up from zero then we'll just count Feb 2016 through Feb 2018, OK for order of magnitude estimates?).

            As a baseline for comparison, there are more than 250M registered cars in the US (mostly passenger cars), driving on average 15K miles/yr. This is 1.5 MILLION times more miles per year than the whole of "Google's fleet". There are 6.3M road-accident-related claims per year involving something like 12M vehicles (the numbers are from 2015-2016 and seem to be broadly consistent with each other). So, let's take 6.3M as a proxy for the number of accidents per year, including everything from fatalities to fender-benders, regardless of whose fault it is. To claim better safety, Google/Waymo must show fewer than 4.2 accidents/yr.

            I found that useful stats on Waymo accidents are not easy to unearth. E.g., Waymo's own "safety report" does not have the numbers, just details on how hard they work on it. However, it is waaay higher than 4.2/yr. The graph here (some aspects look problematic, but it was easy to find) indicates something like 600-700 crashes per 100M miles (~30 over the 5M miles actually driven - seems correct, as there certainly have been a few dozen reports) rather than the ~168 per 100M miles that would put Waymo on a par with humans on average. The graph illustrates that the only two categories of drivers who are worse than Waymo are youngsters and the very elderly (something that you point out, too, but then letting youngsters drive is the only way to make them safe drivers).
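
            For anyone who wants to check that arithmetic, the estimate reduces to a few lines (all inputs are the rough figures quoted in this post, so treat it as an order-of-magnitude check and nothing more):

```python
# Order-of-magnitude check of the figures quoted above.
us_cars = 250e6         # registered cars in the US
miles_per_car = 15e3    # average miles/yr per car
us_accidents = 6.3e6    # accident claims per year (proxy)

us_miles = us_cars * miles_per_car     # ~3.75e12 miles/yr
human_rate = us_accidents / us_miles   # accidents per mile driven
per_100M = human_rate * 100e6          # ~168 per 100M miles

waymo_miles_per_yr = 2.5e6                 # ~5M miles over two years
parity = human_rate * waymo_miles_per_yr   # ~4.2 accidents/yr

print(f"human average: {per_100M:.0f} accidents per 100M miles")
print(f"Waymo parity threshold: {parity:.1f} accidents/yr")
```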

            This, by the way, does not take into account at all how many accidents have been avoided by mandatory humans taking control.

            So, for my money any claims that Waymo (in this case - and they seem to be the best in class, far ahead of competition) are already safer than humans are not confirmed at any level beyond handwaving. Statements like "94% of accidents are due to human error" are, by the way, pure handwaving: there is no one else in today's normal car who makes complex decisions, so the number is meaningless, apart from "car components don't break down and cause accidents often".

            1. This post has been deleted by its author

          3. Ken Hagan Gold badge

            Re: Evidence?

            "Remember, it takes *ten years* of development to get a 17 year old to be as safe as a 27 year old!"

            Actually, no, unless the development you are talking about is a stabilisation of hormones. Try getting two insurance quotes for "just past my test", one for a 17yo and the other for a 27yo. Compare the prices. *That's* what the actuarial evidence has to say about the 10 years of development.

          4. Mike 137

            Re: Evidence?

            If you want evidence of what is required for confidence in this technology being safe, see the US Consumer Watchdog report. I quote: 'Google/Waymo claims that its computer-controlled vehicles have logged 300 years of human driving experience. But the testing that would be required in order to match the safety tolerance of commercial airplanes is estimated at over one hundred millennia. A lower level of safety – “a level of 80 percent confidence that the robotic vehicle is 90 percent safer than human drivers on the road,” would still require 11 billion miles of testing (or about 5,000 years), according to researchers at the University of Michigan'.
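
            For a feel of where numbers like that come from, the most optimistic possible bound is the zero-failures case. A sketch (the human fatality rate is an assumed round figure, and this is only an illustrative lower bound, not the University of Michigan method quoted above, which compares two accident rates statistically and therefore needs far more miles):

```python
# Illustrative lower bound only -- not the Michigan-style rate
# comparison quoted above, which requires far more miles.
import math

human_fatal_rate = 1.1e-8             # ~1.1 fatalities per 100M miles (assumed)
target_rate = 0.1 * human_fatal_rate  # "90 percent safer"
confidence = 0.80

# Fatality-free miles needed to claim rate <= target at confidence C
# (zero-failure Poisson bound: n = -ln(1 - C) / rate).
miles = -math.log(1 - confidence) / target_rate
print(f"{miles / 1e9:.1f} billion fatality-free miles")  # ~1.5 billion
```

            Even that optimistic floor is hundreds of times the mileage actually driven to date; the 11 billion figure above comes from the harder problem of comparing two accident rates.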

          5. Loud Speaker

            Re: Evidence?

            Google cars have currently driven 120 million miles with zero fatalities,

            Obviously, none of them in Lagos.

        3. John Miles

          Re: The overwhelming majority of human drivers manage to drive safely most of the time.

          They manage to avoid having an accident - I am not sure I'd count that as the same thing as driving safely. Nothing is without risk, so safe is a relative term, but a lot of drivers push the risks higher than they need to be for very little, if any, gain.

    2. Roland6 Silver badge

      >Who really gives two hoots about a leafy Oxfordshire simulator?

      My first thought on reading about the "32km of Oxfordshire roads" was the US vehicle emissions testing regulations and the VW test-defeat code...
