Disengage, disengage! Cali DMV reports show how often human drivers override robot cars

Mercedes' driverless cars need human intervention approximately every 2.08km (1.3 miles), and other makes are totally reliant on frequent switching to manual, according to figures out this month from the California Department of Motor Vehicles. The "disengagement reports" (the times an autonomous car was taken over by the …

  1. Hans Neeson-Bumpsadese Silver badge

    These figures relate to California, and a lot of the stuff that I've read about driverless cars relates to tests being conducted in relatively sunny climes. I wonder if any info is available (or even how much testing has been done) for how well the tech copes in rain/sleet/snow/fog/British weather conditions?

    1. Bill M

      The British weather could be problematic, but they must have an auto shutdown that engages if my Auntie Maud is detected closer than 10 miles.

    2. Mark 85

      I'm wondering more how these cars would fare in, say, New York City or even Los Angeles. My suspicion is that around Silly Valley, the traffic is pretty decent and well-mannered compared to those two places.

      Disclaimer: I've driven in both those cities but haven't had the pleasure yet of visiting Silly Valley.

      1. Solmyr ibn Wali Barad

        IMHO the ultimate test would be winter in Moscow. Good reactions won't be enough, you have to have some form of sixth sense to cope there.

        1. vtcodger Silver badge

          The ultimate test is possibly Winter driving in Boston, which combines abundant snow and ice, an improbable road network, gigantic potholes, an attitude that traffic laws are advisory, not mandatory, and drivers operating according to the prime commandment that the entity with the least to lose from a collision has the right of way.

          1. Anonymous Coward

            Right of way to the stupidest

            "to the prime commandment that the entity with the least to lose from a collision has the right of way"

            Sounds like North London food delivery scooter muppets and Uber drivers have trained there.

  2. Baldrickk

    I still think that maybe we don't have the full information.

    The article mentions the difference between manual and automatic switching of control back to the driver. How much of this is because of a different testing strategy, and how much is because of failure of the control system?

    Seems like Waymo is doing all right.

    I half suspect that once one company has cracked it, it'll become a de facto (if not enforced) standard. The marketing seems to write itself.

    I also quite firmly think that anything beyond what we pretty much already have with Tesla "Autopilot", but falling short of fully autonomous, is doomed to fail as a product - purely because easily distracted bags of meat won't be ready to take over the controls in an emergency.

  3. TRT Silver badge

    I think the figures are fairly impressive. It would be interesting to see how they correlate with the actual capability of the auto-drive itself. I mean, once every 5 miles on a motorway is good for a semi-autonomous lane-changer point-and-shoot system, but less impressive than, say, once every 0.5 miles by a full A-to-B sat-nav-style autopilot operating in a dense urban jungle scenario: traffic lights every 50 yards, Lycra-clad couri-kazi pilots cycling zigzag between the gridlock, phombies walking across your path in full-on Oxford Street style, and a street scene with more visual clutter than the painted record of a fight between Jackson Pollock and Jean-Michel Basquiat.
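
    A back-of-the-envelope illustration of why the raw figures can't be compared directly, with entirely invented numbers (the difficulty weights especially are pure assumption):

    ```python
    # Toy comparison of disengagement rates across driving contexts.
    # All figures are invented for illustration, not taken from the DMV reports.

    systems = {
        # name: (miles driven, disengagements, assumed environment difficulty)
        "motorway lane-changer":  (5000, 1000, 1.0),    # ~1 per 5 miles, easy roads
        "urban A-to-B autopilot": (5000, 10000, 10.0),  # ~1 per 0.5 miles, dense city
    }

    for name, (miles, disengagements, difficulty) in systems.items():
        raw = miles / disengagements          # miles per disengagement
        weighted = raw * difficulty           # crude context adjustment
        print(f"{name}: {raw:.1f} mi/disengagement raw, "
              f"{weighted:.1f} difficulty-adjusted")
    ```

    On those made-up weights the two systems come out identical, which is the point: the raw figure alone doesn't tell you which system is more capable.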

    1. tiggity Silver badge

      up-vote for the Jack the Dripper imagery

    2. John Brown (no body) Silver badge

      Depending on the levels of transparency and disclosure, the proposed 200 mile trip around the UK ought to be interesting in terms of manual interventions and/or control systems "giving up".

      You are right to point out that a manual intervention is not the same as the control system flinging its hands in the air and screaming for the meatsack to take over. Intervention implies the control system was about to do something bad and had to be stopped from doing it as opposed to "realising" it might not be able to handle an upcoming situation and requesting help. Both types need to be counted and assessed.

      1. Doctor Syntax Silver badge

        You are right to point out that a manual intervention is not the same as the control system flinging its hands in the air and screaming for the meatsack to take over. Intervention implies the control system was about to do something bad and had to be stopped from doing it as opposed to "realising" it might not be able to handle an upcoming situation and requesting help.

        Both your cases point to the fact that when the going gets tough the AI can't cope. As long as that remains the case, perhaps you should reverse your vocabulary and refer to the "bag of spanners" and the "capable human driver".

  4. Richard Jones 1
    WTF?

    Override Idiotic Wetware Drivers Option Please

    Where I live it would be nice if some of the wetware drivers had an automatic switch-over to something else. One road has a 30mph temporary limit and severe lane restrictions due to construction work just round a corner. It would be great if those drivers were overridden into obeying the No Overtaking signs (it is a single lane at this point) and the 30mph limit. My car has no problem automatically recognising both the limit and the overtaking restriction signs and showing them on the display module.

    1. AS1

      Re: Override Idiotic Wetware Drivers Option Please

      At a very minimum, automatic speed limiters would improve compliance and reduce driver stress (especially if the proposal for prosecution at +1 mph is taken forward). Given the turnover of cars, within five years traffic would be self-regulating as regards speed.

      It would be a first step towards full automation, along with lane following and dynamic cruise controls that are already available.

      1. LucreLout

        Re: Override Idiotic Wetware Drivers Option Please

        At a very minimum, automatic speed limiters would improve compliance and reduce driver stress

        Personally, I don't want higher limits in town; for most urban streets 30 is fine. However, they'd be inevitable with enforced speed limiters. The only reason the roads move as well as they do is that most people ignore most limits most of the time. Before jumping to dispute this, try driving on any national speed limit dual carriageway at any time of day and sticking rigidly to the limit. You'll notice you're constantly being overtaken by all manner of road users.

        You'd also have to automate "overtake mode" to eliminate the all too frequent situation where a vehicle restricted to a lower speed limit is dragging along 30+ cars in its wake because those at the front lack the skills to overtake safely or the knowledge to pull back and increase stopping distances such that the rolling road block may be passed in sections rather than one hit. I imagine that will scare the hell out of the first few passengers to experience it, especially when conditions change and the car has to retreat back into the stack after beginning an overtake. For this reason I imagine most bikers would resist having a limiter too.

      2. TRT Silver badge

        Re: Override Idiotic Wetware Drivers Option Please

        Automatic speed limiters should be a new class of vehicle on the driving licence, and anyone who loses their licence for speeding should, once their ban is spent, ONLY be allowed to make use of that class.

        Discuss this and the extension to full self-driving vehicles and implications of driving licences etc.

        1. TRT Silver badge

          Re: Override Idiotic Wetware Drivers Option Please

          Hm. At least two people on here have accumulated enough points through speeding to have lost their licences, I see.

      3. Doctor Syntax Silver badge

        Re: Override Idiotic Wetware Drivers Option Please

        "At a very minimum, automatic speed limiters would improve compliance and reduce driver stress"

        And cause accidents every time a driver needed to accelerate out of a situation.

  5. Donchik

    Failed Logic

    The real issue with AV proposals is that full AV is still a distant dream, and one that many do not even want.

    All the other partial solutions are based on the failed logic of having a human oversee a computer. Humans are incredibly poor at these kinds of tasks, and such systems were long ago excluded as unsafe. Early aircraft autopilots are the clearest example of such failures, with pilots coming to trust the autopilot over looking out the window, only to see the ground approaching unexpectedly!

    Until full AV systems are available that do not require human intervention, the systems should only be driving aids, for example warning or intervening as necessary to support the driver. Computer support of the driver has great potential, but a human MUST ultimately be in control.

    1. Anonymous Coward

      Re: Failed Logic

      Unfortunately there is a problem with the use of computers (AV or whatever the latest buzzword is) to control cars - what happens if someone installs a program during one of the routine service calls that turns the car into a killing machine when certain conditions are met?

      There is an SF story that I read many years ago based on that idea, with one of the triggering factors being a full moon.

  6. Anonymous Coward

    One intervention needed per 2km....

    So slightly safer than the average driver in the UK then?

    1. Rebel Science

      Re: One intervention needed per 2km....

      You're kidding? We would all be dead if that were true. Without intervention, the self-driving cars would suffer catastrophic accidents.

      1. Stoneshop
        Boffin

        Re: One intervention needed per 2km....

        We would all be dead if that were true.

        Errr, no. For every driver killed, traffic density decreases, even if infinitesimally at first. But with every such accident the traffic density, and with it the accident rate, will go down, with the accident rate asymptotically approaching the background level of 'immovable solid object fails to yield to vehicle'.
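
        For the pedants, the argument as a toy Python loop (every number invented, obviously):

        ```python
        # Tongue-in-cheek sketch: accidents thin the traffic, which lowers the
        # accident rate, which asymptotically approaches the background level of
        # 'immovable solid object fails to yield to vehicle'. Figures invented.

        density = 1000.0     # vehicles on the road
        background = 5.0     # accidents/year involving immovable solid objects
        k = 0.01             # per-vehicle contribution to the accident rate

        for year in range(1, 6):
            rate = background + k * density   # total accident rate this year
            density -= rate                   # each accident removes a driver
            print(f"year {year}: rate {rate:.1f}, density {density:.0f}")
        ```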

  7. Rebel Science

    Fully Autonomous Vehicles Will Be Science Fiction for the Foreseeable Future

    Fully autonomous vehicles are way beyond what current AI technologies can handle. A major breakthrough in AGI must happen before we realize this dream. One thing is certain: it will not happen with Deep Learning. A deep neural net is really an expert system and, as such, it suffers from the same fatal flaw: it fails catastrophically every time it encounters a situation for which it has not been trained. This is unsuitable for real-world applications where safety is a must.

    To all big time investors: Do not waste money investing in any project using deep learning to achieve full driving autonomy. It's a waste of time and money. Invest in AGI research instead.

  8. Anonymous Coward

    I can't see full AV either

    First problem: it assumes maps are entirely reliable. Second problem: it assumes meatbags will not accidentally, or accidentally-on-purpose, create accidents.

    Third problem is real life: potholes, collapses in the road, neighbouring trees/cliff sides, etc.

    Fourth problem is destination parking. Do I expect to have my driveway (and any friends' driveways) mapped? What about the field I park in for the summer fete? Or the loading bay of the factory?

    On main roads, yes, nice idea, but the idea of thumbing down a Johnny Cab still appears a bit far in the mists of future time.

    Overall, though, measuring the number of driver interventions as an indicator of success/safety is also entirely unhelpful: local road conditions, weather, poor junction designs and indoor car parks in GPS shade can all be a factor, as there are no consistent test conditions (for which I am grateful, as consistent conditions would reproduce the fuel-economy cheating effects).

    Chasing the impossible dream, there probably should be a song about that...

    1. Seajay#

      Re: I can't see full AV either

      Firstly, does it assume maps are entirely reliable? I don't think that's true at all; they read road signs and markings (not perfectly at present, but they're getting there):

      https://www.citylab.com/transportation/2017/02/how-to-teach-a-car-a-traffic-sign/516030/

      The second problem doesn't seem like a driverless cars problem to me. If a human decides they want to deliberately cause an accident by driving on to my side of the road immediately before we are about to pass in opposite directions, there's not a damn thing I can do about it, and there's nothing a driverless car can do about it either. I don't see how AVs make that worse.

      Third problem is related to the first. That's only a problem if you assume that AVs navigate solely by GPS and maps, but that's not true; they navigate by GPS, maps, cameras and (maybe) lidar.

      The fourth problem is a fairly minor UI issue: how do I indicate where I want the car to go if there is no map of the area? Well, either the car takes me to the nearest mapped road and then puts me into a semi-manual mode for the last 10m, where I direct the car with a joystick and it continues to handle all the collision avoidance and actual control of the car, or I indicate on a satellite map where I want it to go, or something similar. Even if the answer is that no, you can't park anywhere that's not on the map, that's still not fatal for AVs; there just needs to be some way for you to get your own drive added to the map, and you'll have to accept that you're otherwise going to have to park in car parks or on the road side.

      I can see some arguments that getting the last 20% of the way to AVs is going to need much, much more work than the first 80%, but I can't see anything which is a complete show-stopper.

      1. John Brown (no body) Silver badge

        Re: I can't see full AV either

        "they read road signs and markings"

        Thanks to "austerity", road markings seem to be getting less and less visible these days. Re-painting costs money and councils don't seem to have any. I'm not sure I want to be in an AV on a dark rainy night. Not to mention when snow covers the markings up. Likewise, more and more road signs seem to be disappearing behind foliage. More "austerity" cut-backs.

        1. Baldrickk

          Re: I'm not sure I want to be in an AV on a dark rainy night.

          Darkness will not be an issue, at least not if the car is using Lidar - it doesn't rely on ambient light.

          Rain will be a problem, but only insofar as it is to humans too - it's a physical obstruction.

  9. Jason Bloomberg Silver badge

    And for comparison...

    How often do passengers have to shout "Oi!" at a driver who needs to snap back into focus?

    I know I'm not the only person to have gone 'fully autonomous' with no recollection at all of parts of the journey I have just undertaken. Or had to take 'late action' because I hadn't fully absorbed the situation earlier.

    Why autonomous control is disengaged is important. If it's just because the system or meat sack wasn't confident about what was coming up and it was an informed decision that seems fair enough. If it's because disaster is imminent that's more worrisome.

    1. Threlkeld

      Re: And for comparison...

      A common experience. In 1947, Robert M. Coates wrote a science fiction story in the New Yorker about it, called "The Hour After Westerly".

      https://www.newyorker.com/magazine/1947/11/01/the-hour-after-westerly

      When automated driverless cars can write imaginative short stories, THEN we will need to worry.

  10. FelixReg

    Optimistic

    Read through these reports. In particular, Waymo and Cruise. They are logging the most miles and their trend is clear. The latest reported months have a lot more miles and a lot fewer disengagements.

    Cruise notes why they drive in Frisco instead of other places: it's a harder environment than suburbia or highways, so they learn faster.

    Remember, these guys *want* disengagements. Each disengagement can be gone over like an airliner crash. Replayed millions of times, varying the parameters. When you run out of disengagements, you have a problem learning, don't you?
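
    A minimal sketch of what "replayed, varying the parameters" could look like (the scenario and every figure are hypothetical, not from any vendor's actual tooling):

    ```python
    # Replay a logged disengagement in a toy simulation, sweeping one parameter
    # (here, effective detection range) to see which values would have coped.

    def replay(detection_range_m, speed_ms=13.4, decel_ms2=6.0, obstacle_at_m=25.0):
        """True if braking from the detection point stops short of the obstacle."""
        stopping_m = speed_ms ** 2 / (2 * decel_ms2)     # v^2 / 2a
        brake_from_m = min(detection_range_m, obstacle_at_m)
        return stopping_m <= brake_from_m

    for rng in (10, 20, 30, 40):
        outcome = "stops in time" if replay(rng) else "would disengage"
        print(f"detect at {rng} m: {outcome}")
    ```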

    1. Seajay#

      Re: Optimistic

      This could be one of the greatest relative strengths of AVs. Once they're out there you don't necessarily have to wait for a crash or disengagement to learn. Here's an interesting blog post from Tesla on this subject. It describes how they use fleetwide learning to whitelist particular radar returns in particular areas to avoid false positives.

      https://www.tesla.com/en_GB/blog/upgrading-autopilot-seeing-world-radar

      As well as recognising specific items, presumably it could be used more generally, even in cases where there was no collision and no false positive causing unnecessary emergency braking. E.g. my car sees something which it assumes is a far-away truck; as we get closer, it realises that it is a nearby van. It can send to Tesla, "image A was incorrectly classified as a truck", and if Tesla get lots of those they can tweak the image classifier.

      Similarly, on the control side it can say: I wanted to change course from A to B, so I applied control input X. I actually ended up on course C and therefore applied correction Y to end up on course B. As far as the passenger is concerned nothing bad happened, but my AV knows that its internal model of the car's dynamics must be wrong. It can send that to Tesla, who can either say "get your suspension / tracking / tyre pressures checked" or, if they see it from lots of cars, change the control model.
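
      A rough sketch of that fleet-feedback loop (the names and threshold are invented; Tesla's actual pipeline is not public):

      ```python
      # Cars upload "I classified X, it turned out to be Y" reports; the operator
      # only acts once many independent cars agree the classifier was wrong.

      from collections import Counter

      reports = [
          # (observation id, predicted label, actual label)
          ("image_A", "truck", "van"),
          ("image_A", "truck", "van"),
          ("image_A", "truck", "van"),
          ("image_B", "sign", "sign"),   # correct - nothing to learn here
      ]

      mistakes = Counter((obs, pred, actual)
                         for obs, pred, actual in reports if pred != actual)

      RETRAIN_THRESHOLD = 3  # assumed: ignore one-off errors, act on patterns
      for (obs, pred, actual), n in mistakes.items():
          if n >= RETRAIN_THRESHOLD:
              print(f"{obs}: {n} cars saw '{pred}' -> '{actual}'; flag for retraining")
      ```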

      1. FelixReg

        Re: Optimistic

        From the Tesla report:

        Additionally, because Tesla is the only participant in the program that has a fleet of hundreds of thousands of customer-owned vehicles that test autonomous technology in “shadow-mode” during their normal operation ..., Tesla is able to use billions of miles of real-world driving data to develop its autonomous technology. In “shadow mode,” features run in the background without actuating vehicle controls in order to provide data on how the features would perform in real world and real time conditions. This data allows Tesla to safely compare self-driving features not only to our existing Autopilot advanced driver assistance system, but also to how drivers actually drive in a wide variety of road conditions and situations.

        Put another way, Tesla is Big-Brothering their cars and can conduct a Delphi Poll on what a good driver does in very, very many circumstances.

        1. Pascal Monett Silver badge

          To take the devil's side

          We are at the dawn of autonomous vehicles. I do believe that we need all the data now in order to be able to properly program the damn things for later.

          So yeah, Big Brother it may be, but that should mean needing less of the Red Cross later on.

        2. Doctor Syntax Silver badge

          Re: Optimistic

          "Tesla is Big-Brothering their cars and can conduct a Delphi Poll on what a good driver does in very, very many circumstances."

          What it can't do is record why the driver did it if the Tesla system didn't register the problem. E.g. the driver recognises from the behaviour of a pedestrian that they're about to side-step off the kerb, and brakes in anticipation. The system will record the pre-emptive braking followed by the entry of the pedestrian onto the roadway, but the actual movement of the pedestrian will only be recorded as a random action at the time it actually happened. The driver, being a sentient being like the pedestrian, can see that the pedestrian is unsteady, is being confronted by another aggressive pedestrian or whatever, and has sufficient understanding to realise what they, the driver, would do in that situation.

          The critical word in the previous sentence is "understanding". That's the difference between man and machine.

          1. Seajay#

            Re: Optimistic

            True, AVs are unlikely to be able to get that degree of reading the intentions of pedestrians for an extremely long time. But how big a deal is that? If you're moving through at 30mph, an average human probably isn't going to spot that amount of detail either.

            For a human driver to have the time to see evolving dynamics between pedestrians, and the spare capacity to be watching them in the first place, you've got to be doing, what, 10mph? Less? At that speed the stopping distance of an AV is just a few feet. So while the human driver might be better in the sense that they can spot it earlier and brake sooner, they're probably not better in the sense of actually preventing any accidents.
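
            The "few feet" claim checks out on the standard v²/2a stopping-distance formula. A quick check in Python (the ~0.8g deceleration figure is an assumption for dry-road braking):

            ```python
            # Stopping distance from first principles: d = v^2 / (2a).

            def stopping_distance_m(speed_mph, decel_g=0.8):   # ~0.8 g assumed, dry road
                v = speed_mph * 0.44704        # mph -> m/s
                a = decel_g * 9.81             # g -> m/s^2
                return v ** 2 / (2 * a)

            for mph in (10, 30):
                d = stopping_distance_m(mph)
                print(f"{mph} mph: {d:.1f} m ({d * 3.28:.0f} ft) to stop")
            # 10 mph comes out around 1.3 m (~4 ft); 30 mph around 11.5 m (~38 ft).
            ```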

  11. Yavoy

    So because Mercedes disengages every mile, Waymo's cars aren't ready?

    That seems to be the gist of the article.

    1. G Mac
      Childcatcher

      Paradoxically, yes

      Check this article about Air France flight 447 that crashed into the Atlantic:

      http://www.slate.com/blogs/the_eye/2015/06/25/air_france_flight_447_and_the_safety_paradox_of_airline_automation_on_99.html

      The gist is that because so much regular airtime is on autopilot, when the system disengages the pilots are less able to assess and take over the plane because they are out of the loop and out of practice.

      So with a car that disengages every mile or two the driver will at least have retained most driving skills, and in fact will be pretty much waiting for the disengagement.

      On the other hand, if it disengages every 5,500 miles, the chances that the driver will be able to react (and is even awake! nap time, right?) would be pretty slim.

      After reading the Slate article, I came to the conclusion that for self-driving cars it is a case of all or very little: either 0 disengagements, or so many that the driver is still pretty much engaged. Of course, I could be a pessimist about this.

      1. Yavoy

        Re: Paradoxically, yes

        Ignoring the fact that this has nothing to do with my point, I still disagree with the 0 disengagements policy.

        People crash as well. So long as the rate of disengagements is less than the rate of crashes of the driver, the self driving car is still safer.

        1. Doctor Syntax Silver badge

          Re: Paradoxically, yes

          "So long as the rate of disengagements is less than the rate of crashes of the driver, the self driving car is still safer."

          That means we have a very long time to go before the self driving car reaches the standard of an inexperienced driver.

          1. TRT Silver badge

            Re: Paradoxically, yes

            20 years on from the advent of the fully automatic self-driving car, how will the next generation of learner driver acquire enough experience to be able to hold a driving licence?

            1. Baldrickk

              Re: Paradoxically, yes

              Why would they need one, if their car is fully self driving?

      2. katrinab Silver badge

        Re: Paradoxically, yes

        It's not so much that.

        In a manually driven car, you know what your car is doing at any moment in time because you told it to do it, and you know why you told it to do it.

        In a self-driving car, if you spot a problem, you have to figure out what the car is currently doing before you can take over, and that takes about 25 seconds. If you are driving at the speed limit on a British motorway, your car will travel about 780 metres in that time, which is way outside your current field of vision.
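
        The arithmetic, for anyone checking (70 mph being the British motorway limit):

        ```python
        # Distance covered during a 25-second takeover at the motorway limit.

        speed_mph = 70
        takeover_s = 25

        speed_ms = speed_mph * 0.44704        # ~31.3 m/s
        distance_m = speed_ms * takeover_s
        print(f"{speed_mph} mph for {takeover_s} s = {distance_m:.0f} m")  # ~782 m
        ```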

  12. LucreLout

    Disparity in distance

    I wonder if the disparity in distance between interventions might be in part explainable by the time of day the vehicles operate (peak / off peak), and the type of roads and directions in which they're driven.

    Head into London on the M1 at 8am on Monday morning and you'll need more interventions per mile than heading along the local B-road at 3am on Sunday morning.

    1. Andy 18

      Re: Disparity in distance

      From experience in a Tesla Model S, this is actually the reverse of where we are now. I'd trust the car completely driving along the M1 in heavy traffic (I drove from Bristol to Sheffield on motorways around Christmas and only intervened to change motorway or go into the supercharger). B-roads just don't have enough information for the car to decide what to do (no lane markings, no edge markings, corners you can't see around, huge puddles, cattle, horses, unexpected stationary farm vehicles and so on).

  13. Stoneshop
    Headmaster

    Correction

    "While the above data shows that driverless tech is indeed still a while away "

    ITYM miles away.

  14. Helstrom

    GPS Nightmares Made Worse?

    My 2017 vehicle still has maps from 2009 and there are no updates listed. When updated maps are available they will cost me US$250. When I drive home my vehicle insists I'm off-roading at 120km/h because the highway was realigned two years ago. Unless mapping updates become instant and automatic I can foresee our autonomous future ending in tears...

    1. John Brown (no body) Silver badge

      Re: GPS Nightmares Made Worse?

      Sounds like you bought from a manufacturer who cheaped out on a proprietary satnav instead of licensing one that gets updates.

      1. TRT Silver badge

        Re: GPS Nightmares Made Worse?

        Mine is in a similar state, and it wasn't that they cheaped out, but that the system must accept inputs from wheel rotation, inertial sensors, steering angle etc to supplement GPS. At the time of manufacture (2006) there wasn't an alternative other than to design it yourself. I'm not sure even today that there's a commercial system that accepts and balances alternative positional information.
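
        The blending described above is essentially dead reckoning corrected by GPS whenever a fix exists. A minimal sketch of the idea (the 0.9/0.1 blend weight is an arbitrary assumption, not from any real product):

        ```python
        import math

        x, y, heading = 0.0, 0.0, 0.0   # position estimate (m) and heading (rad)

        def dead_reckon_step(wheel_speed_ms, yaw_rate_rads, dt, gps_fix=None):
            """Advance the estimate from wheel/steering data; nudge toward GPS if present."""
            global x, y, heading
            heading += yaw_rate_rads * dt
            x += wheel_speed_ms * math.cos(heading) * dt
            y += wheel_speed_ms * math.sin(heading) * dt
            if gps_fix is not None:                     # fix available: blend it in
                gx, gy = gps_fix
                x, y = 0.9 * x + 0.1 * gx, 0.9 * y + 0.1 * gy

        # Truth: straight line at 10 m/s. Wheel speed over-reads 2% (worn tyres),
        # so the estimate drifts once GPS drops out (indoor car park) after 2 s.
        for t in range(5):
            true_x = 10.0 * (t + 1)
            fix = (true_x, 0.0) if t < 2 else None
            dead_reckon_step(10.2, 0.0, dt=1.0, gps_fix=fix)
            print(f"t={t+1}s: estimate x={x:.1f} m (truth {true_x:.0f} m)")
        ```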

      2. Doctor Syntax Silver badge

        Re: GPS Nightmares Made Worse?

        "Sounds like you bought from a manufacturer who cheaped out on a proprietary satnav instead of licensing one that gets updates."

        And if your car is useless without the frequent updates the vendor of those updates will have you by the balls every time your subscription is due.

  15. ratfox
    Paris Hilton

    While the above data shows that driverless tech is indeed still a while away from being fully reliable, none of the autonomous vehicle makers that we can think of are claiming to be so, and certainly don't intend to be any time before 2020-ish.

    I thought that Waymo was going to start a fully-automated service in Phoenix this year?

  16. User McUser
    Joke

    The "Kitty Hawk" moment

    If we define "self-driving" cars the way we defined "powered flight" after Kitty Hawk, then we're well past that point and have moved on to this: https://youtu.be/Sp7MHZY2ADI

    1. FelixReg

      Re: The "Kitty Hawk" moment

      Great video! Self-driving cars are way, way beyond those early flight efforts. Here's an ancient (1 year old) video from Cruise, put out to encourage talent to apply for jobs:

      https://www.youtube.com/watch?v=6tA_VvHP0-s

    2. John Brown (no body) Silver badge

      Re: The "Kitty Hawk" moment

      The two most amazing things I take away from watching that video:

      1. The number of people prepared to handle untested machines directly underneath spinning blades of death.

      2. The number of inventors who had obviously not tested their inventions AT ALL before inviting the press and crowds of onlookers.

      Having said that, I think I've seen all of those clips many times each over the years, but not so many in one sitting so as to get a decent overview of the bravery (or stupidity) of human inventors :-)

      1. Baldrickk

        Re: The "Kitty Hawk" moment

        Have you considered that these were often the first actual tests that they ran? They would have been a spectacle regardless of whether they worked or not - and if they did, they would have wanted some proof.

        The only parts of that footage that appear to be at a proper show are the three shots of the same aircraft - the one flying into the mock house. It is notable that the design of that aircraft looks more like a properly viable aircraft. Indeed its flight was stable, if on a collision course with a solid object that it appeared unable to avoid.

  17. TedF

    Sleeping feet...

    A change-over from automatic driving to human driving will always be traumatic. Am I the only person who feels disorientated after being on cruise control for hours and having to wake up their feet as they leave the motorway?

    1. John Brown (no body) Silver badge

      Re: Sleeping feet...

      You must have adaptive cruise control. I can only imagine. Mine is the set-the-speed-and-manually-adjust-as-required type, and even on a long trip I rarely get to use it for long without having to touch the brakes or accelerator more often than one might imagine.

      1. TRT Silver badge

        Re: Sleeping feet...

        Especially hills and dips. Just a couple of mph dropped going up a long hill is enough that your set-speed cruise will catch up with a fixed-accelerator manual driver scarily fast.

        1. Pascal Monett Silver badge

          My cruise control can brake if the speed goes over the set limit. That means that, if I'm on a down slope and there is no one around and no radars, I disengage the CC to benefit from the energy that gravity gives me, and I re-engage it when I'm on the upward slope before going below the set speed again, so my CC keeps me at the proper speed.

          So yes, I exceed the speed limit every now and again. When no one is there to be harmed.

          I am, of course, talking about driving outside inhabited areas.

          1. TRT Silver badge

            Mine increases the motor/generator recharge rate on the downhill so the set speed is maintained, and uses the battery to boost back up the hill. Only the steepest descents cause the speed to rise, when the MG can't sink the energy away.
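
            That behaviour is straightforward to sketch: sink the slope's power into regen until the motor/generator saturates, and only the surplus accelerates the car (all figures below are invented):

            ```python
            # Downhill cruise: regen absorbs the slope's power until it saturates;
            # only then does the surplus accelerate the car. Invented figures.

            SET_SPEED = 31.0      # m/s, ~70 mph
            MAX_REGEN_W = 3000.0  # most the motor/generator can sink (assumed)
            MASS_KG = 1800.0

            for slope_power_w in (1000.0, 2500.0, 5000.0):   # power fed in by the descent
                regen_w = min(slope_power_w, MAX_REGEN_W)
                surplus_w = slope_power_w - regen_w
                dv = surplus_w / (MASS_KG * SET_SPEED)       # P = m * v * dv/dt, over 1 s
                print(f"slope feeds {slope_power_w:.0f} W: regen {regen_w:.0f} W, "
                      f"speed creeps +{dv:.3f} m/s per second")
            ```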

  18. PrometheusPB

    AI seldom makes mistakes, people do

    The trouble with self-driving cars isn't so much the AI in them, it's the unpredictability of how stupid humans can be. While AI in general can predict how smart a human is, it has no reference for stupidity, because it is bottomless. It also does not account for willful malfeasance by other drivers, nor what is simply referred to as "insanity". Willful ignorance is at play as well, such as the driver who changes lanes without looking.

    AI is good, but it can never rise to the level of anticipating the phenomenal ability of humans to find a way to make something work, and then completely bin the entirety of the established norms. How about we just teach people how to drive properly? It's not that hard, I do it all the time, and none of my trainees has had an at-fault collision since, in over 20 years. A computer can mess things up; it is humans who excel at making it unfixable.

    A quote from the 1982 movie TRON, in the laser bay:

    [Dr. Walter] Well, computers are just machines, they can't think.

    [Alan] But computers will be thinking soon...

    [Dr. Walter] Oh won't that be grand? Computers and machines will start thinking and the people will STOP...

    "If it is being done wrong, it is often being done by humans"

    It's terrible to think that cognitive ability is considered an inconvenience rather than a virtue. This is what happens when you devalue education.

    1. Doctor Syntax Silver badge

      Re: AI seldom makes mistakes, people do

      "How about we just teach people how to drive properly? It's not that hard, I do it all the time, and none of my trainees has ever had a collision that was their fault since, for over 20 years. A computer can mess things up, it is humans who excel at making it unfixeable."

      Perhaps you should re-read this and ask yourself if the first sentence contradicts the second and, indeed, the entire thesis.

      Also ask yourself: given, as you say, that it's possible to train human beings to drive safely, why isn't it possible to build a machine that can be trained in the same way as a human, with equally good results*? Because it's quite clear that that's not the way self-driving cars are being taught.

      * And, unlike a human, have the taught state cloned into all the other cars.

      1. David Hicklin Bronze badge

        Re: AI seldom makes mistakes, people do

        "Also ask yourself if, given, as you say, it's possible to train human beings to drive safely, why isn't it possible to build a machine that can be trained in the same way as a human with equally good results*. Because it's quite clear that that's not the way self-driving cars are being taught."

        I well remember what my driving instructor told me: he was teaching me how to pass a driving test; it is only after you have passed that you actually learn how to drive.

  19. x 7

    Disengage!

    https://www.youtube.com/watch?v=bE9F5HvIkRQ

  20. Doctor Syntax Silver badge

    "Second was General Motors' Cruise. However, GM's fleet was involved in 22 collisions last year, and two already this year"

    Perhaps the better figure to have quoted would be the number of disengagements that ought to have happened rather than those that did.

  21. Anonymous Coward
    Mushroom

    a loud alarm is sounded

    a] "human intervention approximately every 2.08km (1.3 miles)"

    means people would have to look up from their iPhones and social media regularly

    b] "one disengagement every 9,005km (5,595 miles)"

    means people are asleep and will not wake up unless a loud alarm is sounded

    It's really like teaching your car to drive, with you in the passenger seat and the tech as the learner:

    you have to be aware and focused ALL the TIME

  22. rtb61

    Standards Everybody

    Until there is a set of standards for autonomous vehicles they should absolutely not be deployed. Standards for what size object they can detect, at what distance, and from which directions. Standards for which emergency manoeuvres are allowed or banned: if a large object appears in your path, does the car remain on course and just brake even though an impact will occur, or does it take evasive action, swerving off the road and braking, but running down a bunch of nuns and children at a primary school because they looked like bushes?

    How about servicing rules? Repair anywhere, or does the car automatically drive you to a maintenance station because a service is due, then demand 10% of the vehicle price as an extortion payment before it will return to the road - a $2,500 oil change, because the auto-drive was also checked, as it would be at every single service call? Pay up, or their car, the one they let you use, is not going anywhere.

    So will your car obey you or its original manufacturer? Hmmm, let me guess: of course it will obey its original manufacturer. Fuck you, who the fuck do you think you are? Obey the corporations or the car will deliver you to the nearest active train crossing, whoops.

    Standards, people, rules to abide by. No standards, then they should not be on the roads.

  23. Jtom

    This is one idea whose time has not yet come. There are too many times when an accurate interpretation of a situation must be made. On a NORMAL day, I must: stop for a school bus stopped in the opposite lane (are the cars programmed to recognize the difference between a school bus and a regular bus?); cross a double-yellow line to go around a stopped mail truck delivering mail (can a driverless car recognize the difference between a mail truck delivering mail and one just stopped in traffic?); drive on the opposite side of the road because a flagman is re-routing traffic around construction; and stop for a crossing guard at a school crosswalk.

    But that's not the worst. People are already ignoring their driving and doing other things. If the car is automated, they will completely ignore situations requiring intervention (even with alarms going off). Besides creating dangerous situations, they are going to cause the mother of all traffic jams. People need to be more attentive, not lulled into more complacency.
