Ghost in Musk's machines: Software bugs' autonomous joy ride

Last year, a dark historical landmark was reached. Joshua Brown became the first confirmed person to die in a crash where the car was, at least in part, driving itself. On a Florida highway, his Tesla Model S ploughed underneath a white truck trailer that was straddling the road, devastating the top half of the car. Brown’s …

  1. cbars

    Easy fix

    Train the NN during the Mormon cricket migrations in Idaho, that'll squash a few hundred thousand of the bugs

  2. John Robson Silver badge

    Really??

    "The Joshua Brown crash – driving at full speed into a clearly visible trailer – is arguably one such example as it “would never happen to a human being,” Hollander says."

    Has he never been on the road?

    There are *so* many cases of people driving into things that are perfectly visible (because there are very few things that aren't visible)

    And of course the Joshua Brown crash is another of those - the human behind the wheel didn't brake in response to the trailer either. You could more reasonably attribute the death to the lack of safety features required by law in US HGVs.

    1. Anonymous Coward
      Anonymous Coward

      Re: Really??

      I would also contest whether it was a bug. It was clearly sub-optimal(!), but a bug is where something has been programmed incorrectly.

      In that case it just seems like it wasn't programmed to deal with that eventuality and relied on a different weighting of its sensors in that situation to define its parameters.

      If you have speech recognition, you would not call it a but if it didn't recognise every word or if it didn't recognise a certain accent. You would call it a bug if it recognised the phrase "call mom" but it actually dialled the emergency services.

      With a 'learning' system and pseudo AI there will always be scenarios where it won't be perfect - something like the Fairy Meadows Road is going to be almost impossible for an autonomous car to travel without being specifically programmed for that route, but it doesn't mean it is a bug.

      1. John Robson Silver badge

        Re: Really??

        "If you have speech recognition, you would not call it a but if it didn't recognise every word"

        I like what you did there ;)

        "I would also contest whether it was a bug. It was clearly sub-optimal(!), but a bug is where something has been programmed incorrectly."

        That is also true, I've just got a vision of old film with people carrying a plate of glass across a road ;)

    2. Anonymous Coward
      Anonymous Coward

      Re: Really??

      You could more reasonably attribute the death to the lack of safety features required by law in US HGVs

      I doubt that. JB's two tonne car was doing 74 mph when it hit the truck, it'd be a very impressive side under-run bumper that'd stop that. Even if it had, to avoid a similar fate, the vehicle has to stop in about four feet - which means that even if the bumper, the car body, and the airbags spread the deceleration evenly during the circa 0.05 seconds of the impact (which I doubt) then the driver would be subject to a minimum of about 60 G.
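      The back-of-envelope figure can be checked using nothing but the numbers quoted above (74 mph, four feet, 0.05 seconds) - a quick sketch, assuming constant deceleration in each case:

      ```python
      # Sanity-check the deceleration estimate from the figures quoted above:
      # a 74 mph impact, stopping in ~4 feet / ~0.05 seconds.

      MPH_TO_MS = 0.44704   # miles per hour -> metres per second
      FT_TO_M = 0.3048      # feet -> metres
      G = 9.81              # standard gravity, m/s^2

      v = 74 * MPH_TO_MS    # ~33.1 m/s
      d = 4 * FT_TO_M       # ~1.22 m

      # Constant deceleration over the quoted 0.05 s impact:
      a_time = v / 0.05
      print(f"deceleration over 0.05 s: {a_time / G:.0f} g")   # ~67 g

      # Alternatively, constant deceleration over the quoted four feet:
      a_dist = v ** 2 / (2 * d)
      print(f"deceleration over 4 ft:   {a_dist / G:.0f} g")   # ~46 g
      ```

      Either way the answer lands in the tens of g, so "a minimum of about 60 G" is the right ballpark (the two assumptions aren't quite consistent with each other - stopping uniformly in four feet from 33 m/s actually takes about 0.074 s - hence the spread).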

      JB and his car had a part to play in his demise, but I'm unconvinced that a different trailer design could have saved him. However, the real root cause of this accident is the poor primary safety of US roads, often designed with uncontrolled flat 90 degree junctions on high speed roads (to save on the cost of alternative, safer layouts). These mix high speed through traffic with slow moving traffic crossing at right angles, and thus set up regular high risk conflict movements, regardless of whether a vehicle is self driving or meatsack controlled. Anywhere in the world where there is this toxic (and cheap) mix of high speeds and flat junctions, there's a history of high damage accidents. There are three choices here, none of which has anything to do with self driving cars:

      1) Do nothing, live with the risks and consequences of a cheap road design.

      2) Pay to build or retrofit road layouts with better primary safety.

      3) Pay a bit less for controls such as traffic lights, along with more enforcement at flat junctions, and accept that there's still some risk, and a modest check on through traffic volumes and speed.

      1. John Robson Silver badge

        Re: Really??

        "You could more reasonably attribute the death to the lack of safety features required by law in US HGVs

        I doubt that. JB's two tonne car was doing 74 mph when it hit the truck, it'd be a very impressive side under-run bumper that'd stop that. Even if it had, to avoid a similar fate, the vehicle has to stop in about four feet - which means that even if the bumper, the car body, and the airbags spread the deceleration evenly during the circa 0.05 seconds of the impact (which I doubt) then the driver would be subject to a minimum of about 60 G."

        60g is survivable.

        OK, they have better safety harnesses etc, but F1 drivers often walk away from 50g crashes.

        https://en.wikipedia.org/wiki/Kenny_Bräck came away from a 200+g crash, and returned to racing...

        It wouldn't have prevented all injury, but it would have made a significant difference to the chances of survival (which were always ~0 without the bars). You take the collision down in speed over the first four feet and the A pillars would probably do more 'lifting' of the trailer, and get you even more deceleration time.

        You get to the point where survival is a possible outcome - and not a completely fluke one either. That's even ignoring the fact that having something of substance at that height would also likely have been sensed by the radar systems...

        1. Bronek Kozicki Silver badge

          Re: Really??

          It was one of many dramatic mis-applications of existing code. Others include Ariane 5, or the bug that finished Knight Capital. The code itself worked according to the conditions for which it was written, but the software was applied in conditions for which it was not intended - for example steering a much larger rocket, actual live trading, or fully autonomous driving. There is not much blame you can put on the coder, and a lot of it on the organisation itself.

        2. Anonymous Coward
          Anonymous Coward

          Re: Really??

          60g is survivable.

          It's also an average across the 0.05 seconds over which the car decelerates. At the first moment of impact there's zero deceleration, zero G, and the airbags have yet to be triggered, fired and inflated. Realistically the G is going to spike to a much higher value. And in such a fast crash, if the car stops in four feet and 0.05 of a second, then by the time the airbag is fully inflated (say 45 milliseconds from the crash sensor being triggered to full inflation), the initial impact is almost over. If it stops in five feet, the car's gone under more than half the trailer width, and although the G force may be lower, the load bed of the trailer has probably come through the windshield and connected with the driver's head as they flop forward on the seatbelt.

          You get to the point where survival is a possible outcome

          I don't dispute that side under-run bars ought to be mandatory. A quick look supports my expectation that they offer protection up to 40 mph (Angelwing). A lighter car might be protected at higher speeds, but I'd be surprised if the kinetic energy of a two ton car could be stopped above 40 before the cabin is penetrated (look at the test pictures, and you'll see that at 40 a large car only just gets stopped before the A pillars get sliced). Now consider the two ton car in a perpendicular 74 mph impact - that's got 3.5x the kinetic energy of the same car at 40, so the impact is way beyond the design parameters of even a notably stronger than average under-run protector. The A pillars will never be strong enough to lift a trailer and buy more time. Look at the pics of the crash in question, and you can see that they left marks on the trailer, but clearly weren't able to lift it.
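          The 3.5x figure is easy to verify - kinetic energy goes with the square of speed, so the car's mass cancels out when comparing the same car at two speeds:

          ```python
          # Kinetic energy is (1/2) m v^2, so for the same car the mass cancels
          # and the 74 mph vs 40 mph comparison reduces to a ratio of squares:
          ratio = (74 / 40) ** 2
          print(f"{ratio:.2f}x")  # ~3.42x, i.e. roughly the 3.5x quoted above
          ```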

          1. John Robson Silver badge

            Re: Really??

            "60g is survivable.

            It's also an average across the 0.05 seconds over which the car decelerates. At the first moment of impact there's zero deceleration, zero G, and the airbags have yet to be triggered, fired and inflated. Realistically the G is going to spike to a much higher value. And in such a fast crash, if the car stops in four feet and 0.05 of a second, then by the time the airbag is fully inflated (say 45 milliseconds from the crash sensor being triggered to full inflation), the initial impact is almost over. If it stops in five feet, the car's gone under more than half the trailer width, and although the G force may be lower, the load bed of the trailer has probably come through the windshield and connected with the driver's head as they flop forward on the seatbelt.

            You get to the point where survival is a possible outcome"

            Yes - but an F1 car comes to a stop from 150+mph in well under 4 feet fairly often (thankfully they are generally good enough drivers that it isn't *that* often, but it happens)

            No airbags, although better restraints/HANS devices etc.

            I'm not saying it's gone from a 'certain kill' to a 'will absolutely walk away from', but the chances of survival are dramatically better with than without.

      2. petur

        Re: Really??

        An under-run bumper would have shown up as 'not an empty space' so it very much would have saved his life, it would have triggered emergency braking

        1. Jellied Eel Silver badge

          Re: Really??

          An under-run bumper would have shown up as 'not an empty space' so it very much would have saved his life, it would have triggered emergency braking,

          That depends on the code. The description in the incident report makes it sound like a system primarily designed to prevent rear-end collisions - a combination of camera & radar with a check against pre-defined vehicles. It doesn't say what it'd do if it detects an unknown posterior, or how much of the space ahead of the vehicle gets scanned. I'd hope it's the full profile of the car, but that presumably didn't happen in this accident. So 'operator error' caused by the driver's inattention, and possibly over-reliance on auto-pilot features that didn't exist.

          Under-run bumpers would probably help in other accidents, and perhaps the Tesla Truck will slap QR codes on its sides so airbags can be deployed in its cars.

      3. Muscleguy Silver badge

        Re: Really??

        Here in Scotland such junctions are not uncommon, with the added wrinkle of corners, blind summits, fog, blizzards, ice etc. What happens is there's a spate of bad accidents, lives lost. The media get on their hobby horses, campaign groups are formed, local government is lobbied to lobby central govt (Holyrood in the case of roads). Since we moved up here to Dundee at the end of '98, ALL the fast roads out - to Perth, to Forfar/Aberdeen, to Carnoustie/Montrose - have had grade separated junctions installed (on and off ramps, a bridge of some sort).

        It is now much safer to drive at 70mph on a dual carriageway A-road in Scotland. Though when it advises you to slow, it might be a good idea to do that. Oh and the biggest, longest stretch of single carriageway on the A9 (Perth to Inverness) now has a long stretch of dual carriageway. Part of the project to dual the entirety of it.

        I'm not sure there is any such national or even state program in the US to invest in road infrastructure. With the Tea Party they are instead focussed on paying ever less tax and wondering why their infrastructure is falling down.

    3. Jonathan Richards 1
      Stop

      Re: Really??

      > the human behind the wheel didn't brake ...

      It was alleged at the time that Mr Brown was engaged in watching a movie on a tablet. He may not have seen the trailer at all. If so, this was of course a fatal abuse of his vehicle, after which I believe Tesla stopped calling their software an "autopilot".

      1. Snowy Silver badge

        Re: Really??

        @Jonathan Richards 1

        As far as I can see they still call it "autopilot" when it should be considered more an advanced form of cruise control.

        1. Anonymous Coward
          Anonymous Coward

          Re: Really??

          Autopilot in an aircraft is just a more advanced version of cruise control (in 3 dimensions), sometimes with the ability to change course at preselected points.

          Even most of the advanced aircraft autopilots will not avoid a white truck flying through the air and stopping across your path.

        2. fearnothing

          Re: Really??

          As I suggested to my colleagues, 'Supercruise'

    4. My Alter Ego

      Re: Really??

      That was my thought when I read that quote. From what I know, the circumstances were that the side of the trailer was white and blended into the bright sky - something that can also very easily happen to humans. Anyone who's driven towards a low sun (especially during winter with wet roads) will know what it's like to be overpowered by the glare. The M40/A34 junction at Bicester was a prime example - during the winter there was almost a daily accident until they installed the traffic lights.

      This will sound awfully cold (and is no consolation to relatives), but autonomous driving will always be responsible for deaths. The question is whether it is safer than us meat bags, and according to Tesla (who are not exactly unbiased) it is.

      1. cream wobbly

        Re: Really??

        "but autonomous driving will always be responsible for deaths"

        Ahem. That case had nothing to do with autonomous driving. Rephrased, then: the driver who engaged cruise control and then took his eyes off the road to enjoy a movie was responsible for his own death.

    5. yoav_hollander

      Re: Really??

      Just to clarify, when I said the JB case was “arguably one such example”, I did not mean it in the sense of “people never drive into visible obstacles” – clearly they sometimes do. I meant it in the sense that autonomous vehicles bring with them new failure modes – in this case the failure mode of users trusting that the machine can do more than it was actually designed to do.

      That was in the context of "expected vs. unexpected bugs". Sorry if that was unclear.

  3. Anonymous Coward
    Anonymous Coward

    What did for Toyota...

    Their code was not shown to be defective or to fail. What they failed to do was follow a standard, adopt "best practice" or be able to provide adequate evidence to support any claim that they had done so.

    Courts accept that systems will fail. Being able to prove you have done your best to prevent accidents makes the difference between "only" paying compensation or punitive damages and penalties.

  4. Anonymous Coward
    Anonymous Coward

    Adaptive AI

    Should they have to go through "I still know what this is" tests once a year / whenever they boot?

    E.g. holding up picture of cat results in "that's a cat". Holding up picture of the side of a truck results in "I see a road"...

    1. Anonymous Coward
      Anonymous Coward

      Re: Adaptive AI

      NOT hotdog.

  5. Doctor Syntax Silver badge

    "People are not hiring from among the ranks of the airline safety industry."

    Of course not. They'd just fire them for being "unhelpful", "obstructive" or whatever other term comes to hand* when the techies point out the gap between company policy and reality.

    *"Sneering" is just the latest.

  6. Dr Stephen Jones

    What?

    “Neural networks train themselves, and this might appear to remove the possibility of human error.”

    They don’t and it doesn’t.

  7. This post has been deleted by a moderator

    1. SImon Hobson Silver badge

      Re: Modularisation sounds a damned good idea

      ... and this is apparently the real cause of the Fukushima disaster, where multiple heavily redundant and different method, fail-safe SCADA controlled systems, suspiciously failed ...

      Nothing suspicious about a system failing when a) doused with salt water which is inconveniently quite conductive, and b) deprived of power because the emergency generators have been submerged in salt water (which is as bad for the engines as it is for the electrics).

      Even a system not doused in salt water will fail when its local battery power expires, which won't be long if you are trying to run anything like pumps - those things that are quite important for moving coolant around in designs of that era.

      One of the design features of the Westinghouse AP1000 design is the passive cooling, which means you can flip the big OFF switch and walk away for a day or two while it cools down passively. After a day or two, the operator intervention required is to refill the emergency cooling water tank sat on top of the reactor. So had the Fukushima tsunami hit one of those, it's quite likely that the reactor would have been able to cool down without any containment breach - probably written off by internal damage, but nothing newsworthy to see.

      IMO the Fukushima incident shows the safety margins built into even 40+ year old designs. Considering that all the cooling and power systems were effectively destroyed by a tidal wave of electrically conductive water, they didn't fare too badly.

  8. Warm Braw Silver badge

    The solution is to modularise neural networks

    The solution is probably to bury a wire in the road - it seems a bizarre idea to want to replace human drivers with machines but leave in place the infrastructure that machines struggle to process.

    However, it is interesting that we seem prepared to accept a much greater degree of carnage provided it originates from people like ourselves. Speed limits, seat belts and alcohol-testing were all the subject of strong opposition despite burgeoning road fatalities. Yet if a professional driver causes an accident, there is an outcry. And as for a machine... Autonomous vehicles will have as much trouble negotiating the double standards as they will the road network.

    1. Chris G Silver badge

      Re: The solution is to modularise neural networks

      Presumably various of the NN modules will require some part of other modules' data at times; I can't wait to see what kind of conflicts arise from that.

  9. Anonymous Coward
    Anonymous Coward

    Another annoying trend...

    ...is the increasing tendency to mask actual hardware issues by issuing a software update.

  10. Nick Z

    Software testing is the key to knowing whether it works or not

    I'd say that automated testing of software functionality is the key to making sure that it works as intended.

    There is such a thing as test-driven development, where you write a test, before you even write any code to make the program pass this test. And of course, all of these tests stay in the program, so that every time you make a change in the program, then you run these tests again to make sure that you haven't broken anything that was working before.
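    The write-the-test-first loop described here can be sketched in a few lines (a minimal illustration using a hypothetical `stopping_distance` function and plain asserts in place of a real test framework):

    ```python
    # Test-driven development in miniature: the test is written first and
    # stays in the suite, so every later change is checked against it.
    # `stopping_distance` is a made-up example function, not anyone's real code.

    def test_stopping_distance():
        # Written before the implementation: this defines the expected behaviour.
        assert stopping_distance(speed_ms=0) == 0
        assert stopping_distance(speed_ms=10) > stopping_distance(speed_ms=5)

    def stopping_distance(speed_ms, decel_ms2=6.0):
        """Braking distance under constant deceleration: v^2 / (2a)."""
        return speed_ms ** 2 / (2 * decel_ms2)

    test_stopping_distance()   # re-run on every change as a regression check
    print("all tests pass")
    ```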

    This is the direction software development needs to go. Because you can artificially create very rare program states that seldom happen in real life. And you can run it in such a rare state repeatedly, until you iron out all the bugs. This way, rare states become as common as any other states for development purposes.

    Test-driven development is actually how automated neural networks create their programs. But there is no good reason why it needs to be completely automated and left to the machines. Human developers should write tests for neural networks to increase their testing above and beyond what they do on their own automatically.

    Neural networks require a new discipline in software development. Which is writing automated tests for such networks to make sure they perform as expected.

    1. Outer mongolian custard monster from outer space (honest)

      Re: Software testing is the key to knowing whether it works or not

      Thats interesting but in short, it'll just transfer the primary source of bugs from the coder to the person who devises the unit test harness? So not much of a long-term final answer to the issue at all, really.

      I see packages that are shipped when they pass a test harness each release, and every release new and interesting bugs are found that the test harness doesn't cover. Really, a test harness will only stop things you already know about and have fixed from popping back up into your code base.

      See the commentard earlier who mentioned that some of the systems failing and killing people were working as designed; it's just that the initial design wasn't sufficient in scope or definition to catch the oopsie that led to the accident.

      1. Nick Z

        Re: Thats interesting but in short...

        Testing is the basis of all science. That's what the scientific method is all about. Science itself is a type of test-driven development.

        And that's why I say that testing should be the basis of software development. Because otherwise you end up with a hodge-podge of some science mixed in with a lot of beliefs, superstitions, and ignorance.

        Computer programs are a reflection of people's minds. And it's important to remember that people have a long history of all kinds of superstitions, mistaken beliefs, and ignorance. The only thing that has helped people overcome that state of being is science and its method of thorough testing before accepting any assumption or belief.

        1. Bronek Kozicki Silver badge
          Joke

          Re: Thats interesting but in short...

          Testing is great, but don't you dare automate it, because that takes you towards TDD and agile, away from the sacred lands of waterfall.

        2. tom dial Silver badge

          Re: Thats interesting but in short...

          It is beyond reasonable doubt that testing, and designing/writing the tests before code delivery (and by different people), is a Good Thing.

          Still, the tests will be designed and implemented by people who, generally speaking, are at least as imperfect as the software designers and coders, and who will inevitably overlook things. That will lead to occasional misbehavior of the machinery the software controls, and if the machinery is a software controlled car or truck, to highway accidents.

          The quest for perfection is good, but we had best recognize that it probably is futile, and that the real question is whether these automatic vehicles will produce a lower accident rate than the human controlled ones they will replace. So far, it seems likely enough that they will.

          1. annodomini2

            Re: Thats interesting but in short...

            Whether it's testing or just development, all the behaviour of the system will be defined by requirements.

            If these requirements are incomplete or inadequate, you can test till the cows come home, but it won't find any bugs.

            The issue with Level 4/5 autonomy is that the basic functionality is fairly simple, but the number of edge and corner cases out in the real world is huge - millions upon millions. Only so many can be catered for, so there will be gaps in your requirements, and testing will not cover those scenarios.

            The use of Neural Networks is an attempt to cover for the unexpected, but these systems will have limitations and we won't know what they are until we use them.

    2. Doctor Syntax Silver badge

      Re: Software testing is the key to knowing whether it works or not

      "There is such a thing as test-driven development, where you write a test, before you even write any code to make the program pass this test. And of course, all of these tests stay in the program, so that every time you make a change in the program, then you run these tests again to make sure that you haven't broken anything that was working before."

      It's a solution to Brooks' definition problem: is the product defined by the document or by an actual example? It can be defined by the tests instead. Brooks' example, which posed the problem, was the first shipped model of the 360, which left undocumented data in registers after an operation; developers started writing code that used that data, so future models had to behave in the same way - which was never intended. If the tests define the product then what the tests say happens has to happen, but anything the tests don't cover has to be taken as undefined.

      Which brings us to the problem of test-driven development for critical stuff: how do you know the set of tests is complete and correct?

  11. SVV Silver badge

    Autonomous vehicle software

    When it crashes, so does your car.

    Personally, having worked in software development for years, I'd rather be in a car with a drunk driver than a self-driving one. I'd consider it safer, no matter what Elon "why do we have to keep reading about this guy's nonsense schemes" Musk says (or possibly because of it).

  12. Lysenko

    First person?

    Joshua Brown became the first confirmed person to die in a crash where the car was, at least in part, driving itself.

    I think not. Cars have been "in part, driving themselves" since cruise control and automatic transmission were invented, and there have been fatal accidents attributable to such systems. In this case the cruise control might have been able to disengage itself but failed to do so - that's a big step up from being incapable of disengaging itself, ignoring manual override, or simply autonomously accelerating.

  13. Chris G Silver badge

    Here's a thought

    We could use special training techniques on certain individuals; when they complete the course we could call them 'a driver'. Then include regular update training to allow for changing traffic conditions, and retest for ability and driving safety.

    You would not need any special programming or even need to input a destination, you could stop and look at the view on a whim, it would be a kind of automotive freedom.

    Alternatively you could buy a piece of not-ready-for-use marketing hype called an 'autonomous vehicle'.

    1. Throatwarbler Mangrove Silver badge
      FAIL

      Re: Here's a thought

      It all sounds good, but your cunning plan has a demonstrably high failure rate, and the so-called "driver" wetware is typically kept in service long past the point that it remains safe.

  14. /dev/null

    Can't see it ever happening...

    ...until you can trust a self-driving car not to say "you have control", when it decides it has no idea what is going on and you're 2 seconds away from colliding with something. And if you can't trust it not to do that, then you might as well drive the damn thing yourself.

    1. Anonymous Coward
      Anonymous Coward

      Re: Can't see it ever happening...

      Personally I'd prefer it hit the brakes 2 seconds away from the collision rather than spending those 2 seconds telling me I have control.

  15. Jonathan Richards 1
    Big Brother

    Who owns the camera feed?

    from TFA:

    > In theory, the more miles autonomous cars clock up, the more data they will have to learn by, and the safer they will be.

    I want to know if there will be a record of the autonomous driving sensor feeds, and what will happen to them. I think the answer to the first part is almost certain to be "yes", since otherwise there will be nothing to help with crash investigations.

    If the answer to the second part is "they're streamed or uploaded to Google | Tesla | Uber | Dept for Transport | ... " to assist with autonomous car development, then I'm much less happy.

    FWIW, I can't see myself ever driving (or giving control to) an autonomous vehicle, and I don't look forward to sharing the road with them.

  16. nagyeger
    Facepalm

    could set off on the right hand side of the road

    Been there, done that.

    It comes from just having spent ages driving on the wrong side, and thinking "Oh great, I'm home now, and can relax."

    Fortunately I was on my bicycle, so while I and the oncoming car were semi-shocked into a state of utter confusion about what on earth the other was doing on the wrong side of the road, it wasn't too hard for him to actually avoid me.

    1. Seajay#

      Re: could set off on the right hand side of the road

      I've done that too. Drove perfectly happily abroad then came home and pulled away from a t-junction on the wrong side of the road.

      It's odd that it's used as an example of an "autonomous vehicle only" bug when it's such a common thing for meatsacks to do.

  17. Anonymous Coward
    Anonymous Coward

    Having worked in automotive software for an independent company, we found it very difficult to win bids with OEMs that included safety critical software. We found on the whole that they would find another company willing to do the job for 1/10 of the price, but (as we found out on several occasions) one that skimped on software quality. The OEMs appear unwilling to pay for high quality code.

  18. Throatwarbler Mangrove Silver badge
    Holmes

    I have the solution

    Autonomous vehicle companies should recruit exclusively from the ranks of Register commentards, who, based on the contents of their commentary, never make programming blunders, are expert at all kinds of programming, and have a flawless knowledge of business execution as well. Problem solved!

    1. Doctor Syntax Silver badge

      Re: I have the solution

      " based on the contents of their commentary, never make programming blunders, are expert at all kinds of programming, and have a flawless knowledge of business execution as well."

      On the contrary, we're well acquainted with what can go wrong. That's why at least some of us hope not to ever find our lives entrusted to autonomous road vehicles.

    2. Stoneshop Silver badge
      Pirate

      Re: I have the solution

      and have a flawless knowledge of business execution as well.

      Yup. Guillotine, AK47, Browning machine gun if you want to take out the entire C*O bunch in one go.

  19. Mark 85 Silver badge

    Car manufacturers contacted by The Reg were unwilling to talk.

    A Reg request for clarification from Tesla went unanswered.

    I find these two bits from the story to be very troubling.

  20. DougS Silver badge

    There are techniques to greatly reduce the bugs in code

    They are used in many life critical industries. However, car companies like Tesla et al who make unrealistic promises about how soon autonomous vehicles will be available want to race to the finish so they can begin making money off them. I guess they figure all the extra profit from being early to market will pay for a lot of high powered lawyers to get them out of having to pay wrongful death judgments.

    While this can't completely eliminate bugs, at least we wouldn't have to live with several million bugs in an autonomous vehicle!

    As for neural networks, I think trusting one for a life critical system is a terrible idea. At least with traditional programming you know what the code does; you can do coverage tests and formally verify critical sections. With a neural network you don't really know what it is doing, so ensuring it will act appropriately in a given situation is difficult at best.

  21. Doctor Syntax Silver badge

    "They are used in many life critical industries"

    Even so, are they used in situations even an order of magnitude less complex than those an autonomous vehicle can encounter?

    1. DougS Silver badge

      Aircraft avionics are an order of magnitude less complex? Even fly by wire stuff that's aerodynamically unstable like most stealth aircraft?

      If you still maintain that autonomous driving software is an order of magnitude more complex, that's MORE reason not to use the typical shitty 'write quickly, release quickly, fix bugs in the field' programming strategy, not less.

      Seriously, with as often as Tesla is delivering software updates, how much testing and verification can it really be going through? That's not a problem today, if people are using "autopilot" as intended and not as an autonomous system. But they claim they're going to slowly upgrade at least some models to full autonomous operation in the future. Do you really want to see them sending out frequent software updates to that, which probably get less testing than Microsoft gives its Patch Tuesday releases?

      1. SImon Hobson Silver badge

        Aircraft avionics are an order of magnitude less complex? Even fly by wire stuff that's aerodynamically unstable like most stealth aircraft?

        Yeah, I'd go with that. Even your inherently unstable airframe is fairly predictable and can be modelled in advance. Even full flight management where the pilot can line up on the runway, press the button, and do nothing but keep an eye on things till the nosewheel is bumping along the centreline lights at the destination is relatively simple.

        This is all because in aviation there is inherent separation - as you cruise along the airway, there won't be an artic pulling out of the sideroad and across in front of you. Air traffic control, and as a backup, TCAS, should take care of that. The business of making a car drive along a computed path at a given speed is almost trivial - the complexity is in determining what that path and speed should be in the presence of random other road users (doing random things) and street furniture.

        Just the process of seeing and correctly identifying another road user (say, a cyclist) is probably more complex than the entire avionics suite on a typical aircraft.

        If you still maintain that autonomous driving software is an order of magnitude more complex, that's MORE reason not to use the typical shitty 'write quickly, release quickly, fix bugs in the field' programming strategy, not less.

        Now that I can agree with.

  22. bep

    Fly by pants

    Fly by wire aircraft aren't whizzing past dozens of other fly by wire aircraft less than half a metre away every minute or so. It's a very different situation and not very comparable in safety terms. The reason I'm far more willing to trust myself to other meat packets in control is based on experience. I'm sure if you drive all day most days you are far more likely to be the victim of a mistake by another driver, but if you only drive occasionally on mostly familiar routes your chances of having an accident are much reduced. The problem with this self-driving deal is that it is far more random and you can't really calculate your risk. I'd be terrified to drive on a road full of self driving cars at this point in time.

    1. DougS Silver badge

      Re: Fly by pants

      Well I'd hope you'd be terrified to drive on a road full of self driving cars today, considering they are years away from being ready for fully autonomous operation. The only question is whether years < 5 like Musk and other foolish optimists believe, or years > 10 which seems a much more reasonable and prudent bet.

    2. Anonymous Coward
      Anonymous Coward

      Re: Fly by pants

      "I'm sure if you drive all day most days you are far more likely to be the victim of a mistake by another driver"

      "Professional" drivers are some of the most dangerous drivers you will encounter on the roads. If you drive all day most days you get overconfident, it becomes normal, you stop paying attention as much as you should, leading to you causing the accident. Most "accidents" happen because whoever is behind the wheel does not give it the attention it deserves.

  23. John Smith 19 Gold badge
    Unhappy

    So the testing problem is to think like a computer, thinking like a human driver

    and work out where it's making conceptual errors.

    Hmm..

    So I guess they'll need to pair up a couple of developers, one with Aspergers, one without for both views of the code.

  24. Nick Z

    Testing is a lot easier than creating the original program

    It took Einstein to come up with the Theory of Relativity. But plenty of ordinary physicists have devised tests for this theory and have tested it thoroughly.

    The same is true for creating a computer program. Machine learning can create a very complicated program. But you, as a human, can create all kinds of tests for it to determine how it will perform in various circumstances and perhaps add to it some human-written code to correct its flaws.

    There is no reason why computer programs have to be either completely done by machine learning or written by humans. The best result is when you have a combination of both.
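    To make the point concrete, here is a minimal sketch of that kind of human-written scenario test, in C. The `plan_brake` function below is a hypothetical placeholder standing in for a learned component; the assertions in `main` are the human-written part, and they would apply unchanged however the decision logic was produced.

    ```c
    #include <assert.h>
    #include <stdbool.h>

    /* Hypothetical stand-in for a learned planner: given the distance to an
       obstacle (metres) and current speed (m/s), decide whether to brake.
       In a real system this decision might come out of a trained model;
       the tests below don't care how it was produced. */
    static bool plan_brake(double obstacle_distance_m, double speed_ms) {
        /* Placeholder rule: brake if a crude stopping-distance estimate
           (assuming ~7 m/s^2 of deceleration) exceeds the gap. */
        double stopping_distance_m = (speed_ms * speed_ms) / (2.0 * 7.0);
        return stopping_distance_m >= obstacle_distance_m;
    }

    int main(void) {
        /* Human-written scenario tests: whatever produced plan_brake,
           it must satisfy these properties. */
        assert(plan_brake(5.0, 30.0));    /* close obstacle at speed: must brake */
        assert(!plan_brake(500.0, 10.0)); /* distant obstacle, low speed: no brake */
        assert(plan_brake(0.0, 1.0));     /* obstacle at zero distance: always brake */
        return 0;
    }
    ```

    The interesting part is that the test suite constrains behaviour without needing to understand the internals, which is exactly the division of labour the comment describes.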

  25. Liam Proven

    If software is vulnerable to errors such as buffer overflows, then it's written in a C derivative. That might be acceptable for OS kernels, but not for safety-critical code.

    Here's a thought. Use a language designed for safety-critical situations. https://www.gnu.org/software/gnat/
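    For illustration, here is a minimal sketch of the bug class in question: an unchecked copy into a fixed-size buffer, alongside a bounds-checked alternative. Plain standard C, nothing vendor-specific.

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Unchecked copy: the classic C buffer overflow. If src is longer
       than the destination buffer, strcpy writes past the end of it,
       corrupting whatever lives in the adjacent memory. */
    static void copy_unsafe(char *dst, const char *src) {
        strcpy(dst, src);  /* no bounds check at all */
    }

    /* Bounds-checked copy: truncates rather than overflowing. */
    static void copy_safe(char *dst, size_t dst_size, const char *src) {
        snprintf(dst, dst_size, "%s", src);  /* never writes past dst_size */
    }

    int main(void) {
        char buf[8];
        copy_unsafe(buf, "ok");  /* only safe because "ok" happens to fit */
        copy_safe(buf, sizeof buf, "this string is far too long");
        printf("%s\n", buf);  /* truncated to 7 characters plus the NUL */
        return 0;
    }
    ```

    Languages like Ada make the unchecked version a compile-time or run-time error by default; in C the programmer has to remember to use the checked idiom every single time.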

    1. John Smith 19 Gold badge
      Unhappy

      "If software is vulnerable to errors such as buffer overflows,..it's written in a language"

      FTFY.

      Yes, SPARK (a verifiable Ada subset) and GNAT (the GNU Ada toolchain) are more secure, but IRL you can write C in any language.

      It's a question of how much effort you have to make to circumvent the built-in checks.

      But note.

      Sooner or later your HLL will have to talk to the H/W, and what's inside those libraries will probably be written in assembler (unless your process is essentially a compiler back end in hardware, like a Java machine) which won't be subject to the same level of security.

      The automotive industry does have C coding standards (MISRA) that are compiler-neutral. There is a set of requirements, 24 of them IIRC (and the Toyota auto throttle mishap happened because they weren't followed).

      It comes down to whether a manufacturer is willing to follow them, puts mechanisms in place to ensure they are followed, and understands what happens if they are not.

  26. Jay 11

    The comment in the article concerning cars starting to drive on the right interests me.

    Years back I emailed Google asking how their cars' software would handle motorcycles filtering in the UK, as opposed to the US where filtering or lane splitting is illegal in many states. They didn't answer my question, asking only whether I was a journalist.

    I suspect there will be a lot of problems caused by coders coding for their local environment and legislation: code that works in one locality will misbehave when deployed in another, and when someone recodes it for the new locality, that could clash with something else.

    1. Seajay#

      Worst case they'll handle it in the same way that an American driving in the UK would do.

      I.e. "Woah! I wasn't expecting that biker to do that, but I'll avoid driving into her anyway." With the added bonus that, unlike humans, who only pay full attention when they expect something potentially dangerous to happen, the autonomous car will be paying full attention all the time.

      More likely they'll do a whole load of engineer-sitting-in-the-car-with-his-foot-hovering-over-the-brake miles in each new environment before selling the car there.

  27. DrM
    FAIL

    ASIC

    .. developers from Silicon Valley whose backgrounds are in general purpose software – software that, of course, crashes with reasonable frequency.

    Am I the only person who thinks SW sliding into pure unreliable crap is a global problem in all parts of our lives? Or should I just get used to ASIC: All Software Is Crap?

    Pathetic.

  28. Seajay#

    WONTFIX

    Humans have this bug where they prioritise high-likelihood, low-reward immediate gratification tasks (like checking their text messages) over medium-likelihood, very-high-reward tasks (like paying attention to the damn road and not dying).

    Has anyone got contact details for the devs?

  29. unwarranted triumphalism

    Careful now

    Criticism of The Holy One is not allowed on this site.

  30. Andromeda451

    Airbus and autonomous cars

    So Airbus has a highly automated cockpit and they're finding that pilots get "lazy" and their skills at flying the plane degrade. Air France flight 447 crashed into the sea because the pilots failed to "fly" the aircraft. The Tesla system will cause driver complacency (as we have seen) and drivers, who are almost never trained to the levels required of pilots, will continue to crash. The gentleman killed was expected to be a "monitor" of the system. Unfortunately, we have seen how that really doesn't work in the real world. Now add to the mix auto manufacturers racing to get product out the door with a "good enough" mentality and we will see continued deaths and mayhem on our highways. When a team doesn't do the basics (like good product requirements) the results will be less than stellar.

    1. SImon Hobson Silver badge

      Re: Airbus and autonomous cars

      Air France flight 447 crashed into the sea because the pilots failed to "fly" the aircraft.

      Not exactly.

      They DID fly the aircraft, but due to a variety of factors flew it incorrectly. Somewhat oversimplifying ...

      One pilot was a bit confused about the situation (IIRC the pitot tubes were iced over and they lost airspeed information) and held his stick back to "pull the nose up" and arrest what he thought was a dive. The other pilot identified that they were in fact in a stall and pushed his stick forward. Because the sticks aren't linked, he was unaware that his colleague still had the stick fully back - and because of this, he was unable to lower the nose and recover from the stall after which they could have levelled out and continued flying.

      From memory, the design of the side control sticks (think in terms of having a control stick where the electric window switches are in most cars) and the fact that there is no cross linking (there's no mechanical feedback in either stick for what the other pilot is doing) came in for particular note in the accident report. In more traditional control designs, the primary controls are mechanically linked which means that both pilots have direct feedback of what the other pilot is doing.

      Where they failed to fly the aircraft is in that loss of airspeed information shouldn't be a big deal - just set the engine power and attitude found from the charts in the manual and it'll fly level, that is part of basic flight training. What happened here was that the complexity of the systems isolated them from the basics, and that together with lack of practice at hand flying meant that they didn't grasp what was happening ... soon enough to fix it.

  31. Rebel Science

    There are two problems: software unreliability and the brittleness of deep neural nets

    Software unreliability is proportional to complexity and is a direct result of our current computing paradigm which is based on the algorithm. The solution is to stop using the algorithm as the basis of programming and adopt a signal-based, reactive programming model. Essentially, software should work more like electronic circuits.

    The second problem is that, in spite of the loud denials from the AI community, their biggest success, deep learning, is just GOFAI redux. A deep neural network is actually a rule-based expert system. AI programmers just found a way (gradient descent, fast computers and lots of labeled or pre-categorized data) to create the rules automatically. The rules are in the form, if A then B, where A is a pattern and B a label or symbol representing a category.

    The problem with expert systems is that they are brittle. Presented with a situation for which there is no rule, they fail catastrophically. Adversarial patterns prove this in neural nets and Tesla Motors found out about it the hard way. The car's neural network failed to recognize a situation and caused a fatal accident. This is not to say that deep neural nets are bad per se. They are excellent in controlled environments, such as the factory floor, where all possible conditions are known in advance and humans are kept at a safe distance. But letting them loose in the real world is asking for trouble. Obviously, we will need a better solution.
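    Whatever one makes of the "expert system redux" claim, the brittleness point itself is easy to sketch: a rule table in the form "if A then B" has nothing sensible to say about an input no rule covers. The situation strings and labels below are purely illustrative.

    ```c
    #include <stdio.h>
    #include <string.h>

    /* A toy "if pattern A then label B" rule table. */
    struct rule { const char *pattern; const char *label; };

    static const struct rule rules[] = {
        { "dark shape ahead, low", "vehicle" },
        { "bright sky, no shape",  "clear"   },
    };

    /* Look up a label; returns NULL when no rule matches. */
    static const char *classify(const char *situation) {
        for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
            if (strcmp(situation, rules[i].pattern) == 0)
                return rules[i].label;
        return NULL;  /* uncovered situation: the system has no answer */
    }

    int main(void) {
        /* A white trailer against a bright sky matches neither pattern.
           Without an explicit "don't know" path, the caller fails hard. */
        const char *label = classify("white shape ahead, bright sky");
        printf("%s\n", label ? label : "NO RULE: unhandled situation");
        return 0;
    }
    ```

    The fix is not more rules (there is always another uncovered case) but an architecture that detects "no rule applies" and degrades safely, which is precisely what a bare classifier does not give you.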

    Here are a few relevant links for those who care:

    Why Software Is Bad and What We can Do to Fix It

    The World Is its Own Model or Why Hubert Dreyfus Is Still Right About AI

    In Spite of the Successes, Mainstream AI is Still Stuck in a Rut

    Why Deep Learning Is A Hindrance to Progress Toward True Intelligence

