Even in the extremely unlikely event that fully autonomous vehicles ever become viable, I'll still be driving myself, thanks.
On September 8th, 2015, a pilot left Point Cook Airfield in the Australian State of Victoria for a solo navigational training flight. She didn’t make it back: the plane “impacted rising terrain” about two-thirds of the way into the journey, and the Australian Transport Safety Bureau report on the accident, published today, …
Alan - you might not be allowed to. I've been to a couple of lectures recently on autonomous vehicles and the point came up time and again, in both technical and psychological contexts, that the benefits of autonomous vehicles are best realized when all the vehicles on the road are autonomous and cooperating. There were discussions/speculation about road space for non-A vehicles being reduced or restricted.
I'm firmly in the "... drag the keys out of my cold, dead, driving-gloves-with-little-holes-clad hands" camp, but when the only route from A to B for old gits like me is on B-roads with 20mph limits, then there might not be much of an alternative.
If we work on a couple of (big) assumptions, your self-drive car may be very expensive, and for most people could become a luxury. Assume a basic car costs about the same as today - most of them will almost certainly be electric, with a realistic range of at least 200 miles against an average journey of under 20 miles, and they will be much more reliable and cheaper to run, except for the battery, which will be replaceable. The service life of the vehicle may be much longer. Most vehicles today are used for at most 10% of the time and spend the rest parked somewhere (at home or work), so perhaps five people could share a vehicle that comes to them and never needs parking; the economics of car ownership change dramatically. Insurance is much cheaper, fuel costs are lower, you don't need to park, and you don't need a garage at home. Cities will need fewer roads and almost no parking areas. If we also assume that more work will be done remotely, the need to travel to and from work will also be reduced. Would you pay more than £5,000 a year for a car when sharing an autonomous vehicle could cost less than £1,000?
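For what it's worth, the back-of-envelope arithmetic behind that claim looks like this. Every figure here is an illustrative assumption from the comment above, not real market data:

```python
# Back-of-envelope sketch of the car-sharing claim. All numbers are the
# commenter's illustrative assumptions, not real costs.

utilisation = 0.10             # each household needs the car at most ~10% of the time
households_per_vehicle = 5     # assumed sharers per vehicle
annual_cost_vehicle = 5_000    # £/yr to run the vehicle (assumed comparable to a private car)

# Sanity check: total demand fits within one vehicle's available time,
# with headroom left over for overlapping journeys.
assert households_per_vehicle * utilisation <= 1.0

cost_per_household = annual_cost_vehicle / households_per_vehicle
print(f"£{cost_per_household:,.0f} per household per year")
```

Which is where the "£5,000 owned vs £1,000 shared" comparison comes from - the whole argument rests on how many households one vehicle can realistically serve at peak times.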
Unfortunately some of us need to visit several sites a day with a boot full of spares and tools to fix 'stuff'.
I'm all for having it drive me around but doubt it could find half of the sites that I visit let alone navigate what could be a building site / underground car park / security checkpoint... I don't fancy getting shot for not stopping in time or not turning lights off etc
At least it could drive me home from the pub :D or will that not be allowed ?
"Would you pay more than £5,000 a year for a car when sharing an autonomous vehicle could cost less than £1,000?"
Would you spend more than $CURRENCY5,000/yr on a private television/computer room when a shared television/computer room could cost less than $CURRENCY1,000/yr?
Would you share a bath/toilet with several other families to save a couple grand per year? How about a kitchen? Lots of economy of scale sharing kitchens! Have you SEEN the price of a good steam-injected bread oven lately?
My £5,000 to £1,000 was a poor illustrative example. If the autonomous car does happen, it will probably be made in China by someone you have not heard of, and the cost of a "normal" vehicle is more likely to be >£10,000 p.a.
I remember the start of a previous major disruption, the mobile phone - Initially only very few people had them; I was working in technology and bought my first one ~25 years ago, and now almost everybody has one. They are often rented on a plan at perhaps £300-£1,000 a year and, whether we like it or not, have radically changed society. The autonomous car (if it happens!) will cause a bigger change.
Your TV room example is poor, initially they were in a shared room (with your family) and many people did rent TVs; now they are so cheap that most of us have more than one, and the young might use their smart phone anyway. The cost of a private bathroom/toilet (which for most of us is shared within the household) is much less. The shared kitchen is becoming a reality for the urban young because they are starting to use their mobile phones to order meals from "dark kitchens" and many do not cook for themselves (I don't count a microwavable meal as cooking) - Another, perhaps, unforeseen product of the disruption caused by mobile phones. I know several young urban dwellers who don't have a car, and use Uber, again another disruption caused by the phone...
I did not say I liked the idea of the autonomous car, but if we survive the next 20 years (I won't be around then), it is inevitable - Moore's Law generally applies to almost all technology.
"I'll still be driving myself thanks."
There will come a day when (say) 50% of the cars on the road are self-driving, and 50% are driven by drunk, tired, angry or otherwise sometimes-inattentive humans. And that year, it will be pointed out that although 50% of the cars on the road are human driven, those 50% are responsible for 99.9% of the deaths. And at that point the argument for making driving your own car on the public road illegal will become unanswerable. You'll still be able to do a track day or drive round the farm or the grounds of your stately home, but on public roads driving your own car will rapidly become viewed as violently antisocial insanity. I've been saying for a few years now - by the time my (so far unborn) kids are old enough to learn to drive, they won't need to bother, and by the time THEIR kids are old enough, it'll be against the law.
We shall see. First, the self-driving cars have to be reliable enough to get approved. Given the story is that Uber's cars could only manage an average of 13 miles of autonomous driving before requiring human intervention, that's got a while to go.
OK, Google seem to do much better. But then their car is using lots of very expensive lidar and radar sensors. So the next hurdle to achieve is affordability. They can get away with being more expensive, and leased, seeing as they can wander off and work for other people while we're not using them. Some sort of cross between taxi and car share seems viable. But that's probably still a way off.
I'm not sure I buy the self-driving car hype quite yet. Like a lot of current news about AI and Big Data - there's a lot of truth, a bit of theory and quite a lot of wishful thinking and marketing bullshit. Computers won't replace all the lawyers, accountants and office workers in ten years' time and self-driving cars won't have taken over by then either.
Some sort of cross between taxi and car share seems viable
Which sounds really, really unattractive, based on my experience of hire cars, hire car companies, taxis, and second hand cars.
Not to mention the fact that I want to treat the interior of my car as personal living space, so other users may have similar misgivings.
I suspect you're right. Which makes autonomous cars even further away. They'll start off very expensive, and possibly hard to insure. But if you own a fleet of thousands, then you can self-insure.
But this will give an inferior service to ownership, at least in some ways, and so will have to be cheaper. Which means utilisation will have to be high, in order to cover costs. Or it'll have to be a loss leader to attract customers, and hope to make profits later, once volume brings the price down.
That's easy. They'll charge more for peak travel. The school run, and run to the office. Then you'll be able to hire them cheaper during the day - which may well mean that current 2 car families can drop down to a single car and a monthly hire fee or something. Also maintenance can be done during working hours - leaving more of your fleet available for peak travel.
Some people will happily pay more for their own car.
As for the dirty issue, why not a robot vacuum cleaner for a robot car?
The car companies are looking at this as a way to keep making money if autonomous cars do come off, and (another big if) if that then leads to more people car-sharing. Because if both happen, they'll sell many fewer cars, and lose some economies of scale.
But it's both a big social and a big technological change. And those often take longer.
"But this will give an inferior service to ownership, at least in some ways, and so will have to be cheaper. Which means utilisation will have to be high, in order to cover costs"
Which is trivial when the cost of a 20 km taxi ride is on the order of $40, and a 4 km transit ride is a bit over $3.
The first, and in many ways the best fitting, use case will be replacing taxicabs, followed by replacing buses and streetcars. After that will come supplementing inter-city transit for low volume or off hour service.
All of these involve high use rates (good for amortizing fixed expenses) and benefit from removing the major operating cost (the driver) while avoiding limitations such as driver hour regulations.
And a door-to-door replacement for transit will benefit lots of people who may not be able to use regular transit, or to afford the high fares that support state-imposed taxi monopolies.
Very much +1. I can barely remember the time before I could drive - it's nearly 50 years since I passed my test - but my wife learnt much later in life, and she tells me that one of the things that she felt very strongly on passing her test was that if everything went pear-shaped, she could live in her car if necessary. Cars aren't just transport, they tend to become very much part of our personal space.
"Computers won't replace all the lawyers, accountants and office workers in ten years' time and self-driving cars won't have taken over by then either."
A really *exciting* courtroom scene impending is one self-driving car co suing another self-driving car co for damages related to the "I had more active, on-board sensors" defense.
"Some sort of cross between taxi and car share seems viable."
What do you want out of a car? A reasonably clean vehicle available when you want it? With your own car, the degree of cleanliness is whatever you decide is worth the effort, and availability is assured by not competing with someone else for the vehicle. Can you guarantee either with the taxi/car share model, especially if you want the car to go to work in at the same time as most other people?
I'm sure that manually driven vehicles will eventually be legislated off the road too. I'm just waiting for the right person to realise this and add self driving capabilities to a vintage steam roller or traction engine...
Legislating in favour of limited self driving cars and against other road users means that roads become, in effect, railways. No bikes, no motorbikes, no horses, no pedestrians, no police cars, no ambulances, no fire engines, no delivery vans, etc. And they'd likely require fences to keep wildlife off the roads.
Not very viable, politically speaking.
Of the few people I know working on self-driving car tech, not one of them has the first idea of how a Level 5 car could be done. Even our unmanned trains rely on a manned control centre - not a scalable solution.
roads become, in effect, railways.
Already heading that way.
As soon as the policy makers realise how bloody hard fully autonomous vehicles are, they'll revert to a combination of the technology above, tied in with a version of the technology used for guided busways. Find me a transport bureaucrat, and I'll find you somebody who pleasures themselves over stuff like this.
"Not very viable, politically speaking."
To say nothing about the fact that there are, by some estimates, some 50,000,000 vehicles on US roads that are over 40 years old. These aren't old junkers, these are carefully maintained family heirlooms. They are driven daily, both for utility and for fun. Outlawing all these vehicles would alienate a LOT of voters.
Manually driven over-the-road vehicles will be with us for at least another century, and very probably much longer. I suspect that any politician who tries to change this will be tarred & feathered and run out of town on a rail.
"add self driving capabilities to a vintage steam roller or traction engine..."
I thought about adding radio controls to my 1915 Case (throttle/brake, forward/reverse and steering). But then I realized I'd have to stay at her controls anyway, in order to operate valves, monitor the fire, and all kinds of other little bits & bobs that go along with driving such a contraption. To say nothing of the fact that it would add some seriously ugly parts to a perfectly beautiful machine ... Needless to say, I shelved the idea before turning a single nut.
"And that year, it will be pointed out that although 50% of the cars on the road are human driven, those 50% are responsible for 99.9% of the deaths."
That makes an assumption as to the relative driving abilities of self-driving vehicles vs tired and drunk humans. That remains to be established.
I've been saying for a few years now - by the time my (so far unborn) kids are old enough to learn to drive, they won't need to bother, and by the time THEIR kids are old enough, it'll be against the law.
More likely the entire infrastructure will have collapsed, and the "self-driving" vehicle will be a horse-and-cart. Mainly brought about by crushing regulation, rampant litigation, API and patent trolls making ANY level of development and interoperation outright impossible, etc. And everyone will be too busy looking at cat videos to notice it happening, right up until the day the web goes black.
Only red ones? Mine are BRG.
However, it raises an interesting point(s) ... When the Sun throws us a CME, how many of these fancy computer-controlled vehicles will still run? How many will be scrapped by the insurance company? How long will it take the vast majority of folks world-wide to regain their "normal" transportation?
Somehow, I suspect my '65 Sunbeam Tiger and '69 F-250 will be on the road far sooner than my neighbor's '17 Tesla and '18 Cadillac Excursion. And the farm trucks & tractors (all diesel, with mechanical fuel pumps) will probably not skip a beat.
 When, not if. Are you ready?
"Somehow, I suspect my '65 Sunbeam Tiger and '69 F-250 will be on the road far sooner than my neighbor's '17 Tesla and '18 Cadillac Excursion. And the farm trucks & tractors (all diesel, with mechanical fuel pumps) will probably not skip a beat."
Until the fuel in the tanks is consumed. After that, they're as useless as the Tesla and the Caddy. Your car is a system, most of which you don't own or control.
"Are you ready?"
Pumped. Bicycle tires, I mean. All set.
but on public roads driving your own car will rapidly become viewed as violently antisocial insanity. I've been saying for a few years now - by the time my (so far unborn) kids are old enough to learn to drive, they won't need to bother, and by the time THEIR kids are old enough, it'll be against the law.
I highly doubt that. For starters, there will still be cars on the road 40 years from now that were built before self-driving cars were a thing. Not many, mind you, but there will no doubt be some, just as there are still people driving around in cars made in the 1960s today. Around here it's not even unusual to see a mid-60s model muscle car in the parking lot at the local grocery store. In fact, given the type of person who drives them, it's likely that a lot of those cars will still be running up until their current owners are too old to drive. Even in the unlikely event that autonomous drive becomes mandatory, there will be holdouts in older vehicles. Just like seat belt laws, autonomous drive laws will not apply to cars that don't have autonomous drive.
Second, we're at least a couple generations from people really being completely comfortable with autonomous cars. More than a couple if the robot apocalypse genre continues to be popular in the future. Too many people actually think Terminator is a realistic scenario.
Third, software is glitchy. Every time some car manufacturer issues a bad update or a car gets hacked - and make no mistake, both will happen on occasion if autonomous cars are widespread - it will be a reminder that computers are not 100% trustworthy.
Yeah, autonomous cars will probably - quickly - get to the point where they're safer than a human driver. But that won't matter. Just as flying is much safer than driving and people still get nervous flying, autonomous cars are going to make the average person nervous for a good long time.
In my admittedly optimistic view there is room for a happier alternative - wherein the increasing disparity between human frailty and robotic reliability leads to higher certification standards for human drivers. Still, I might not be able to afford the insurance as a human driver.
... and crashes because the autopilot didn't disengage.
AFAIK in big planes if you apply enough force to the controls the autopilot disengages automatically (though there can be exceptions, e.g. the throttles), but it doesn't happen in small ones with fewer or no fly-by-wire systems and sensors.
Both led to crashes - it's a matter of situational awareness - if you're distracted/overloaded and lose it, it becomes dangerous - or fatal.
It's not an easy decision, anyway, who should override who and when. Again, there were situations when pilots taking control would have been the right decision, and others when leaving the autopilot control the plane would have been the right one.
Both can have the wrong inputs and take the wrong decision. Anyway, pilots still have (or should have...) the proper training to take control - with autonomous cars, will the user still be required to have driving skills?
That confirms what I was about to ask then. Which was that I thought pushing the yoke would disable the autopilot and give the pilot control. After all, there might be times when you see another plane late, and want to be moving the stick quickly, without having to reach for the off switch first. I didn't realise small planes operated differently.
This is a bit like the Air France flight 447 crash. Where the aircraft was "averaging" the inputs of the two pilots - whose cockpit discipline had broken down and were both trying to fly the plane at once. This is a situation that can't be allowed - and the automation shouldn't allow. Only one person (computer) can be flying at once - and even if they're doing it wrong, it's still unlikely to help if there are two simultaneous sets of inputs. Then nobody knows what's happening. And because of that, it becomes much harder (to impossible) to correct that initial error.
@ I ain't Spartacus
"This is a bit like the Air France flight 447 crash. Where the aircraft was "averaging" the inputs of the two pilots - whose cockpit discipline had broken down and were both trying to fly the plane at once"
This is not really true. One of the pilots caused the crash with a consistently incorrect control input for over 3 minutes. There was a period when the other pilot had a good control input, but the problem was that one pilot held the controls in a completely inappropriate position for minutes, despite (not continuously present) appropriate warning messages from the aircraft and despite his training. It can't really be blamed on the averaging.
There is an argument that it was due to automation, but a much more subtle one. Normally the pilot's error would have been handled by the aircraft, as it prevents a stall. However, there had been a fault in the sensors which meant this level of protection was disengaged, though the aircraft was still perfectly flyable. Some have speculated that the pilot concerned had become so used to the protection provided that under stress he defaulted to behaviour which was only safe if the protection was in place. I suspect something like this may happen with automated cars: humans are put in place as fallbacks for when the car goes wrong, and are therefore blamed for accidents caused when the automated systems fail unexpectedly, putting the human driver in a dangerous situation with little or no warning and low situational awareness.
I disagree. Obviously the biggest cause of the crash was that pilot losing situational awareness and stalling the plane.
Training and discipline also broke down - given that both pilots had hands on the controls. Not helped by that model having side-sticks, so it's much harder to notice what the other pilot is doing.
But the controls of the plane are also badly designed. Because averaging the inputs is completely fucking pointless. The plane can't know which of those two inputs is correct, so what it should be doing is complaining about it, locking the controls and doing neither - or just doing one - and disabling the other stick. Or you have connected yokes, so it's obvious. Silently averaging them means that nobody now knows what's happening - and if one pilot is correct you've turned a 50/50 chance of him getting control and saving the day into a 100% chance of failure.
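To make the point concrete: this is not Airbus's actual control law, just a toy sketch of the difference between silently averaging two sticks and an exclusive-with-warning scheme. All names and numbers are invented:

```python
# Toy sketch: stick positions run from -1.0 (full nose-down) to +1.0 (full nose-up).
# Nothing here reflects any real aircraft's control law.

def averaged(left: float, right: float) -> float:
    """Silently average both sidesticks - the behaviour being criticised above."""
    return (left + right) / 2.0

def exclusive_with_warning(left: float, right: float, priority: str = "left") -> float:
    """Alternative: detect conflicting inputs, warn loudly, honour one stick only."""
    if left * right < 0:  # opposite signs: the two pilots are fighting each other
        print("DUAL INPUT warning")
    return left if priority == "left" else right

# One pilot pulls full nose-up while the other pushes full nose-down:
print(averaged(+1.0, -1.0))                 # the aircraft silently does nothing
print(exclusive_with_warning(+1.0, -1.0))   # warns, then applies one pilot's input
```

With averaging, two opposite full-deflection inputs cancel to zero and neither pilot is told; with the exclusive scheme, one pilot's input (possibly the wrong one) goes through, but at least both crew know there's a conflict to resolve.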
Obviously it's also a problem that we've trained pilots for fly-by-wire that won't let them cause stalls - and not trained them enough on the failure modes of fly-by-wire where that's no longer the case.
So it also seems to me that sidesticks are possibly a bad idea - and you want a physical yoke - because that way you can physically see and feel what the aircraft controls are doing - and that means the non-flying pilot has a better chance to work the problem. Interfaces need to be as simple as possible, as yet another warning alarm will get ignored under the constantly running stall alarm, which I seem to recall alternated with an overspeed alarm.
"It can't really be blamed on the averaging."
No, it can and should be blamed on averaging. Averaging two similar inputs is appropriate. Averaging two very different or even opposite inputs is clearly a mistake. This was an error in the system.
Because the system was designed and approved by people who were later asked to decide who was to blame, they decided to blame the pilots. But they shouldn't have been allowed in the judge's seat; they should have been in the dock.
It seems we humans are simply not capable of building complex and robust systems.
Yet non-techies, like politicians, managers and other fools, are very impressed by complex systems, probably because they are very expensive.
"This is a bit like the Air France flight 447 crash. Where the aircraft was "averaging" the inputs of the two pilots - whose cockpit discipline had broken down and were both trying to fly the plane at once. This is a situation that can't be allowed - and the automation shouldn't allow."
This really is a no-brainer. One of the pilots is called the captain, the other is called the co-pilot.
The software must be designed by morons, there is no other logical explanation.
Find them, throw them out of a flying helicopter.
You will see the quality of the software increase dramatically.
"One of the pilots is called the captain, the other is called the co-pilot. The software must be designed by morons, there is no other logical explanation."
In air accident reports they are called 'the pilot flying' and the 'pilot not flying' or similar terms.
Cockpit management assigns those roles for various reasons, and often the 'captain' is not flying the plane.
In an emergency there are good arguments that the 'not captain' should be flying the plane (physical skills) while the 'captain' should be figuring out what the f*ck is going on and the best way to recover from that (knowledge, mental skills, experience). More planes are lost to bad problem analysis and inappropriate recovery procedures than to an inability to move a control column.
Who flies the plane should be decided by the commander of the aircraft, not software.
"Which was that I thought pushing the yoke would disable the autopilot and give the pilot control. After all, there might be times when you see another plane late, and want to be moving the stick quickly, without having to reach for the off switch first."
I believe this is the approximate case, with later systems being more nuanced.
I seem to recall reading at one time that Boeing disengaged the autopilot completely, while Airbus dropped into an 'alternate law' mode. Thus the Boeing pilot had total control and was responsible if control inputs broke the airplane or caused loss of control, requiring the pilot to deliberately moderate control inputs, while the Airbus pilot could demand maximum maneuvering, leaving the computer to ensure that the aircraft would not fail structurally, or go into an uncontrollable state. Sort of like old mechanical power brakes versus ABS brakes.
In modern fighter jets the computer is never wholly out of the loop, as those planes cannot be flown without computer assistance, but I have no idea what the policies / law structure might be.
"…pilots still have (or should have...) the proper training to take control…"
This almost sums things up. I'd add: it seems you can fly a plane only if you have a complete understanding of how the control surfaces make it possible to fly, but you can use the automation within it without understanding how it works.
"Anyway, pilots still have (or should have...) the proper training to take control - with autonomous cars, will the user still be required to have driving skills?"
One might expect that a vehicle will work for any authorized person (paying for a trip, owning the vehicle), but non-autonomous mode will require a licence to drive and a key, token, or code to enable manual mode, or verification of licence possession... or perhaps just a button and automobile analytical code to call the car rental company if driving operation characteristics indicate incompetence or inability.
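That gating could be as simple as the following sketch - purely hypothetical, with invented names, since no such system exists yet:

```python
# Hypothetical sketch of licence-gated manual mode in a shared autonomous car.
# None of these names correspond to any real vehicle API.

def manual_mode_allowed(authorised: bool, holds_licence: bool, token_ok: bool) -> bool:
    """Anyone authorised may ride autonomously; manual driving additionally
    requires a driving licence plus a key, token, or code."""
    return authorised and holds_licence and token_ok

# A licensed owner with their token can take the wheel:
assert manual_mode_allowed(True, True, True)
# A paying passenger without a licence rides autonomously only:
assert not manual_mode_allowed(True, False, True)
```

The harder part, as the comment hints, is the analytics deciding when "driving operation characteristics indicate incompetence" - that's a judgement call, not a boolean.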
...have automatic disengage when the pilot moves the control column more than a certain amount; and also an ICO (instinctive cut-off) switch on the front of the control column to enable fast and complete disengagement.
Why this isn't a mandatory safety feature on ALL autopilots is a mystery; after all, the slightest dab on the brake pedal has disengaged cruise control for years.
Mine's the one with the brown stain at the back.
"Why this isn't a mandatory safety feature on ALL autopilots is a mystery"
The risk of pilots aspiring to the Mile-High Club and accidentally knocking the stick with a flailing limb while not actually sat at the controls? A bad time to disengage autopilot.
It's also a feature of autopilots on ships post the Torrey Canyon incident. However in the case of most military aircraft and supertankers there's no direct connection between the controls, so you can move them from stop to stop with no effect other than the autopilot disconnecting.
In the case of a light aircraft doing that would force the controls from full deflection one way to full deflection the other, which may not be great for the continued integrity of the aircraft. There should be a cut-off switch though and it sounds like the main causal factor was a lack of training in the intricacies of the system. I'm honestly surprised this doesn't come up more often in General Aviation as it's not unusual to find yourself flying two outwardly identical aircraft that have completely different avionics, radios, and autopilots.
As for training in use of an autopilot... satnav use has been included in the driving test now. Certainly, given the number of incidents involving them, driving lessons should include sections on common sense, following signs, map reading and not trusting technology!
'If you are on the ground, how do you know it is functional...
Also in this case it appears that the autopilot was functional, but operated incorrectly'
Many systems allow dummy loads etc. to be applied that then make the system respond as if it was in flight, although obviously that doesn't give you 100% confidence in the system. E.g. for height hold engage the autopilot and then adjust the altimeter reference pressure and look for a response.
In this case the autopilot was functional, operated correctly, but the operation wasn't understood by the pilot.
"Also in this case it appears that the autopilot was functional, but operated incorrectly"
To be clear, the autopilot was functional and worked correctly, but it was operated incorrectly by the pilot.
The pilot had been trained that circumstances would cause the autopilot to switch from normal law to alternate law, but in his panic, he forgot his training, as well as how to get out of a stall.
Surely the autopilot needs to clearly say what it's doing. Something like "autopilot disengaging, unexpected control inputs, pilot has manual control." Or whatever.
Or if it's deemed safer to correct for unexpected use of controls, on the grounds that most times this is accidental - then equally it needs to pipe up and say "error - unexpected control inputs."
But to silently counter what the pilot is doing seems like a really stupid way to design a system. What if the pilot has just noticed another aircraft, or a mountain? They're not trained, and so may panic and just reach for the controls without disabling the autopilot. Then you've designed failure into the system.
You just can't expect the same levels of training from private pilots as military and commercial ones get. It's not practical. Commercial pilots do regular simulator drills on common emergencies - so that when those warning alarms start going off they're prepared for them and know what to do. Those kinds of realistic simulators cost an absolute fortune, and time on them is limited and expensive. And you need to refresh that kind of practice regularly.
Even if a pilot knows what to do in theory, it's another thing entirely to actually do it right when the brown stuff hits the rotating air-movement device. That's why professional pilots have to drill.
'You just can't expect the same levels of training from private pilots as military and commercial ones get. It's not practical.'
I'm not expecting the same levels of training, but if they haven't been trained how to use an autopilot they shouldn't be using it.
Some military aircraft AFAIK also have a "panic button" which will put the plane level and at a safe altitude if the pilot becomes disoriented. But they have a lot of sensors on board and computers with full authority over the flight controls to allow for it.
But the cost of a military aircraft is far, far higher than those of many light planes used in training, which can also be old models fitted with basic and/or older avionics as well.
Even a button/switch on the controls can be activated by mistake, especially in a cramped space.
It could be dangerous if the autopilot disengages while you're looking at charts, checklists, computing a route, etc, just because you hit the controls, and you're the only pilot. Sure, glass cockpits and enhanced autopilots can do a lot more, make the pilot burden lighter and issue better warnings, but not everybody is so lucky to have them.
The report says the Cessna 172 was fitted with a Garmin G1000 glass cockpit and a GFC700 autopilot, so it had fairly advanced avionics. It wasn't fitted with a terrain avoidance system, unfortunately. It also had an autopilot quick-disengage button on the wheel.
But the system didn't report dangerous situations like the older Bendix KAP140 did, because Garmin thought it wasn't necessary, reasoning that pilots know they have to disengage the AP to take control.
It also looks as if the pilot changed modes several times, as if experimenting - quite dangerous in a solo flight close to the ground. At her licence level there was no formal requirement for autopilot knowledge, although during training it was used in high-load situations - and advanced systems like the G1000/GFC700 combination are not simple to use.
So it looks as if the system relied on the pilot being fully aware of the situation, while the pilot wasn't, and little help came from the system to warn about a dangerous situation.
The problem in a small non-fly-by-wire aircraft is that the control stick and the surfaces are directly connected mechanically. The autopilot can't tell whether movements are caused by the pilot or by air acting on the control surfaces. The pilot simply must turn off the autopilot if they want to take control of the aircraft.
'Cars can though and they're mechanically linked.'
The brakes tend not to feed back to the pedal at any time though; with a light aircraft you can feel the control surfaces being moved by the airflow through the controls. So something like a cruise-control cut-off isn't possible in a typical general aviation aircraft.
This applies to autopilots, and to flight control systems where the pilot doesn't really have full authority (both related in some examples).
Design of the man-machine interface has been *a* contributing factor in various crashes. China Airlines Flight 140. Or Aeroflot Flight 593. Many other examples.
(Please note the "*a*". Other factors exist.)
There are many examples of perfectly flightworthy aircraft falling from the sky, where if only the Man-Machine interface had been designed slightly better from the outset, then many of those crashes would almost certainly have been avoided.
What's annoying is that any informed layman could have written down some common sense requirements to ensure that such systems were better designed. It remains inexplicable how they managed to avoid such common sense so completely for so long.
Subsequent software or system updates, inserting the missing common sense, prove the point comprehensively.
You may disagree if you like, but then you're making the same error in judgement that's a root cause of these sorts of incidents. There's precisely zero valid counter-arguments.
To be fair, there's not enough common sense in the world to go round. There are a lot of basic decisions you have to make when designing that control interface. And they've got a lot of them right. But then you come to designing the more unusual cases. And there's often a dilemma, in that you may have more than one problem to try and correct for.
Another factor is that in military design - you may choose to train the pilots to not do some particularly dangerous thing, because that's cheaper than fixing the problem on the aircraft. Or fixing that aerodynamic problem may reduce the performance of the aircraft in other ways, and so not be desirable.
Similarly with commercial pilots being so highly trained, they're expected to handle a lot more than private ones.
I'm often amazed when watching those reconstructions of air crashes at just how much information the system is trying to push at the pilot at the same time. And I just don't believe that even the best-trained pilot can take it all in - while still having time to think what the warnings mean - and of course time to fly the bloody plane.
For example there was that Qantas Airbus, where the computer went bonkers and was pushing out master alarms and computer warnings so fast that the messages were just disappearing off the screen faster than the copilot could read them - or press the master alarm cancel button. The crew didn't panic in that case and did their diagnostics well, but I could well imagine that just becoming overwhelming and causing an otherwise airworthy plane to crash. As the computers confused the pilots on Air France 447 - and by the time the captain had got to the cockpit and worked out what was really happening, it was too late.
I heard about a really clever company on a Radio 4 documentary several years ago. They'd decided that private pilots struggle far more with warnings than commercial ones (who have the advantage of regular simulator practice for emergencies, and colleagues to help manage the crisis and split the work).
Their idea was that once you've got more than one electronic warning device going off, you're just not going to be able to take the information in fast enough. So they fitted private planes' warning systems with recordings of a real person's voice rather than a synthesised one - in this case the pilot's wife - on the theory (backed up by testing) that his brain was wired to react to information coming from a real person he knew.
Of course, if you're used to arguing over the satnav in the car - or just saying, "whatever you say darling" - then in this case you're probably going to die. So choose your recorded voice carefully.
Also, a bit odd/disturbing for the wife, who's going to have to record messages like "Warning, terrain!", "Pull up!" and "Stall!".
WRT using a familiar / wife's voice -
.... surely, after enough weeks of marriage, anyone's wife's voice has been effectively filtered out as irrelevant, ignorable and forgettable witterings about whatever happened to someone you neither know nor care about at her workplace, or in a soap opera - so giving really crucial warnings in a partner's voice will be dismissed and ignored with a reassuring "yes, Dear"?
"...so giving really crucial warnings in a partners voice will be dismissed and ignored with a reassuring "yes, Dear"?"
I don't think so, because wives get savvy to that technique and start using it against you. "Yes, Dear" tends to stop fast once the husband realizes he'd zombied the answer after the wife had said, "Well, I'm going shopping now." or "Well, I'm off to see my mum." (the former because you've just given the wife carte blanche, the latter because the mum's likely to cast you in a bad light, leading to marriage difficulties). Basically, the second thing a husband learns is to always pay attention to what the wife says, in case she's trying to numb or train you.
'What's annoying is that any informed layman could have written down some common sense requirements to ensure that such systems were better designed.'
With hindsight yes, the problem is predicting the unlikely things people will do in an emergency situation before that happens for the first time.
It's also notable that in the China Airlines example you give, Airbus had a fix but the airline had yet to install it.
"if a force is applied to control column while the autopilot is engaged, then the aircraft’s autopilot system will trim against the control column force"
What possible use could that serve?
If it is not feeding through your actual input, WHY SHOULD IT TRIM TO COUNTERACT THAT INPUT???
So basically if you pull up, it ignores the pull up and issues a nose down. ie, when in autopilot mode, any pitch input is inverted. MENTAL !
Sounds bonkers to me, and only likely to lead to disaster.
IMHO, if the manufacturer failed to mention this completely unintuitive behaviour in its documentation, this accident is basically their fault.
Sounds like an obvious consequence of a poor design, rather than an intentional one.
So incompetence, not malice.
If the aircraft is supposed to be flying straight and level and starts to nose-up, the autopilot needs to apply nose-down force.
So far so obvious.
If it has control of the trim tabs, then anything it does not know about that causes flight to deviate from straight and level might result in it adjusting the trim tabs.
If it doesn't have any way of knowing whether that attitude change came from the pilot or external forces, moving the control column would take it out of trim as the autopilot does exactly what it was designed to do.
It would seem that is what happened. And it's inevitable from such a design.
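The point about the autopilot's blindness to the *cause* of a deviation can be sketched in a few lines. This is a toy illustration with made-up numbers, not any real autopilot's logic: the controller's only input is the measured pitch, so pilot input and turbulence are indistinguishable to it.

```python
def trim_correction(pitch_deg, target_pitch_deg=0.0, gain=0.5):
    """Toy attitude-hold rule: trim opposes any pitch deviation.

    The only input is the *measured* pitch -- the controller cannot
    tell whether the deviation came from turbulence, a CG shift,
    or the pilot pulling on the control column.
    """
    error = pitch_deg - target_pitch_deg
    return -gain * error  # nose-up deviation -> nose-down trim

# Turbulence pitches the nose up 2 degrees: trim down, as intended.
print(trim_correction(2.0))  # -1.0

# The pilot pulls the nose up 2 degrees: the input looks identical,
# so the autopilot trims down against the pilot in exactly the same way.
print(trim_correction(2.0))  # -1.0
```

Without an independent measurement of column force (which fly-by-wire and hydraulically assisted designs can provide), there is no extra signal available to branch on.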
'What possible use could that serve?'
With fully mechanical controls you don't know if the control disturbance is due to the pilot or feedback from the control surfaces themselves. Normally it's feedback from the control surfaces due to turbulence etc. which the autopilot then corrects for, occasionally it's the pilot inadvertently touching them which the autopilot also corrects for.
On fly-by-wire or hydraulically assisted controls they normally fix pick-ups to the control inputs to identify where the input is coming from, but you won't get that on a light aircraft.
As the article mentions, the flight school didn't seem to have appropriate training in place for the autopilot, so whether the manufacturer mentioned it or not (they would have) is irrelevant.
On many light aircraft the autopilot doesn't move the actual control linkages (which are mechanical) but only the trim system, which has enough control authority to perform normal "autopilotable" manoeuvres.
In this case the pilot pulled on the yoke, causing the nose to rise. The autopilot doesn't know better than to try to correct this, so it trims down. The pilot probably instinctively pulls a little more on the yoke, but this causes the autopilot to trim down even more. The elevator loses a lot of nose-up control authority when the trim is full down, so it might not have enough to keep the aircraft climbing at lower speeds. When the pilot relaxes force on the yoke, the aircraft immediately goes nose-down into the terrain.
This could all have been avoided with proper training in the use of an autopilot on a light aircraft, and the training organisation is as much to blame as the pilot.
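That runaway-trim sequence can be made concrete with a crude step-by-step illustration. The constants below are invented for demonstration and bear no relation to real flight dynamics or the GFC700; the only point is the feedback loop: sustained pilot force drives the trim to its stop, and the accumulated trim is what takes over when the pilot lets go.

```python
# Crude illustration (invented constants, not real flight dynamics).
trim = 0.0        # 0 = neutral, -1.0 = full nose-down trim
TRIM_RATE = 0.2   # trim change per step while the AP sees nose-up pitch

for step in range(5):
    pilot_pitch_input = 2.0            # pilot keeps pulling on the yoke
    if pilot_pitch_input > 0:          # AP only sees "nose too high"...
        trim = max(trim - TRIM_RATE, -1.0)  # ...so it keeps trimming down
    print(f"step {step}: trim = {trim:+.1f}")

# Pilot lets go: the elevator returns to neutral, but the trim is now
# full nose-down, so the net pitch command is sharply negative.
pilot_pitch_input = 0.0
net_pitch_command = pilot_pitch_input + trim
print(f"after release: net pitch command = {net_pitch_command:+.1f}")
```

Each iteration the pilot "wins" the immediate tug-of-war on the yoke, while silently losing trim authority; releasing the yoke reveals the accumulated nose-down command all at once.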
"If it is not feeding through your actual input, WHY SHOULD IT TRIM TO COUNTERACT THAT INPUT???
So basically if you pull up, it ignores the pull up and issues a nose down. ie, when in autopilot mode, any pitch input is inverted. MENTAL !"
On the contrary, it is the logical result of adding an additional system to a directly manually controlled aircraft.
The autopilot does not see the control input. What it does see is the aircraft deviating from a normal attitude. It has no way to tell this from a shift in CG due to people or luggage moving, or aerodynamic effects from icing, bird strike, or structural damage. All it knows is that the aircraft has gone out of trim, and it tries to correct the problem, as it is designed to. This would be expected behaviour on small manually controlled craft, I would imagine.
Yeah, I'm waiting for self driving vehicles. I want mine with no windows, and certainly not to front or rear, as I presume they will be millimeters apart. I want a toilet, shower, kitchen, couch and TV, and I intend to use one or all of them on the way to work. I expect that once out of urban areas nothing other than gentle braking or acceleration will be required because there won't be any idiots either pedestrian or other drivers.
I expect my commute to work, should I choose to accept it, to be quite pleasant; of course, with a few other additions, I might seriously consider not needing the house.
I also expect that it will go off and get itself serviced while I'm at work, otherwise, why the hell is it autonomous. Perhaps if it can pick stuff up from B&Q and the supermarket while I'm at work as well...
"I expect my commute to work, should I choose to accept it to be quite pleasant, of course, with a few other additions, I might seriously consider not needing the house."
All this excessive commuting is really the cause. Services and goods need to move around, no doubt... but so do people, in proportion. All these meetings and/or collections of workers require a support system that in this day and age may not be as frugal as it once was, and certainly not as motivated by social interaction as it was in the past, given what's currently *masquerading as people "socially" interacting.
As I'm watching this transpire in downtown Chicago, I feel there must be a very large % of wasted travel knowing this urban infrastructure can support remotely connected *processing (etc), and these ain't factory workers assembling to be an assembly line.
It's also quite evident that a large majority of these cars are transporting only the driver... and I'd imagine it's the same path and time every day. This is, however, that sense of freedom and independence, self reliance (and perhaps control) that driverless cars will never replace.. which I know is the reason I enjoy driving.
"All these meetings and/or collections of workers require a support system that in this day and age may not be as frugal as it once was, and certainly not so motivated by social interaction as it was in the past given what's currently *masking as people "socially" interacting."
But in a competitive business world, as the saying goes, "Ain't nothin' like the real thing, baby." Face-to-face interactions (IOW, intentional INefficiency) impress, and could mean the difference between inking a lucrative contract and losing it to a rival who put forth the effort.
I used to interact with colleagues around the world via email. There was one colleague in Australia with whom I had a long-lasting and intense (but polite) disagreement that lasted for several months. The disagreement was settled within minutes of our meeting face-to-face when we could interact directly. Electronic communications put a distance between people and remove normal social interactions, even using video links.
"The disagreement was settled within minutes of our meeting face-to-face when we could interact directly."
It is remarkable. I've experienced that: meeting face to face someone I've "known" for a long time only through online or e-communications, and finding my impressions are so... just way off - not only what my mind's eye had created, but how mannerisms either reveal or weakly hide what the spoken words are. Even phone-only conversations create biases.
"But in a competitive business world, as the saying goes, "Ain't nothin' like the real thing, baby." Face-to-face interactions (IOW, intentional INefficiency) impresses and could mean the difference between inking a lucrative contract and losing it to a rival who put forth the effort."
I don't disagree... except for the blatant conspicuous consumption part. However, I suspect that contract signer that's wooed by this seems to be one of P.T.Barnums (sic) suckers.... perhaps a moth drawn to the flame you've provided... or perhaps competitive business has the perfect training wheel set to keep the classhole ball rolling...
I've watched enough Seconds from Disaster and Air Crash Investigation to know that this is not the first time such a thing has happened. You'd think these systems would be intelligent enough by now to auto switch-off if they are not within their boundary conditions for operation - hell, my car's adaptive cruise control does that.
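The "switch off outside the envelope" idea amounts to a handful of guard conditions evaluated every cycle. A hypothetical sketch, with made-up limits and parameter names (no real autopilot or cruise control works exactly like this): rather than silently fighting a sustained control input or winding the trim to its stop, the system disengages and says why.

```python
def autopilot_should_disengage(airspeed_kt, trim, pilot_force_n,
                               min_speed_kt=70, max_trim=0.8,
                               max_pilot_force_n=20):
    """Return (disengage, reasons) from simple envelope checks,
    in the spirit of adaptive cruise control cutting out when it
    leaves its operating conditions. All limits are invented."""
    reasons = []
    if airspeed_kt < min_speed_kt:
        reasons.append("airspeed below envelope")
    if abs(trim) > max_trim:
        reasons.append("trim near its limit")
    if abs(pilot_force_n) > max_pilot_force_n:
        reasons.append("sustained force on controls")
    return (len(reasons) > 0, reasons)

# Sustained pilot force plus trim running toward its stop:
# disengage loudly instead of silently trimming against the pilot.
disengage, why = autopilot_should_disengage(85, -0.9, 25)
print(disengage, why)
# True ['trim near its limit', 'sustained force on controls']
```

The engineering difficulty, of course, is choosing thresholds that catch genuine conflicts without nuisance disconnects from turbulence or an accidental knock in a cramped cockpit, as other commenters note above.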
Sad for all involved. Multiple mistakes by a Pilot combined with a poor autopilot combined with a bizarre 'no requirement to achieve a competency in the Autopilot system' before a solo flight. Pilot fail, Regulation fail, Training fail.
Having said that regulation for In-Car systems need to take a look at Human Factors and interactions with systems, Aviation has long been ahead in this research and for good reason: lots of lives depend on it.
For good automation, take the Airbus control systems that allowed the A320 to ditch in the Hudson: the pilot was able to hold the stick all the way back while the automation kept the plane on the edge of the flyable envelope. I've been reliably informed a Boeing 737-800 does not allow that, as the pilot is the master, and it most likely would have clipped a wing and cartwheeled.
There should be minimum standards that need to be enforced for how systems can or cannot override humans and also how warnings and alerts are handled.
Authorities need to get their act together on car systems ASAP, unless they are happy to add regulations based on a tombstone mentality. Right now it looks like 'haven't a clue' disguised as 'laissez-faire'.
There's a problem with that Airbus fly-by-wire though. I'm sure Sullenberger was up with all the failure modes of his aircraft - because he'd written books on safety and emergency responses.
But one of the probable causes of the Air France 447 crash was that the pilot who stalled the aircraft had been trained that he couldn't stall it. Which is true in normal mode. But when the computer system goes into certain modes, such as when it can't trust its sensor inputs, then it stops protecting the pilots from stalling.
So it's a different control philosophy, which could have different outcomes depending on circumstances and pilot training and familiarity with their systems. Most simulator training is for "normal" type emergencies, like landing with faulty undercarriage or single engine failures / fires. Because simulator time is expensive, and airlines want their pilots flying not practising. They don't tend to train for total engine failure or computer/sensor failure regularly. Which will hopefully change.
There is definitely a problem with a new generation of pilots who are used to automation first, as opposed to purely physical flying with automation added. As the Asiana LAX crash showed, some of the training is not covering basic flying skills, so when the ILS was off the crew were no good at hand-flying the plane - bonkers.
You're probably right that the 447 pilot was over-reliant on the systems.
In Flight 447 there were two junior pilots on the deck, the senior pilot was on a break.
The pitot was blocked for 58 seconds; they just needed to keep the plane level and things would have been back to normal within a minute.
Instead one pilot pulled back on the stick, then even when stalling and dropping like a stone pulled back even more, when they needed to put the nose down to gain speed.
Bottom line was they panicked and did not have the instinctive knowledge to calmly deal with the situation.
Sullenberger had 20,000 flying hours and military jet experience, and there are very few pilots flying today with that experience. There was an EasyJet documentary on new pilots and, having seen the trailer, I could not watch it. I fly a lot, and having numpties like that trained up gives me no confidence.
The reality is, if every plane in the sky experienced some very serious event you'd be lucky if 50% of those planes got down safely. It's just that most flights do not have really serious events.
'As the Asiana LAX crash showed, some of the training is not covering basic flying skills, so when the ILS was off the crew were no good at hand-flying the plane - bonkers.'
Minor point: the Asiana crash was at SFO, not LAX. Interestingly, when the NTSB investigators asked a range of pilots, including some from other airlines and the FAA test pilot who certified the Boeing 777, the only person who could correctly explain the autopilot modes, and that the auto-throttle would cut out in the situation the Asiana pilots found themselves in, was the Boeing Chief Pilot.
It's also a good reminder to always fasten your seat belt for take-off and landing as of the three people who died, two hadn't. One of whom was probably killed by being run over by a fire truck. Twice.
I have a habit of keeping them fastened all the time. After all, that's what I do in a car, and they really aren't so uncomfortable that you have to unfasten them as soon as you can.
You know, heavy turbulence, etc. etc.
Another fine example of pilots fighting the automatic controls, losing the fight and paying with their lives.
And finally being blamed by the people who designed and/or approved the faulty system.
What makes this even more bitter is that pilots were ordered by management to use the autopilot while landing. Because every non-techie knows that complex automated systems are far more trustworthy than trained professionals.
'Because every non-techie knows that complex automated systems are far more trustworthy than trained professionals.'
The historic accident record would indicate that, as far as aviation is concerned, that is in fact correct. The problem is you don't see the accidents that would be happening if all landings were made manually.
"The historic accident record would indicate that, as far as aviation is concerned, that is in fact correct. The problem is you don't see the accidents that would be happening if all landings were made manually."
Why would you say that? You seem to think that most landing are done on autopilot. That's not correct, most landings are done manually. Usually, only landings in poor weather conditions are done on autopilot.
Also, take into account that pilots are blamed, even when it's blatantly obvious that it's the precious autopilot who crashed the plane. So you can't trust the historic accident record either.
'That's not correct, most landings are done manually.'
The landings that are done manually have all the complicated systems, the ones you seem not to trust, turned on to help the pilot fly the aircraft smoothly, having them do it fully manually would be far more dangerous. Notwithstanding the pilots having to make 4 manual landings in 90 days to keep their licence current, the auto-land is consistently better at capturing the localiser and glide-slope and maintaining it all the way to touchdown. On long haul that does mean most of the landings are 'manual' as they barely make enough flights to stay current, on short haul I believe some airlines actually forbid manual landings unless required for currency as humans are less efficient at it.
'Also, take into account that pilots are blamed, even when it's blatantly obvious that it's the precious autopilot who crashed the plane.'
No, you're going to have to provide some actual proof for that, maybe a link to a few accident reports where you can prove the pilots were unfairly blamed. Otherwise you're going down the conspiracy theory route of saying you can't trust the evidence because that's what 'they' want you to think.
It's worth noting, when I say the historic accident record I mean the fact accident numbers are at an all time low, even before you normalise for the increased rate of flying it has never been safer to sit in a commercial airliner irrespective of how it's flown. For example there were no passenger jet crashes in 2017, beating the previous year which was already a historic low.
So even if all the recent accidents have been due to the autopilot, it's still safer than it was a few decades ago, when there was more human interaction and fewer people were flying.
Graphs to prove my point:
Safe AVs that don't have accidents kinda put them out of business, don't they?
Will they be hiring teams of hackers to find ways into AVs and cause them to have accidents, thereby necessitating the need for insurance for that little bit longer? Or will they infiltrate AV manufacturing, putting saboteurs on design teams and maintenance crews?
I think I quite like the look of the future; it could yet turn out to be a cyberpunk dystopia after all - won't that be fun?
"That was my first thought but then i realised that a safe car means "No insurance necessary.""
Except there's always Murphy. Only a clairvoyant car would anticipate the bridge it's driving on suddenly collapsing mid-span. Even if they cut the rates down, as long as they pay out less than they take in, they're happy.
Ah, but, you're conflating the AV manufacturers with the insurers.
What you describe is from the insurers' perspective - as long as they take in more than they pay out, they're happy.
But that doesn't cover the AV manufacturers' PR issue of "What do you mean your car isn't safe enough for me not to need insurance?"
If the bridge collapses then I, as the passenger, do not expect to pay out to the family landed on below - I expect the owners/maintainers/builders of the bridge to do that. So, if your AV needs insurance, it means you can't guarantee that it will drive accident free - in which case, remind me again how AVs are safer than human beings.
No, the AV manufacturers need AVs not to need insurance - otherwise it's open to debate whether the vehicles really are safe. Don't blame me, I'm only the messenger - but I live in the U.K., where the Press will be all over you like a rash the second anything goes wrong and advocating for you and your family to be strung up from lampposts and left swinging in the wind.
"No, the AV manufacturers need AVs not to need insurance - otherwise it's open to debate whether the vehicles really are safe."
Guess you never heard of a "no-fault" crash or "acts of God", where accidents happen through no fault of any party. That's why I mentioned the bridge collapse. There are plenty of "no-fault" accident possibilities, such as lightning striking a nearby tree, a blind corner into a sudden fog bank, or a brake line spontaneously bursting. That's why some drivers need to carry "no-fault" insurance by law (depends on the jurisdiction).
'No fault' isn't the issue. What's at stake here is the manufacturers' PR: unfortunately, people aren't rational and all it takes is one person to publicly question the safety and, suddenly, it's a whole different game. No fault insurance is an oxymoron: if I'm not at fault then I'm not liable - and I don't need insurance to cover things I'm not liable for.
"No fault insurance is an oxymoron: if I'm not at fault then I'm not liable - and I don't need insurance to cover things I'm not liable for."
No, it's not. Otherwise, disaster insurance (fire, flood, etc. some of which are also required by law) would be an oxymoron, too. No-fault and collision insurance cover repairs beyond your control. Otherwise, you'd be paying the entire cost (up to and including a total replacement) out of pocket. Insurance in general isn't a protection against liability but broader: a safeguard against sudden hardship, of which liability happens to comprise a subset.
That I agree. ALL vehicles, human-driven or not, generally need protection against accidents not of their making, so "no-fault" would be the default for AVs. Liability would have to be taken out by the manufacturer instead of the driver, though, in case an accident is traced to a manufacturing flaw.
So, effectively then (as far as the passengers are concerned), no insurance required - although AVs are a long way from that yet, I think, given that they can't (yet) be left to their own devices.
Of course, there is the question of whether owners would be allowed to plead ignorance when a vehicle with a known problem is involved in an accident or whether we will all be legally expected to keep up with CVEs and avoid such vehicles - I imagine recalls will be more frequent (and possibly announced more vociferously by the manufacturers?)
So, after reading the article, you decided that the problem is that AVs would be TOOOO safe???
So, after reading about a dystopian cyberpunk future in which insurers put sleeper agents into AV design and manufacturing teams to sabotage them, you decided that the poster was being serious?
A self driving, electric, shared car may be fine for some large metro areas and people. But, here in the mid-west of the US it isn't going to be very useful. And, in places like the Rocky Mountain region a 200 mile range isn't going to get you anywhere and waiting for a shared car to become available in some small places would be absurd. Not everyone lives in Silicon Valley, NYC, LA, London, etc. A lot of real people have a real need for real vehicles on their own schedule.