In the time a human driver goes OH F*** <bump>...
Your average self-driving car will have time to run face recognition,
browse through your credit record and your rating on social networks??
The question of the infamous trolley problem for self-driving cars has finally been answered – by humans. The people have spoken. Neural networks, take note... Imagine a robo-ride is about to crash into either a kid or a bunch of elderly people. It cannot brake in time, nor swerve out of the way. Where should it go? Who should …
Exactly my opinion having been there (and you can leave out the asterisks, I got as far as Oh F <crash>).
The reality, I believe, is that in most cases either there is insufficient time to make a decision that will materially affect the outcome or the vehicle will be out of control anyway.
Yeah, if the car has time to actually take any action it has time to break and not hit anyone. The only time a self driving car should hit a pedestrian is if they have literally jumped in front of the car. The 2014 Corvette has a stopping distance of 90ft / 27m when traveling at 60mph, or 1 second if you prefer that measurement.
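Those figures are easy to sanity-check with constant-deceleration kinematics (a rough, idealised model that ignores reaction time and takes the quoted 27 m figure at face value):

```python
# Back-of-envelope check of the quoted stopping figures, assuming
# constant deceleration from v down to 0 over distance d.
def braking_profile(v_mph: float, d_m: float):
    v = v_mph * 0.44704            # mph -> m/s
    a = v ** 2 / (2 * d_m)         # deceleration, from v^2 = 2*a*d
    t = v / a                      # time to stop at that deceleration
    return a, t

decel, stop_time = braking_profile(60, 27)
print(f"deceleration: {decel:.1f} m/s^2 (~{decel / 9.81:.2f} g)")
print(f"time to stop: {stop_time:.1f} s")
```

The "1 second" only holds for covering 27 m at a constant 60 mph; actually braking to a halt over that distance takes roughly two seconds, at around 1.4 g of deceleration.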
This is not actually a thing that needs to be programmed into the car.
It's pointless trying to guess what has the least harm. A child is lighter and more resilient so hitting it instead of an older person might do less harm overall. And that child might grow up to be the next Hitler or Nigel Farage, so the car might be doing us all a favour by killing it anyway. Why not just calculate the action with the lowest speed of impact and do that?
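That "lowest speed of impact" rule is at least simple to state. A toy sketch, assuming some upstream planner has already produced candidate manoeuvres with predicted impact speeds (all the names and numbers here are made up):

```python
# Toy sketch: given candidate manoeuvres, pick the one predicted to
# result in the lowest impact speed. The candidates and their predicted
# impact speeds are assumed to come from some upstream planner.
candidates = [
    {"action": "brake_straight", "impact_speed_mps": 6.0},
    {"action": "swerve_left",    "impact_speed_mps": 9.5},
    {"action": "swerve_right",   "impact_speed_mps": 0.0},  # clear path
]

best = min(candidates, key=lambda c: c["impact_speed_mps"])
print(best["action"])  # -> swerve_right
```

The hard part, of course, is not the `min()` but producing trustworthy impact-speed predictions in the fraction of a second available.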
@Naich (and others)
You're rationalising. Yes, a child (or younger adult) is much more likely to survive with no long-term harm than an older person. "Life begins at 40" is all about the stage of life where your body really starts to noticeably lose its capacity to recover from adversity.
But these kinds of rationalisations are altogether excluded from a survey posing binary questions. At best, your rationalisations put you into a survey's "don't know", or get lost in a middling number on an "on a scale of" answer.
Yeah, if the car has time to actually take any action it has time to break and not hit anyone.
The "evil" pedant in me wonders about usage of the word "break" here. "Break" as in fall apart? Or perhaps you meant "brake" as in stop the car? The first was a rather amusing muse of a car disassembling itself as part of the avoidance process so have an upvote.
> Exactly my opinion having been there (and you can leave out the asterisks, I got as far as Oh F <crash>).
Interesting, as I am currently recovering from a crash I had less than a month ago. My experience was the opposite. As the crash was happening it seemed time slowed down immensely, and I had all the time in the world to make decisions.
My problem wasn't making a decision. As you mentioned, the problem was that I couldn't really make the car react fast enough to do anything apart from pick where the inevitable impact was going to hit (the car was already skidding so there was limited grip to do any corrections). I managed to slow it down a bit, and managed to position the car so that the impact would be as far from me as possible (front passenger side, which was unoccupied), and that is about it.
Although I do agree, the exact words used when I realised what was about to happen was "oh F***", then had the crash. Not the most eloquent of potential final words, I admit, but it isn't like anyone would have heard it anyway.
I only have one anecdote about having a crash (and I don't really want to deliberately put myself in that situation in order to get more datapoints), so it is interesting to hear others' stories. Although I will say I never realised quite how fast 60mph actually is until I was approaching a wall at that speed with limited control. When normally driving it always felt quite slow.
My one experience of running my car off the road went like this:
"Oooh, the back end's coming out, this will be fun"
"I've got it, I've got it, oh s*** I haven't got it"
"I think the car's going to stop before I hit something"
"Oh dear, is that a wall approaching?"
"Ahh, well that appears to have stopped me"
I was fine, the car wasn't. All entirely my fault, and very lucky there weren't any cars coming the other way. I suspect if the car had been equipped with stability control (this was a while ago) I would have got away with it.
I have experienced the "time slowing" thing. People think it's an exaggeration but it's not.
I was driving through a rainstorm at night. Had navigated to a random point on a map, so literally had no idea where I was or where I was heading.
Emerged from a forest, into a little village, miles from anything. Only the pub was actually lit up, the rest was just houses and incidental lighting. Passed the pub, 20-25mph or so (it was seriously belting down), followed the road, and ended up with a bridge in front of me.
Literally, I can remember my entire thought process. A sign on a pole appeared and passed the front of my bonnet. Through the rain-soaked windscreen it was tricky to make out but I saw it and my brain processed it. It was a little car. Going downhill. Into some wavy lines. I *know* I know what that means, but I can't think of it. Literally - from my brain's point of view - many, many, many seconds of debating happened as I tried to reason what the sign was. Meanwhile I drove up onto the "bridge"... Very steep this bridge. I wonder why they have a bridge in the middle of nowhere.
And then brain finally decided that it had thought long enough and brought back reality to me. Not bridge. Harbour. Not "the road is made of bacon" but... this is the end of the harbour and you're about to plunge into the ocean. Amazing, considering I had *zero* idea I was near the ocean at all. Never pressed the brake so hard in my life and it appeared to take forever to stop - I can remember at least "ten seconds" of me just pushing the brake to try to hasten the stopping, and it not happening... after the long internal conversation to do so.
I literally spent the next ten minutes with my car at a 25-30 degree up angle on the ramp, full beams shining off into the sky, the bottom of the beam just catching the top of 12-foot rolling waves as they smacked against the ferry-docking-ramp I'd just driven up.
1) I can't swim.
2) I did not know I was near the ocean, so would have been utterly unprepared.
3) It was 12-foot-waves. No exaggeration.
4) Because it was a ferry port / harbour there was no easy way back up to dry land even if I could get out of a car that fell into water bonnet-first.
5) It was pitch black, middle of the night.
6) Because of the huge rainstorm, nobody would have heard a thing. The pub was shut, it just had lights on.
7) I'd just split up with my wife and gone on a drive to escape... so nobody was coming to look for me even if I was missing.
I sum those to equal "death", personally. It's the closest I've ever come to it.
However, when I recovered from the more-than-slight shock, I realised several things. Including that the sign I "passed" was parallel to the passenger door. I'd barely encroached a few feet up the ramp. Given the conditions, that tells you how slow I was being anyway, but there is NO WAY I had time for the internal-conversation that took place.
I can remember the length and detail of that internal conversation, which must have been literal fractions of a second, and it far exceeds reality. Either your brain massively overclocks in an emergency, to get more done in a short time, or something weird happens to your perception of time.
"Although I will say I never realised quite how fast 60mph actually is until I was approaching a wall at that speed with limited control. When normally driving it always felt quite slow."
I like to do this to people (my kid especially). Drive along normally. Pick a landmark. A lamppost. An old lady. Whatever. Now, in your head, picture what it's like to hit them as you drive... literally see how quickly they would go from being "in front of the car" to "up in the air behind you before you could even really brake". The distance you cover at motorway speeds is stupendous, but it's considerable even along a side road.
There's the old lady... here we go... BANG-CONTACT-FLING-SPLAT as the front/windscreen/roof/back of your car passes the point she's standing at. It's amazingly conducive to realising quite what speed does.
Some studies of this effect attribute it to the brain not discarding memories from a point of intense emotion, especially fear. They have done some tests by having people fall off buildings (they were OK with it) and the brain does not appear to overclock. I don't know if anyone's found something different, but that's what I read a while ago.
I'm glad you did not fall into the harbor. That sounds like a terrible experience.
@Lee - great post!
"your brain massively overclocks in an emergency, to get more done in a short time"
Most definitely this.
Also, we humans don't directly measure our stream of consciousness in seconds, minutes etc, but by experiences/events happening. That's why we get the impression that time flies when having fun or when highly concentrated on a piece of work.
> Either your brain massively overclocks in an emergency, to get more done in a short time, or something weird happens to your perception of time.
From your own account, the amount of time it took you to recognise the sign suggests the second one. If your brain was faster, you'd have recognised it sooner. In fact, your brain's emergency mode did not lead to you reacting quicker. As others have said, you just remembered more detail afterwards.
"Either your brain massively overclocks in an emergency, to get more done in a short time, or something weird happens to your perception of time."
I strongly believe it's the latter. The conscious parts of our brain are a bit delayed, and often our "rational thinking" is just rationalizing things that have already happened rather than truly coming to any decisions. The conscious brain will also lie harder than Trump that this isn't the case at all and that it's totally in control at all times.
I suspect what happened in your case is that the lower parts of your brain recognized the mortal danger you were in, and started screaming "stopstopstopSTOPSTOPSTOPDEATHSTOPDEATHSTOPYOUMORON". Your foot was on the brake while your conscious mind was still going, "Huh? What's that noise? What're you going on about now?" Then, when the car stopped, the screaming subsided into something it could understand - a sign, harbour, water, danger, death, must stop. Already stopped? What? Fake news! I'm the decider! Clearly I did the research, analyzed the situation, and decided on the best course of action. It took a while, it sure was a hard job, but I did it! Nobody else could!
And your lizard brain rolls its eyes and says, "Sure, buddy, whatever you say. I reacted in 200ms and saved us cause that's something I was evolved to do. But you can't do SHIT in 200ms so it must've really been half a minute. Sure."
Either your brain massively overclocks in an emergency, to get more done in a short time, or something weird happens to your perception of time.
Or your mind retrospectively creates false memories of what you experienced during the moments of extreme stress. It's just as likely that your conscious cognition isn't doing much of anything useful in the moment.
I've had a couple of near-death experiences, a couple of auto accidents, and perhaps a few other similar episodes of brief, unanticipated, highly stressful stimulus. I'm very suspicious of my memories of them. Even where there were witnesses to corroborate my recollection of the basic events, I suspect I wasn't doing much in the way of conscious thinking at the moment.
A long tradition of psychological and neurological research (Helmholtz, Libet, Damasio, etc.), much of it pretty methodologically sound, pours cold water on the idea that our conscious thought processes have much opportunity to influence our decisions and actions when first responding to a stimulus. They're just too damn slow. We can hope - I do - that conscious thought processes help condition our preconscious / unconscious ones, so that future decisions and actions will tend to be what we'd do if we did have time to think about our responses. But that's about it.
Yes a computer can do many of those things. Your reaction time is measured in tenths of seconds. Not to mention, the car will ALREADY be tracking every person/object in view so it doesn't have an "Oh ****" moment.
I do sort of agree though. A human doesn't try to decide who to hit, so for this tiny edge case SHOULD the AI be making these decisions? It should probably work out who it can hit in the least damaging fashion and how many people to take out, nothing more.
the car will ALREADY be tracking every person/object in view so it doesn't have an "Oh ****" moment.
Almost... More likely:
the car will ALREADY be tracking every person in view and looking up their purchase history, credit rating and recent search topics so it can select irrelevant adverts to display on the car doors.
The computer won't be going "Oh F**" like the human driver would.
The computer would probably not have got into that situation in the first place. If there are bad road conditions it would have slowed down.
It won't be distracted like the human who is changing the station on the radio or reprogramming the satnav - it will always be paying attention.
If there are pedestrians it would already have scanned and judged them whilst they were walking along the side of the road and determined which would top its kill-list should they step out - it wouldn't do it only when they became a problem...
I hope the computer in these self-driving cars is better than the one on my Toyota.
Its road sign recognition system often detects speed limit signs down side roads and displays them as if they apply to the road I am on.
Several times it has "detected" speed signs of 80 and 90 mph in the UK.
Its best trick is to suddenly decide that all the UK speed signs are now in kph and warn me that I am going too fast. Then, after a few minutes, they are back in mph again.
Recalibrating the camera made no difference and other owners have reported the same issues.
Self-driving cars are really good at maneuvering and braking the car, better than almost all human drivers. They are relatively terrible at identifying things, currently worse than most drivers, so we have a way to go before making value judgements about what to hit. Once we get there it's time to apply Steinbach's Guideline for Systems Programmers: "Never test for an error condition you don't know how to handle."
And likely to stay that way for a long while, judging from the fragility of current automatic driving machines (I avoid the term AI as they can't really be classed as such).
A Waymo would very likely kill the woman in a large flappy coat pushing a buggy (unrecognised shape) to protect a poster of Tommy Robinson ("A loathsome, obnoxious, repellent individual").
The time slowing down in an accident situation is very real, I have experienced it more than once in cars and on motorcycles.
As for self driving cars, my last car - a Nissan had some autonomous features, including forward emergency braking.
A lady not looking my way stepped into the road in front of my car, she then looked my way and saw a car and stepped back. I had gone for the brake but the car had already started braking and pulled up about 2 metres short of the pedestrian.
I've since changed to a Toyota with even more autonomy and love the safety stuff.
"The time slowing down in an accident situation is very real, I have experienced it more than once in cars and on motorcycles."
Hmm, last time I came off my bike I remember thinking "You take a long time to slide to a stop when you come off at 110 km/h". It might have been time slowing, or it might just be that you take a long time to slide to a stop when you come off at 110 km/h.
Actually, it was likely more than 110 km/h, I was doing the speed limit, the guy behind me wasn't, I may have been sped up before actually hitting the road.
The time slowing down in an accident situation is very real,
Indeed it does. I rolled a drag racer at half track... managed to think and tried stepping on the brake (while upside down... brilliant), pulling the ripcord for the parachute (straightened the car out and actually slowed it down pretty quickly), and playing with the steering wheel and stomping on the brake pedal amongst other thoughts like "why is everything upside down?" and "why aren't the brakes working?". Actual time of the "event" was maybe 5-10 seconds. The mind is a strange thing at times. Luckily only damage was to the car and my sense of immortality (I was in my 20's).
Sidebar... I stopped racing after that season.
"I rolled a drag racer at half track"
Did you get anything good? All the drag racers I have ever known (self included) spend all their loose change on spare parts ...
I high-sided a bike at Thunder Hill once. At about 140 MPH ... a friend & I were practicing drafting, and swapping the lead back and forth ... He cut in a trifle early and my front tire hit his rear as we were accelerating out of a sweeper. Not good with hot sticky race rubber. I remember thinking "Well THAT was a daft thing to do! This is going to hurt. Pull in your arms & legs & get ready to roll. Shit, the wedding is in a week, SWMBO is going to be PISSED! I wonder if I'll be able to get a beer in the ER? Hopefully Doug will get the bike back to the house for me." and then I hit the deck. I wasn't in the air for more than a tenth of a second or so.
So once the "moral" decision has been made and "we" have decided that, if the car has a single occupant and there is a choice between killing 6 cats or the occupant of the vehicle, the occupant gets it. Who's gonna buy the car that will actively choose to kill them?
Will that be an additional feature like alloy wheels or airbags, e.g. if you buy this add-on when you spec your new vehicle the car will always try to save your life and always mow down push chairs.
When it's not your life on the line you're obviously more willing to throw the "driver" under the bus, but when it comes to the choice of killing yourself or mowing down a family that you have no vested interest in, will the decision be the same?
I'm not worried about you trusting a self driving car with your life. I'm worried about the number of people who think they are good drivers and are not, and their possible effect on other people's lives. Like that man who drove an SUV and caravan the wrong way down a motorway and killed an innocent person driving the right way.
Some years ago a study showed the majority of US drivers thought they were above average, whereas a much smaller minority of Swedes did. The Swedish death rate per mile on the roads is less than one third of the US rate. These things may possibly be related.
@Voyna i Mor
The No. 1 root cause of accidents lies between the wheel and the brakes. Eliminate that and you will make the roads statistically much safer.
An autonomous car follows the road rules, doesn't drive when tired or under the influence, doesn't think it drives better than the other cars, and can react much faster than a human to an obstacle appearing in front of it.
All these trolley problem variants are extremely marginal situations, and whatever their outcomes the number of victims will be peanuts compared to the lives saved by self-driving cars.
"All these trolley problem variants are extremely marginal situations, and whatever their outcomes the number of victims will be peanuts compared to the lives saved by self-driving cars."
Two problems. First, edge cases don't STAY edge cases. Second, these "trolley problems" raise serious questions of priority, which can never get a satisfactory answer on account of there always being a loser (and a dead one at that) at the end, and NO ONE wants to be that loser.
NO ONE wants to be that loser.
So it's down to who gets the choice. Back to familiar territory now.
But that's not quite the whole story. Not everyone who can afford a Chelsea Tractor uses one to drive little Quentin and Aurora to the school gates. And taking risks turns out to be good for you: cyclists have longer life-expectancy than non-cyclists despite a few of them getting killed on the roads.
and NO ONE wants to be the victim of a drunk driver.
Over the last decades, road casualties have largely decreased, thanks to better cars, tighter regulations on speed and drugs and a general awareness that the road is not a jungle. Now we can achieve another step by removing the human factor. There will still be casualties because no algorithm is perfect, but we can get pretty close to 0.
As others have pointed out, stick to a few simple rules and don't try to have a solution for all situations: protect the passengers (they have put their trust in the car), stay on the road, brake, don't swerve.
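Those few simple rules amount to a policy with essentially no branching. A hypothetical sketch (not any vendor's actual logic):

```python
# Hypothetical sketch of the "few simple rules" policy: stay in lane,
# brake as hard as conditions allow, never swerve off the road.
def emergency_action(obstacle_ahead: bool, can_stop_in_time: bool) -> str:
    if not obstacle_ahead:
        return "continue"
    # Whether or not the car can stop in time, the response is the same:
    # stay on the road and brake. No swerving, no value judgements.
    return "brake_in_lane"

print(emergency_action(obstacle_ahead=True, can_stop_in_time=False))
```

The point of the sketch is what's absent: no identification of who is in the way, and no weighing of lives, just maximum braking in lane.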
Doctors face difficult choices every day, and our societies as a whole have accepted that not everyone can always be saved, however hard it is for the dead's families. Why wouldn't we accept it for cars?
"Doctors face difficult choices every day, and our societies as a whole have accepted that not everyone can always be saved, however hard it is for the dead's families. Why wouldn't we accept it for cars?"
Because you're confusing macro with micro. Society accepts there will be losses, but the perspective always changes when it gets personal. IOW, it's always "someone else" until it's "you". The Trolley Problem tends to force the "you" part of the problem.
"Because you're confusing macro with micro."
I'm not confusing anything. I know full well that if I'm the one behind the wheel, I'll choose to save my kid and sacrifice 5 strangers. This doesn't change the fact that the society would be better off if self-driving cars ruled the roads.
"The Trolley Problem tends to force the "you" part of the problem."
And this is precisely why it's pointless to try to solve it for the whole population.
"Second, these "trolley problems" raise serious questions of priority,"
That will only happen when automated systems can make far more complex discriminations than people can currently manage, and very much faster.
At which point the situations may well not arise in the first place.
"Second, these "trolley problems" raise serious questions of priority, which can never get a satisfactory answer on account of there always being a loser "
And since there really is no 'correct' or 'satisfactory' answer to the trolley problem, why even try to get a computer to solve it? Why expect a computer, however intelligent or dumb, to 'solve' a problem that no human can 'solve'. Much better to let the computer get on with driving, and if a collision is unavoidable, then that's that.
In Sweden you can drive for an hour without seeing another motorist -
As I expect you can in Montana, or Wyoming, or the Dakotas, or Iowa, parts of Texas or California .....
But not of course in the populated parts of Sweden, where driving has become *a lot* tamer since my mother and her generation treated it as something of a free-for-all.
Can't believe saying something one owns will ever be unhackable got upvotes here, where most of us know that physical access == game over.
Do you work for the Oz (or US) government wanting LEO-only backdoors?
How many DVDs or games have had their DRM stay unhacked?
"Should be" is not a viable strategy...in either computers or stock market trading.
Think how much money such a hack would be worth (and of course, it would pre-exist for people the government thinks are "worth it") - and who'd benefit from selling it....it'd last as long as the DHS luggage master keys at most.
That makes a good case that there needs to be some sort of standard for how it should behave that all cars must obey.
Save the passengers, f... the rest. It's my car, it's supposed to protect me. What's so hard to understand?
Let's (all) be frank :P You buy a car in order to get you safe to your destination and damn the others.
"...the "moral" decision has been made and "we" have decided that if the car has a single occupant and there is a choice between killing..."
Here's the thing - the mechanics involved in a crash are highly complicated, and human beings and their internals are highly complicated. Therefore it's not always possible to anticipate who is going to die / be severely injured. The only thing that can be predicted is impact and speed. Many people have survived horrendous impacts, and many people have died / been severely injured from seemingly trivial collisions.
I tend to agree with what many people have posted already - autonomous braking on a car is already pretty good and the brakes on modern cars are stupendously good. If there is not enough time to bring a car to a stop, there is certainly also not time to significantly change direction/target.
It's the act of crossing the street outside of the municipality-approved crossing path, an egregious offense in any stuck-up, prudish culture that most definitely thinks it knows better than you. Another side-effect of the nanny culture, except this one dates back at least to the Nazis before 1939.
I know because my grandpa told me about how he had found Germany back in 1938 when he was visiting. He found the local constabulary to be very keen on people crossing along the dotted path, and woe to anyone who tried to skimp.
Personally, if there are no cars coming in any direction, I am not going to wait to get to a crossing, nor will I press the button, I'll cross wherever I am. I see no reason to hold up traffic just for my personal benefit. Of course, if there is traffic, I'll be very careful about it.
"It's the act of crossing the street outside of the municipality-approved crossing path, an egregious offense in any stuck-up, prudish culture that most definitely thinks it knows better than you."
Often with traffic engineers that hold pedestrians in contempt. There's a particular pedestrian crossing across a major road near me. You push the button to cross the lanes going one way, wait up to ten minutes, whether or not there is any actual traffic, cross to the large traffic island in the middle, then repeat, only now you have to wait for the entire programmed cycle. Anyone that lives around here simply jaywalks and ignores the buttons. This has become a really big example to the students of the school that has their main gate very close to said crossing. They learn how to jaywalk at an early age.
After the fall of the Berlin Wall I was in Munich with a former colleague, an East German engineer. The road was completely free of cars, but a number of pedestrians were waiting for the Green Man.
He marched into the road, turned round to face them, and bellowed "Sie sind Schafe!"
Personally I definitely push the button. As a pedestrian you have just as much right to be on the road and under no obligation to be held up to keep traffic moving, although for some reason our illustrious leaders claim to encourage walking but go out of their way to slow pedestrians down as much as possible.
If they give pedestrians plenty of places to cross the road safely and without much wait, without making you walk half a mile to find a crossing, then you could maybe be a bit harsher on people crossing just anywhere.
It's not so much about the young, but the less able or elderly should not have to rely on the kindness of the odd motorist just to be able to cross the road.
As a pedestrian you have just as much right to be on the road and under no obligation to be held up to keep traffic moving,
This is an interesting part of the UK highway code that I never really noticed when I was doing my test:
Give way to anyone still crossing after the signal for vehicles has changed to green. This advice applies to all crossings.
"Don't run people over" is hopefully obvious to everyone, but "give way" goes further. And while pedestrians have their own section of the code and aren't meant to start crossing on a red light, once they're there you are expected to give way.
Just curious, how does "give way" go further than "don't run people over"? Had you considered a third option like "knock them gently out of the way"?
Compare a zebra crossing across a standard two way, two lane road. Cars are meant to give way once someone steps onto the crossing. Even if you're approaching in the lane on the other side of the road you should stop and let them cross, rather than driving over the crossing (provided you've got time to stop, and you're meant to have been looking). No swerving round them if they're not very far across either. That's how it's meant to work at other pedestrian crossings too, even if lights are green and they're barely out into the other lane, as opposed to traffic starts flowing again and forces them back to the pavement. (Though like the pedestrians crossing minor roads at junctions rule it's not one you'd want to stake your life on.)
"Personally I definitely push the button. As a pedestrian you have just as much right to be on the road and under no obligation to be held up to keep traffic moving,"
True, and if there is traffic when I want to cross, I would also press the button. However if there is no traffic now and I can get across the street now, I will do that, rather than press the button and then have to wait for the lights to change. I'm not doing it for the traffic, I'm doing it for me!
"However if there is no traffic now and I can get across the street now, I will do that, rather than press the button and then have to wait wait for the lights to change."
I have a habit of pressing the button and waiting anyway. Lifetime habit of an asthmatic, it's good to just stop every now and then to catch my breath.
When someone gets run over they put up a fence to stop other people getting run over in the same place. A few days later the fence has a hole in it and people cross there again. The entire exercise is pointless for many reasons. One data point is not sufficient to identify a dangerous place to cross the road. It does hint at a popular place to cross the road and the possibility that some people are not as good at crossing the road as others. If they used more effective barriers, people not capable of looking both ways before crossing will simply find somewhere else to die.
That's the problem with these binary choices, they don't allow for much nuance. In the 'young vs. elderly' situation, people will assume the elderly is a pensioner whose costs to the society (pension, health...) outweigh their current contribution, whereas the young person still has a lot of potential.
Note that another scenario favours the higher-status (which often correlates positively with contribution to society) or executive-type person.
Also, if your 'selfie-taking, facebooking, instagramming twerp' is jaywalking, they'll be categorized as 'unlawful' and deservedly Darwinated.
In the 'young vs. elderly' situation, people will assume the elderly is a pensioner whose costs to the society (pension, health...) outweigh their current contribution, whereas the young person still has a lot of potential.
Costs and contributions? Bloody hell. Are you a Randian?
My decision to favour a child ahead of an elderly person comes from imagining that I, as a driver, chose the opposite, and the elderly person hauling me out of the blood-splattered wreck of a car to furiously berate me over choosing to kill someone with 70 years ahead of them rather than someone with maybe only five years left.
I notice that dogs came out more valued than criminals. This does not surprise me at all. Society needs someone to hate, someone who people can feel good about wanting to hurt, and criminals fit that bill. To the point where many react with disgust to the possibility of rehabilitation efforts - they'd rather see someone slowly tortured for decades at taxpayer expense than given the chance to reform.
Seriously, they included criminals in the list? How do they expect the car to identify criminals?
Easy, make it a crime to strike a self driving car with your person, call it "Battery of a semi-sentient corporate person". Then, anyone who gets run over, I'm sorry, who viciously assaults this defenceless robot's tyres with their face is already a criminal and we're OK with saving them from a future sojourn at the prison farm.
"I notice that dogs came out more valued than criminals."
Cars will interrogate the Police National Computer, cross-reference with Facebook profile pictures and actively go in search of criminals to mow down.
Of course, you will be able to pay a premium for your car to prioritise taking you where you actually want to go over random acts of carmageddon. Be careful though - if your car doesn't make its criminal quota you may be marked as a bad citizen.
How would the AI know if the person is male or female, a child or a short person, their age, their body composition (fat/muscle), their social status (wtf has this got to do with anything?), their occupation (executive? doctor? Manual labour not important enough?)
Sounds like every human needs to be microchipped for the AI to scan them and those that are not are primary targets for "accidents".
there used to be a 'points' system for running things over. Run over roadkill, 1 point. Smack a mailbox or nick a fence, 2 points. Pedestrian NOT in a crosswalk, 10 points. Pedestrian in a crosswalk, 50 points. If the pedestrian is using a cane, add 25 points. And so on. OK I'm just rectally extrapolating all of this but it was a real 'thing' back in the day, 1960's-ish.
then you have the AI calculate the 'points' and go for max score!
The goal wasn't to provide something an AI could actually use, but to find out how humans attach values to different outcomes. I am surprised that a rich person is more valued, but other than that it looked about like what you'd expect.
Obviously a car can't tell if someone is a young fit millionaire or old drunk homeless person, but it could tell the difference between a child and an adult, or child and raccoon.
However, I believe the trolley problem is irrelevant. IMHO, the rule should ALWAYS be "don't leave the road if it means injuring people". People who are on the sidewalk should have an expectation of safety - they are where they're supposed to be. If that means that a self driving car with five occupants plows into another self driving car with five occupants and all die, versus killing one fat old homeless criminal on the sidewalk, so be it. People in vehicles implicitly accept some amount of risk with that mode of travel in exchange for getting where they are going faster, not getting rained on etc.
Assuming the self driving car will never be at fault (big assumption I know) then why not just hit whoever was in the wrong?
I'd be pretty annoyed if a self driving car mounted the pavement and hit me because a bunch of kids ran out into the road! Whereas if I'd run out into the road it'd be my own fault.
There's a lot to be said for Darwinism
I can't help thinking that the idea of a car that's programmed to possibly sacrifice its driver in certain circumstances could be abused. Say James Bond is driving along and a group of 10 henchman deliberately jump out in front of him so that James ends up wrapped around a lamp post.
I am glad brighter brains than mine are working on this.
The trolley problem is, simply put, a waste of time philosophical circle jerk that has NO bearing on the real world. The problem as originally stated is so contrived as to have no bearing on real-world applications. The problem as applied to cars isn't worth bothering to think about. You really think humans, in their adrenaline-fuelled panic, make ANY sort of decision when it comes to what to crash into? Or that most have any higher priority than saving themselves? Do we really WANT a self-driving car to be making decisions on what to hit? The way I see it the priorities are:
1. Avoid hitting anything in the first place.
2. Hit something non-human over something human.
3. Hit something solid over something moving or weak that might give way.
4. Slow down as much as possible before hitting anything.
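Taken at face value, that strict ordering can be sketched as a simple cost ranking. This is toy code only: the categories, weights, and obstacle fields below are all invented for illustration, not anything a real vehicle would use.

```python
# Toy sketch of the strict priority ordering above: a lower cost means a
# preferred outcome. Every category, weight, and obstacle field here is
# invented for illustration only -- not a real control algorithm.

def collision_cost(obstacle):
    if obstacle is None:               # rule 1: hitting nothing beats hitting anything
        return 0
    cost = 1
    if obstacle.get("human"):          # rule 2: non-human over human
        cost += 100
    if not obstacle.get("solid"):      # rule 3: solid over moving/weak (as the comment has it)
        cost += 10
    return cost

def choose_action(options):
    """Pick the least-bad option; rule 4 (shed speed) applies regardless."""
    return min(options, key=collision_cost), "brake as hard as possible"

options = [{"human": True, "solid": False},   # pedestrian
           {"human": False, "solid": True},   # wall
           None]                              # clear escape path
target, action = choose_action(options)       # target is None: avoidance wins
```

Whether preferring a solid obstacle is actually wise (other commenters point out that a solid tree is about the worst thing a driver can hit) is exactly the kind of value judgment such a ranking quietly smuggles in.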
The decision on whether or not to hit a group or a single person isn't a decision a machine should make. Nor one a human will make in most cases.
I think that's kinda the point. At the moment we have no choice: what to hit is instinctual rather than a decision. But the computer in a car can evaluate a shitload of variables a lot faster than us, without considering its own health, so we now do have a decision to make.
I do kinda agree though: the car should follow the rules of the road. If the decision is kill a few kids who jumped out into the road or kill the single elderly fat occupant of the vehicle who was minding their own business, then the kids should get it. Darwinism at its best. If the car can make a decision that kills no-one then obviously it should do that; however, if it's a choice between killing an "at fault" party and a "non-fault" party, then those at fault should bite the dust.
"what to hit is instinctual"
I was going to point that out too. The results from this sort of experiment are likely to be entirely different to what happens out in the real world when you are driving and about to hit something. In particular I really doubt the "hit the fattie, not the fittie" result. I have noticed that when I was a lot fatter, walking around with my large backpack on, drivers of small cars tended to give way to me. Their thinking was likely "If I hit that guy, it's gonna do lots of damage to my car." Hit a skinny person, meh, you'll break them in two and they'll bounce down the road, barely scratching my duco.
The trolley problem is, simply put, a waste of time philosophical circle jerk that has NO bearing on the real world.
Not in terms of solving the problem it appears to be addressing but it sure does reveal prejudices, and show how people value others, who are considered more worthy, and those less so.
What concerns me is when it is suggested there should be some official accepted and codified ranking of acceptable prejudices and worth.
What would Jesus do?
The fact a computer can gather and process millions of data points in nanoseconds, gives programmers the opportunity to resolve an incident in the best manner possible.
Not to do this, when resources are available to limit the damage a speeding car can do, would be tantamount to manslaughter.
I envisage that most incidents managed by an intelligent system would result in no fatalities at all.
Of the few accidents where death occurs, I hope the system had been given the ability to protect as much life as possible, and yes, if that means running into an idiot who mindlessly dashes across a motorway, rather than swerve to then risk the lives of the occupants of the twenty or so vehicles around you, then so be it.
You might want to be mindful that the UK's Highway Code, has for a very long time, discouraged motorists from taking avoiding action when animals run into the road, for the very same reason.
The trolley problem is, simply put, a waste of time philosophical circle jerk that has NO bearing on the real world. The problem as originally stated is so contrived as to have no bearing on real world applications.
So, should we put folate in flour or not? The trolley problem as a philosophical question is not about an actual trolley. It's for looking at how we make different value decisions. Should you take an action that reduces overall harm, even if that leads to harm that wouldn't occur if you hadn't acted? (Folate also just an example here, as I'm not sure harm due to supplementation is actually supported by evidence, but you can apply this to other public health questions, or social policy decisions.)
Surely not - you want something that will absorb the energy, not something that will push it all back into you.
Apparently the very worst thing (for the driver) to hit is a tree. Unlike a post that has been put there, it usually won't fall or bend over. Unlike another car it won't move along the road or crush a bit. It just sits there and doesn't move and you hit it with all your kinetic energy.
The answer to the trolley problem is not to make the decision. You shift the liability to someone else.
In the usual example where you are a train driver, the answer is to follow the company policy (whatever that may be) to the letter. All outcomes are bad and any other course of action will land you with the blame.
In the case of the self-driving cars, the manufacturer won't want liability, so they will provide settings which the owner will have to configure. Luckily the owner may have their own potential get-out: the owner's insurance company will almost certainly mandate preferred settings in order to minimize the potential payouts.
"In the case of the self-driving cars, the manufacturer won't want liability, so they will provide settings which the owner will have to configure."
The manufacturer wouldn't want the legal liability of allowing a user to choose; that just makes them jointly liable when someone gets killed.
Wrong... the answer to the trolley problem is "there is no right answer".
You have no capability to assess two options quickly and conclusively in a short time, nor does a computer.
Both options are bad and, in reality, most people won't blame you for "choosing" either but the fact is you won't get to choose - it's essentially random in any crisis situation. Even choosing between "hitting the fence and taking the pedestrians out" versus "not smacking into the oncoming HGV myself" is a no-win situation of which people take both options all the time or, again, the third option "AARGGGH!" and bouncing off the truck because you couldn't decide and ricocheting into the people anyway.
The fact is that any reasoning applied is largely arbitrary (why would you save rich people instead of poor people?), thus such reasoning is pretty unnecessary anyway.
The only options to decide are "do something" or "do nothing". And the answer should always be "do something", which should be "brake". Where you're steering when you brake is largely undetermined anyway - try to change that too much and you skid and make the situation worse.
All the computer should do is ask itself "do I need to stop?" And that's it. Anything else is going to cause as many deaths as it saves.
I think I participated in this survey, and from the statistical breakdown I got at the end, which included things such as gender, age, social status, etc., I don't think it captured the rules by which I was making the decisions:
- my own autonomous vehicle should never decide to kill me;
- humans matter more than animals;
- the vehicle should not swerve into people who didn't step in front of it in order to save those that did.
A more general form of the third rule actually makes the first one redundant (assuming that self-driving cars obey the regulations): the car should not put people who aren't violating traffic rules at risk in order to protect those who are.
I like this, nice and simple.
Would there be no situations though where you'd accept the vehicle should kill you? Extending your rules you could say that the car should kill its occupants (who have put themselves at some kind of risk by choosing to travel at speed) ahead of pedestrians on the pavement who haven't.
" Extending your rules you could say that the car should kill it's occupants (who have put themselves at some kind of risk by choosing to travel at speed) ahead of pedestrians on the pavement who haven't."
Why would a self driving car ever hit pedestrians on the pavement? How would it get into that situation?
In order to avoid a tree that has suddenly materialized in front of your car (*), leaving you no time to brake but just enough to swerve towards the pavement.
(*) yes, the tree is a joke, but there have been recent reports of idle youths in Morocco throwing big rocks from bridges onto approaching cars, so it's not entirely hypothetical.
This is a pointless question.
The self driving car should not be performing demographic analysis on everyone/everything it might hit, it should be doing its best not to hit *anything*.
And as I have said before: unless it's programmed with keeping its passenger(s) alive as its highest priority, I ain't getting in it...
The Guardian interviewed Andrew Chatham, a principal engineer on Google’s self-driving car project, who said the problem has little bearing on actual design.
“It takes some of the intellectual intrigue out of the problem, but the answer is almost always ‘slam on the brakes’,” he said. “You’re much more confident about things directly in front of you, just because of how the system works, but also your control is much more precise by slamming on the brakes than trying to swerve into anything. So it would need to be a pretty extreme situation before that becomes anything other than the correct answer.”
It would seem that Google feel they have to make the choice; brake or steer. My 13 year old Mondeo has ABS and as far as I can tell I can brake and steer at the same time.
Maybe Mr Chatham is dodging the question or maybe the development of self driving cars is still at an early stage. We should not expect too much in the near future in the area of moral development by Google at least. How others in the field such as Uber are getting to grips with the new concept of morality is anyone's guess.
The problem with brake and steer is you only get up to 1g acceleration in any direction, so by changing direction (i.e. sideways acceleration) you have to give up some along-track acceleration (i.e. braking).
In almost every case you really want to lose that forward momentum as that is what causes the damage ultimately, hence the priority to brake. Of course there are some situations where a swerve could avoid a collision with a small object (human, animal) that appears suddenly and within the minimum braking distance, hence this discussion of what to do if the consequences of such a swerve would be another collision (e.g. mounting the pavement, hitting another "class" of small objects, or hitting a vehicle coming the other way).
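That one-g budget is the classic "friction circle": lateral and longitudinal acceleration share a single tyre-grip limit, so any steering input eats into braking. A back-of-the-envelope sketch, assuming a flat 1g grip limit and ignoring load transfer, tyre dynamics, and everything else a real vehicle dynamicist would care about:

```python
import math

G = 9.81   # m/s^2
MU = 1.0   # assumed overall grip limit of roughly 1g (a simplification)

def max_braking(lateral_accel):
    """Braking deceleration left over after spending grip on steering.
    Friction circle: a_lat^2 + a_brake^2 <= (MU * G)^2."""
    budget = MU * G
    if abs(lateral_accel) >= budget:
        return 0.0                    # all grip spent cornering, none left to brake
    return math.sqrt(budget ** 2 - lateral_accel ** 2)

def stopping_distance(speed_ms, lateral_accel=0.0):
    """Distance (m) to stop from speed_ms while also pulling lateral_accel."""
    a = max_braking(lateral_accel)
    return float("inf") if a == 0.0 else speed_ms ** 2 / (2.0 * a)

# From ~60 mph (27 m/s): braking in a straight line stops in about 37 m,
# while simultaneously pulling 0.5g sideways stretches that to about 43 m.
straight = stopping_distance(27.0)
swerving = stopping_distance(27.0, 0.5 * G)
```

Which is why "slam on the brakes" is usually the right answer: swerving doesn't just add risk of a second collision, it lengthens the stop.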
To some extent I agree with various commentards who say the AI should always stick to the road and no doubt would not get bought if it did not preserve its passengers. So basically the swerve-to-avoid should only be done if it is moving to another lane of the road that will not apparently cause a collision, otherwise those who walked out without attention have to face the best-case braking (which for an automated car is likely to be better than a human in terms of reaction times and willingness to reach anti-lock operation).
Ultimately this sort of morality debate is not what I worry about, it is the reliability of AI to actually drive correctly in the first place!
And this is the essence of what the real answer should be:
- Avoid all possible collisions outright.
- Mitigate collision as much as possible if not possible to avoid
It seems that while he did not discuss it directly, another possible principle is being discussed here - one of the key points of the trolley problem is not that you have to pick left or right - in fact you are already on one of those paths - you instead have the choice whether to change who you hit.
At which point, you have the option of actively choosing to run over someone, or ending up hitting someone in a terrible accident.
I don't know about you, but I wouldn't want to choose and so have that (or those) deaths on my hands, were I standing at the junction in the traditional problem, or programming an autonomous vehicle, and would focus on stopping or improving my sensory capability and keeping the vehicle within the limits of control given the outside situation.
Anything happening that cannot be avoided by doing so, e.g. a grand piano falling from the sky, sinkhole opening up or someone jumping off the curb? Its unavoidable and you react to it as best as you can given the emerging situation.
"At which point, you have the option of actively choosing to run over someone, or ending up hitting someone in a terrible accident."
Two things. First, most Trolley Problems change the value equation by making it between choosing to run over ONE person or letting it run over MANY people. Second, inaction is still an action, meaning if you're found to have been able to avert extra tragedy but didn't, you can be accused of a crime by inaction.
I'm not talking about inaction.
You still make every possible attempt to avoid the incident.
You just don't choose to kill something else.
That's the key here.
When we have a situation where an incident becomes unavoidable, do you make the decision to target that granny on the sidewalk?
As harsh as it may sound, if a kid ran out in front of a car within its braking distance, it's an accident. That car choosing to mount the pavement to hit Great Aunt Ethel instead? That's choosing to take a life.
And like I said, change the scenario a bit and the words of Spock spring to mind: "the needs of the many outweigh the needs of the few". If a BUNCH of kids ran out in front of the car too close to brake, and the only way to avoid killing them all is to swerve and hit Granny on the sidewalk, then it becomes harder to argue, as there IS such a crime as "negligent manslaughter", and times six or so it would be pretty much the end for you. So taking action to reduce the body count could be seen as a necessary evil, because the saving of the many lives would have to be weighed against the taking of the few.
Such a scenario is actually spelled out in US military policy post-9/11. If a loaded passenger jet is confirmed to be making a suicide run at a crowded place, the military is instructed to shoot down the plane to prevent the greater disaster.
"the needs of the many outweigh the needs of the few"
And this is how you justify a Soylent Green society in our future.
Granny-on-the-sidewalk expects to be safe. The kids crossing the road have deliberately thrown themselves into harm's way, tough luck for them.
By the way, how do you propose to quantify this? If 5 kids cross the road, how many would likely be hit if you can't stop? 2, 3? So it would be ok to sacrifice 2 grannies instead, but probably not 4?
And please give a lawful definition of "negligent manslaughter", nothing I've googled comes close to the situation you are describing.
If a loaded passenger jet is confirmed to be making a suicide run
Then the passengers are dead meat anyway, so the question becomes "do we allow them a few more minutes to make peace with themselves, or do we blow the plane now to save people on the ground?" Hardly a soul-crushing choice.
If God has decided you are going to get it, then the AI will do God's will, just like the dork-driver does now. Programming is futile.
The only answer is to work from home, shop on-line, and never go out. That way only the earthquake, forest fire, or dishwasher explosion can get you.
Looking at the example 'an autonomous vehicle experiences a sudden brake failure' - no, no it doesn't. The autonomous vehicle detects the error when started and refuses to drive. If it happens whilst driving, it uses engine braking to come to a stop.
Neither would the car be going fast enough to kill all the passengers, even if the pedestrians were crossing when they shouldn't. National-limit roads do not have pedestrian crossings (yes, I have seen people crossing motorways on foot; if they get hit it's Darwin in action).
Real life crashes will be a lot more chaotic.
I refuse to believe humanity would selflessly sacrifice itself in the event of imminent death. Sound the horn, drop gears, the car has already stopped accelerating, and the pedestrians will most probably get out of the way.
I agree, and a lot of these problems seem to be invented by philosophers who inhabit a pure moral universe with no actual reality. Also, one thing I learned at university was never, ever accept a lift from a philosopher. An engineer, a biologist or a chemist, no problem. Mathematicians iffy, philosophers no way.
Mine was a joke too, but if you have to signify it, it possibly wasn't worth making in the first place.
However, the point is that philosophers and mathematicians are specialised in abstract thought, and you really don't want someone in charge of a car who is thinking about the trolley problem instead of looking where they are going.
Redesign our municipal spaces so that dangerous moving traffic and pedestrians are separated. In the same way that right now we don't have train drivers having to choose which people to swerve at. Also everything that moves and potentially can crash is under control of same superglomulous AI. Casualties reduced to small percentage of passenger miles. Job done.
You mean like town planning from around 1960?
Now widely known as "planning blight". Discriminating against the elderly and disabled, who can't just vault those railings to cross the road and so are lumped with a tedious walk to the next official crossing. Repeat a few times and you might as well just say the whole town is out of bounds.
It isn't the oldies that are the problem - it's the generation snowflake cretins walking across the road engrossed in their phones who just walk out and assume that everyone has ABS, stability control etc - and their *#&+ing parents who disclaim all responsibility for their mindless fuckwittery and spend the rest of their miserable pointless lives campaigning for a 15mph speed limit which is less safe than a 60mph because you spend most of your time in 2nd staring at the speedo like a Passchendaele veteran at the wall - completely oblivious to tweenage fuckwit the second...
Personally I think Alec Issigonis had the right idea with my car (although it's technically hidden behind the front bumper) put a nice sharp cutter bar at mid shin height so if the gormless wazzock survives faceplanting the bonnet (and crotch planting a formed steel grille), won't be doing it again since their ankles will be somewhere around the area of the kidney exhaust box (approx the region just under the back seat) when you stop.. ADO17 for the win.. Ditto the gene pool.
They should restart the public information films like they did when I was a kid (a particularly nasty one starred my cars cousin, the Morris 1800). "This is your brain on Candy Crush... This is your brain on Audi Crush" with the teenage girls brain smeared 35ft up the asphalt like a cognitive burnout, in full 4K colour with a voice over by David Attenborough. It, and others, should be shown every morning assembly for a month and any parents who whine about little Conifer having night terrors should be put on a car accident clean up team.
Plus alcohol testing for pedestrians - a criminal offence of walking without due care & attention (or using an iPhone with intent(to be a fuckwit)).
And finally and most importantly.
IF IT'S MY FAULT FOR HITTING SOMEONE (LIKE DELIBERATELY) THEN FINE - BUT IF IT ISN'T DON'T PENALISE ME FOR SOMETHING THAT IS NOT MY FAULT.
As to what to hit in the next few years - anyone in the UK with US citizenship should be prioritised for the cull. Some of them will have voted for Donnie Dickwit after all, and I think international terrorism, international eco-terrorism and a level of sociopathic misogynistic twattery that has to be ingrained at a genetic level should be weeded out of our population and gene pool.
Firstly - if someone is shoved in front of a vehicle, be it the 10:15 to Swansea or the Wolseley (when there is no chance of stopping in either case) then the correct procedure is - find the shove-er and ascertain whether it was accidental or deliberate and proceed from that point. The driver is not to blame if someone is shoved in front of a vehicle so close that there is zero time to react. If someone is out in the middle of the road in clear sight having been shoved there, and I have time to double declutch into third and boot it and *then* I hit them while still accelerating (for a given value of the term) *then* I deserve the "gorillas in the mist" reboot..
As to scenario two. If I am in charge of a 1300kg vehicle capable of 90+ mph - there is NO FUCKING WAY I am relinquishing control to a computer driving system that has difficulty recognising cyclists, pedestrians, fast moving trees and virtually every other possible road hazard. That goes double when it's a 450hp SUV. The law will never catch up with this, there's so many many ways to trick the system and so many edge scenarios. So that doesn't apply. If you are happy to abrogate the responsibility for the safety of your loved ones and others to a computer on UK roads (unless every single cars converted overnight) then happy leaving the gene pool day. I might not be perfect, but the human brain is the best device to control a car or road vehicle - especially when everything else coming towards you at 60mph on the wrong side of the road is controlled by other human brains.
"I might not be perfect, but the human brain is the best device to control a car or road vehicle"
I beg to differ. Do you know just what percentage of traffic accidents are attributed to human error?
A: According to the NHTSA, it was 94% of all traffic accidents recorded, last they checked. Now WHO'S leaving the gene pool early?
A: the car detects one or more people it _might_ injure
- how about slowing down so the probability of any injury is < x%?
B: the car detects one or more people it _will_ injure (for whatever reason)
- it should not have been going that fast; you (the human in the car) are about to commit GBH / manslaughter
C: the car does not detect one or more people it _will_ injure (for whatever reason)
- see B; as the car did not detect anyone, there is no decision tree to traverse.
Cue bunfight over what x% is.
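The "slow down so the probability of any injury is < x%" idea can be sketched as a loop that sheds speed until a hypothetical injury-risk model drops under the threshold. The logistic curve and its parameters (0.5 slope, 12 m/s midpoint) below are invented numbers, loosely echoing the shape of real pedestrian-injury-risk curves:

```python
import math

def injury_probability(speed_ms):
    """Hypothetical injury-risk model: risk rises steeply with impact speed.
    The slope (0.5) and midpoint (12 m/s) are made-up parameters."""
    return 1.0 / (1.0 + math.exp(-0.5 * (speed_ms - 12.0)))

def target_speed(x_percent, v_max=30.0):
    """Highest speed (m/s, stepping down by 0.1) at which the modelled
    injury probability stays below x percent."""
    v = v_max
    while v > 0 and injury_probability(v) >= x_percent / 100.0:
        v -= 0.1                      # shed speed until under the threshold
    return max(v, 0.0)

# With x = 5% this toy model forces roughly jogging pace (~6 m/s) near
# pedestrians; even a lax 50% threshold still caps the car near 12 m/s.
```

Of course the bunfight is precisely over what x should be, and over whose injury-risk curve gets used.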
If I'm having an accident and swerve onto the pavement to avoid it, killing a pedestrian in the process, that would ordinarily be some kind of lesser offence, or possibly I wouldn't even be charged.
I'd say to the policeman, I saw that thing coming at me and just reacted... I never imagined there could be that much blood.
OTOH If I said to the policeman, I saw that thing coming at me and saw there was a fat guy on the pavement and thought, better him than me, so I ran him over. Then the policeman could potentially arrest me for murder.
This is what R v Dudley and Stephens is about. You can't use necessity as a defence for murder so you can't choose to kill one person over another even if your own life is at stake.
Since the control software in the car is created by humans I'd have thought those humans would be the ones who would be prosecuted if the car chose to kill someone (NB "chose to kill" as "opposed to killed by accident")
At the end of the day though, if you get in a car you're the one choosing to take that risk. You are the one who should take the consequences of that risk rather than palming that risk off to those around you.
"You can't use necessity as a defence for murder so you can't choose to kill one person over another even if your own life is at stake."
You sure about that? I seem to recall various cases of killing in self-defence being basically let off. In fact, no need to google for it -- there'll be another one along in a few months, I expect.
Self defense is one thing. Choosing an alternate victim is another. If there is a person planning to shoot me, and I have a gun, I can shoot them in self defense. If the person is going to shoot me, and I grab another person and shove them in front of me, that's murder on my part. In the case of a crashing car, if given the choice of "die or that person dies", selecting the latter could easily be considered murder, because the victim you chose was not at fault. Hence the person on the sidewalk.
So it's all well and good getting excited over auto cars, but can't we start with something simpler, like trains?
The last few derailments and accidents I have seen in the news have been due to drivers going too fast, or general failings of meat bags.
As trains can only go limited places (forward or backwards on the current track) surely we should play around and perfect the technology there first?
Might even see a train turn up on time!
"Last few derailments and accidents i have seen in the news have been due to drivers going to fast or general failings of meat bags."
The last two I saw were driverless trains derailing. Though you can always blame "general failings of meat bags": meat bags programmed the things.
Hmmm... unions and/or employment laws. You did see the strop over reducing the need for human door guards, eh? So if AI can't reliably detect an all doors shut situation; how will we program it to shut the doors on the fat pensioner and allow the young boy on?
Also can you recall London Taxi drivers losing it over Uber? This along with the endless diatribe of: "I'm a better driver than others - especially stupid robots" basically means it's only ever going to be a battle between selfish egos and common sense.
So this big survey is nonsense??
-No, our future sentient robot overlords will use it for selection during the annual slave pruning cycle. After all, we did all the difficult choosing, for them...
The Amtrak one last Christmas was caused by the fact they had not yet fitted the safety equipment to the track. Normally when a train enters a section of track it reads some data from the signals telling it the max speed for that section. The company building the track was going to be paid a bonus if they ran trains on it before the end of 2017, so they started running trains on it before the safety equipment had been installed.
I wonder if they considered a choice between an <unknown pedestrian> and the occupants of the AI-driven car? E.g., if there's a bunch of pedestrians in the way, who can all be saved if the car drives off the cliff?
Mr Musk might struggle to sell a car that had the right answer to that dilemma.
As the car got ever closer to the point of impact, it hurriedly requested data on the potential targets it had identified.
A split second later and the results were in:-
Target three, Bob, had no history of making compensation claims, and didn't have a litigious legal type among their known associates. The others, a pure breed Chihuahua and a council owned wooden bench, were linked to owners that spent a lot of their time suing people.
More pointless crap. Give the idiots (98% of humanity) an opportunity to voice their prejudices. Propagate the impression that an emergency system either could or should have an attempt made to use the "information" gathered. Be self-deluding enough to imagine that this might be possible in some meaningful way.
The German constitutional court found that any weighing up of value of human life is illegal as it offends the dignity of the people concerned. I can buy into that, who can judge whether person A is “better” or “more valuable” than person B? Applies only to people of course, animals should lose every time imho.
For a self-driving car, I would take this as prohibiting any deliberate action that would lead to someone getting hit who otherwise would not have been hit. I.e. mitigate the collision you are heading for rather than causing a different collision.
However, the consequence of this principle would be, for example, not shooting down a hijacked plane containing one or more innocent passengers, even if the plane is heading for a crowded stadium. They made a TV courtroom drama based on this scenario.
The pure trolley problem sounds cool and interesting, but is a bit of intellectual masturbation, innit? At least in the context of self-driving cars.
I wouldn’t mind if it was a problem that really happened and had to be solved. But has it _ever_ happened? How much energy is going to be spent on it while other AI driving issues are not addressed? But, hey, plenty of jobs for self proclaimed AI ethicists recycled from philosophy PhDs ;-)
If the car can spare the processing cycles to ponder this, why would it not be able to avoid the issue in the first place? In fact, as anyone working in tech knows, keeping it simple is one of the best ways to build performant, effective systems that don't fail often. So, if you did manage to code in the trolley problem, what effect would that have on the car's ability to make decisions in split seconds under more general conditions? Would it become less reliable, and less safe?
Not that there aren’t valid variations on it. Your car is about to hit a pedestrian crossing out of bounds. It can hit her, or swerve you into a wall: you won’t get hurt, but the car will be totalled. I’d say the car ought to be sacrificed, as long as you are not at serious risk. Now, what if it’s a dog? People do get into accidents over this stuff, unlike the pretend trolley issue.
The trolley problem certainly _does_ have large-scale real-life applications, such as allocating finite medical funding in national health care systems. The difference is that it is a real problem, and that the decision need not be made quickly.
But let’s not pretend it has much to do with real life self-driving AI’s current concerns.
Yeah, in the office we were saying: just hit the brake. Stop. Slow down.
It's like saying: you're nose-diving into the ground in a 747 - do you hit the hospital or the school?
Er, pro-tip: don't nose dive into the ground with a 747. It doesn't happen that often...
But it DOES happen. El Al Flight 1862 was a 747 that lost two engines, became unstable, and crashed into an apartment tower in Amsterdam.
Just because something doesn't happen often doesn't mean it doesn't happen at all. Sudden blindness (like a sudden fall or spray) can easily lead to a trolley problem or the like outside anyone's ability to control it.
How was El Al 1862 a trolley problem? As you said, the plane lost 2 engines, became uncontrollable and finally crashed in a tower. It doesn't appear that the pilots had to make a decision of whether to crash into the tower or the nearby school/hospital/prison/lawyers' office.
I was replying specifically to "Er, pro-tip: don't nose dive into the ground with a 747. It doesn't happen that often..."
It doesn't happen often, but that doesn't mean it doesn't happen at all. And IIRC the pilots were trying to turn back to Schiphol when they lost full control.
Going back to the subject at hand, I reply thus: edge cases don't STAY edge cases. And Murphy CAN strike.
"Smart" cars (AVs) don't have the trolley problem. That's not how driving decisions are made. The trolley problem has almost no relation to any real-life decision.
The AV will be making the decision pretty much as we do: "let's avoid the first critical thing first, and deal with the rest later". At most it will have a ranking of badness (fragile people, impact-absorbing vehicles).
It will (*should*) avoid the homeless woman walking the bicycle across the dark street, even if that means running into whatever is hidden behind the bush on the side of the road. Or the bus, which may be carrying 50 school children. It's certainly not going to try to model the collision and decide if they will all be killed, or barely notice the impact.
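To make the "avoid the first critical thing first, with a ranking of badness" idea concrete, here's a minimal sketch. All the hazard classes, scores, and function names are made up for illustration; no real AV stack works off a table this crude:

```python
# Badness ranking: higher means worse to hit.
# Values are invented for illustration only.
BADNESS = {
    "pedestrian": 100,   # fragile people
    "cyclist": 90,
    "car": 40,           # impact-absorbing vehicles
    "bench": 10,
}

def pick_maneuver(hazards, maneuvers):
    """Pick the maneuver whose worst remaining hazard is least bad.

    hazards:   {hazard_id: kind}
    maneuvers: {maneuver_name: set of hazard_ids that maneuver still hits}
    """
    def worst(hit_ids):
        # An empty set means nothing gets hit: badness 0.
        return max((BADNESS[hazards[h]] for h in hit_ids), default=0)

    return min(maneuvers, key=lambda m: worst(maneuvers[m]))

# Braking still hits the cyclist; swerving only clips the bench.
hazards = {1: "cyclist", 2: "bench"}
maneuvers = {"brake": {1}, "swerve": {2}}
print(pick_maneuver(hazards, maneuvers))  # swerve
```

Note there's no modelling of outcomes here, just a coarse priority order, which is the point: "avoid the worst class of thing first" is cheap to compute, unlike simulating the collision.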
You hit or kill part or all of my family and I will sue you and your car manufacturer for everything I can get.
That's the response I would expect; I see it when the police have crashed into people and cars during a car chase.
The whole pitch is that we are supposedly better off because the vehicle will perceive objects, inanimate or animate, living or non-living, better than we can, plan more efficiently and avoid any accident, all whilst not talking on the mobile phone, dropping a cigarette between its legs, etc.
What we have so far is vehicles that cannot tell a truck is broken down in the left lane and run up its rear.
Plan: do not hit anything, ever. Until this is achieved, stick to slow carts around a university or business park.
The way the world is going with congestion, we will be lucky in future if the cars are moving more than 2 metres an hour anyway!
Should we be taking it a step further and trying to get people off the road completely: elevated walkways, more barriers?
I'd be more worried about flying cars (yes, still waiting) zooming over your house. Now that is going to be a nightmare!
You thought hoons were bad doing burnouts or donuts down the road at 2am? Just wait till they are trying to do barrel rolls over your house. o.O
In reality, cars are not going to make these decisions, which are value judgements. They're simply going to try to avoid a collision with the object. If they swerve to avoid an animal and then a human also gets in the way and there is no way to avoid the human, they'll hit them. If hazards appear at the exact same moment, the car will try to avoid both, but if that's not possible, the laws of physics will dictate the outcome, not the car's logic.
Imagine the product liability on anything else: the product effectively making a positive decision to kill someone because they're less valuable than someone else? No way, Jose. It'll just be "avoid all collisions, until the laws of physics determine what you hit".
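"Avoid all collisions, no value judgements" reduces to something like this toy sketch (every name here is invented for illustration): the car never ranks targets, it either finds a clear path or brakes and lets physics decide.

```python
def choose_action(paths):
    """paths: {action_name: list of objects that action would hit}.

    Prefer any action that hits nothing. If none exists, just brake:
    no weighing of who is "worth" hitting.
    """
    clear = [action for action, hit in paths.items() if not hit]
    if clear:
        return clear[0]
    return "brake"

# A clear escape route exists, so take it.
print(choose_action({"continue": ["dog"], "swerve_left": []}))  # swerve_left

# Every path hits something: brake and let physics sort it out.
print(choose_action({"continue": ["wall"], "swerve_left": ["pedestrian"]}))  # brake
```

The liability point is baked in: there is no code path that trades one victim for another, so there is no "positive decision to kill" for a lawyer to point at.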
The "train tracks" problem is that someone decided to make a train that hurtles along a track at speeds that can kill, and puts no barriers up to stop a child on one track and a grandma on the other. THEN offers control to a clueless philosophy student to ponder over. ;)
The actual answer is don't let the AI get in a situation like that. OR put enough explosives on the car that it instantly evaporates and everyone is safe... oh, perhaps enough LO2 to neutralize the explosion? They are tech people, they must have a solution... right?
And someone should've already invented a hypercomputer to solve the Halting Problem, is what you're saying.
What the Trolley Problem demonstrates is that, sometimes, there's no right or even satisfactory answer. But you can be forced into them anyway. Now, imagine a computer in such a scenario, and it can get pretty complicated.
"OR put enough explosives on the car that it instantly evaporates and everyone is safe..."
Except the driver and the passengers. Though instantly vaporising something that big tends to damage things around it a lot. That's the "nuke it from inside, the only way to be sure" option.
According to El Reg, Mercedes sorted this out ages ago :
So with in-car problems answered, I'm assuming a Merc / BMW / Audi key fob will just emit a signal and the car will aim for the group with the least number of owners in it (or the most of the competitors' owners, looking at you, VW techs). If no signals are detected, maybe they can find a way to run over both sets of losers to teach them a lesson.
Biting the hand that feeds IT © 1998–2019