Reply to post:

Remember the Uber self-driving car that killed a woman crossing the street? The AI had no clue about jaywalkers

Anonymous Coward

I agree with that in principle, but there are a number of cases where it might not work. If a sensor detects a bike, loses it, then redetects a bike a second later that is 10m further forward, it could assume the bike is travelling at 10m/s. But is that the same bike or not? Was the initial classification actually a bike, and the second a motorbike?
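To put that concretely, here's a minimal sketch of the naive inference (hypothetical names, nothing from any real stack, assuming one position axis and perfect timestamps):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float          # position along the road, metres
    t: float          # timestamp, seconds
    label: str        # classifier output: "bike", "motorbike", ...

def naive_velocity(prev: Detection, curr: Detection) -> float:
    """Assume both detections are the same object and infer its speed."""
    return (curr.x - prev.x) / (curr.t - prev.t)

# A bike seen at x=0, then a detection 10m further on, one second later:
a = Detection(x=0.0, t=0.0, label="bike")
b = Detection(x=10.0, t=1.0, label="motorbike")

print(naive_velocity(a, b))  # 10.0 m/s -- but is b really the same object as a?
# If the classifier flipped from "bike" to "motorbike", the association is
# suspect: either the classifier was wrong once, or these are two road users.
```

The speed estimate is only as good as the assumption that the two detections are the same thing, which is exactly the data-association question the paragraph above raises.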

A human, in nearly all cases, knows instantly that the bike is the same one they saw a second ago, and also understands its likely path and direction of travel: if it is being ridden by a five-year-old it is likely to wobble and change course rapidly, whereas if it is being ridden by a sensible adult it is more likely to hold its course.

So it may well be that the programming is intended to be more advanced and act like a human: decide whether an object is the same one it saw before, predict its likely path based on object type and route, and, if it thinks it is a different object from the one it saw before, track it as a new object.
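A sketch of how that goes wrong (purely illustrative, not the actual Uber code): if a tracker starts a fresh track whenever the classification changes, it throws away its motion history, and with it any path prediction:

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    x: float      # position along the road, metres
    t: float      # timestamp, seconds
    label: str    # classifier output

@dataclass
class Track:
    label: str
    history: list = field(default_factory=list)   # (t, x) pairs

    def speed(self) -> float:
        """Speed estimate from the last two observations, 0 if no history."""
        if len(self.history) < 2:
            return 0.0
        (t0, x0), (t1, x1) = self.history[-2], self.history[-1]
        return (x1 - x0) / (t1 - t0)

def update(track, det: Detection) -> Track:
    # Hard reset: a changed classification is treated as a brand-new object,
    # discarding the motion history built up so far.
    if track is None or det.label != track.label:
        track = Track(label=det.label)
    track.history.append((det.t, det.x))
    return track

track = None
for det in (Detection(0.0, 0.0, "bike"),
            Detection(10.0, 1.0, "bike"),
            Detection(20.0, 2.0, "motorbike")):   # classifier flips here
    track = update(track, det)
    print(det.label, track.speed())
# bike 0.0 / bike 10.0 / motorbike 0.0 -- the flip wiped the 10 m/s estimate
```

Every misclassification resets the predicted path to "stationary, unknown", which is the hard fail described next.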

If the system were perfect, this would be a fine approach. It isn't perfect, in reality nowhere near, so it fails, and fails hard. The fail-safe in this scenario is the 'safety driver': something that removes the 'A' from 'AI'. But humans are also fallible, and this case shows just how fallible human behaviour is, for reasons we understand well: complacency, tiredness, monotony, etc. When your fail-safe is more fallible than your machine, it isn't going to end well.
