Re: Missing the obvious
Would *I*, as a qualified driver, be allowed to operate a remote-controlled car on a public road, under normal driving conditions, entirely via remote control?
I believe the answer is no.
So why does an untested "AI" bot get to rely on remote control as its only safety measure if the electronics fail?
And if I did do that, and it killed someone, would I be liable for not being in control of it (even though I couldn't override the controls or compensate, despite seeing the accident coming), or would the AI creator be held liable?
It's a stupid idea solved by simple testing procedures. Allow them with a supervisory driver inside the vehicle. When they prove themselves (i.e. never needing any intervention), then have them do several thousand miles unaided (but with remote vehicle control). Then ramp up slowly.
But, to be honest, the REALLY stupid thing is the venue. Let the devices prove themselves in a simulated environment (i.e. something that looks and works like a real road, not some on-screen fake 3D stuff) off public roads, then on public roads at slow speeds (e.g. moped speeds only), then on motorways, then in extreme conditions, etc. THEN you can authorise a full trial on public roads with no restrictions on what road/weather they can drive in. Not before.
I would be more than happy to let automated cars on our roads today. With a 20mph limit, no motorways, and a human driver behind the wheel. If they can't cope with that without interventions, they shouldn't be allowed anywhere NEAR 70mph, a motorway or the public, and certainly not unsupervised.
They're learner drivers, at best. A human learner can't legally take the instructor with dual controls out of the car before passing their test - or, at the very minimum, must have a qualified driver willing to take the blame at all times - so why should an untested technology leapfrog that requirement? Not to mention motorway driving, licences they can lose points on, etc.