Self-driving cars may actually learn how to drive well enough to be deployed without human oversight some day, legislation and society permitting. Waymo – previously known as Google's self-driving car project until it was spun out as a separate company last December – on Wednesday reported how its cars performed during …

Thursday 2nd February 2017 03:40 GMT inmypjs

So will crims be able to swipe a car "deployed without human oversight" by simply walking in front of it, forcing it to stop, then breaking in and manually overriding the system? Sure, the GPS, cams, and whatnot will warn Google et al., but the truck with a Faraday cage on the back that they slap it in might hinder the process.

Thursday 2nd February 2017 14:23 GMT Cuddles
Are the cars getting better...
or are the drivers getting used to them? If a driver takes control "to avoid an anticipated safety risk", it's entirely possible that there isn't actually any problem – simply that the driver doesn't trust the car to correctly cope with whatever they've spotted. After all, this is very common with human drivers too – people often get worried by unfamiliar driving styles and essentially try to take control by hitting imaginary brakes or gripping handles (and no, it's not just my driving). But the more time you spend with a given person driving, the more you get used to their style and begin to trust that they're not going to crash into every car and hedge you see. I'd be very surprised if the same effect didn't occur here: people who have spent more time being driven around by a car become more likely to trust it and stop taking control in situations where it isn't necessary.
That said, I'd be somewhat worried if they weren't also improving the technology, considering how much time and money they've spent on it. I just wouldn't be surprised if there's more than one effect at work here, and technology improvements are probably not the only reason for the reduced number of interventions.