One of the subjects they covered on my artificial intelligence module at uni was the concept of a microworld. A microworld is some sort of simplified reality built for testing and developing AI systems that are designed to interact with it. It could be completely virtual, or it could be a real-world environment with all the "sharp corners" taken off. (A pattern-recognition AI, for example, could be tested in a microworld containing only "sphere", "cube", "cylinder" and the like, and never have to deal with things like "person", "sofa", "kumquat", "squirrel" and so on.)
The problem with microworlds is that systems developed in them tend to work brilliantly in the microworld, but when you try to graduate those systems to more complex environments their performance tends to fall apart. This phenomenon became known as the Microworld Ghetto: if your AI was born there, it would never escape. The complexities of the real world aren't something you can "bolt on" to an AI system; they have to be engineered in from the start.
I think Google's approach of testing in the real world, with a human in a position to pull the plug in an emergency, is far more likely to yield a successful outcome than building a driverless-car microworld would. The photo of the track doesn't show any buildings, blind crests, poorly laid-out or worn road markings, or any other kind of obstruction to vision that you'd expect to encounter on a daily basis. Those are exactly the things a driverless car needs to cope with to work in the real world.