Re: Not sure this is so impressive, and this is dangerous...
Computers, even massive systems like Google's, don't really have the chance to perform actions that affect the world around them.
I don't think this is correct. One trains a neural network by "rewarding" it (+n) for getting decisions right, and "penalising" it (-n) for getting them wrong. It has a built-in imperative to try to maximise its score. If it has any consciousness at all (I hope not), that consciousness is of a virtual environment of stimuli, chosen responses, and the consequences of those choices. (It would have to be a pretty darned smart virtual critter to start suspecting that it's in a virtual environment embedded in a greater reality. Human-equivalent, I'd hazard.)
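To make the "maximise its score" point concrete, here's a minimal toy sketch (not any specific system's training code): an agent with two possible responses in a virtual environment gets +1 for the right one and -1 for the wrong one, and its value estimates converge accordingly.

```python
import random

# Toy sketch of reward-driven learning. Response 0 is "right" (+1 reward),
# response 1 is "wrong" (-1 penalty). The agent keeps a running value
# estimate for each response and learns to maximise its score.
random.seed(0)
values = [0.0, 0.0]          # estimated value of each response
rewards = [+1.0, -1.0]       # the environment's fixed feedback
alpha, epsilon = 0.1, 0.2    # learning rate, exploration rate

for step in range(200):
    # Mostly exploit the best-looking response, occasionally explore.
    if random.random() < epsilon:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: values[a])
    # Nudge the estimate toward the reward actually received.
    values[action] += alpha * (rewards[action] - values[action])

print(values)  # response 0's value ends up near +1, response 1's below 0
```

All the agent ever "experiences" is this loop of stimulus, choice, and score, which is the point: its whole world is the reward signal.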
A very simple life-form (an earthworm, say) can be trained to associate certain unnatural signals with food, and others with a mild electric shock. It will learn to distinguish the one from the other. Just how is this different? If you attribute self-awareness to an earthworm but not to the neural network model, move down to a less sophisticated organism. It's possible to train an amoeba, even though it lacks a nervous system altogether!
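The parallel can be made explicit with a delta-rule sketch of associative conditioning, the same error-driven update as above, with no claim about real earthworm physiology. Signal A is always followed by food (coded +1), signal B by a shock (coded -1):

```python
# Toy delta-rule model of conditioning (a Rescorla-Wagner-style update,
# purely illustrative). Association strengths move toward the observed
# outcome on every trial, exactly like the network's reward-driven update.
alpha = 0.2                      # learning rate
assoc = {"A": 0.0, "B": 0.0}     # association strength per signal
outcome = {"A": 1.0, "B": -1.0}  # food vs. shock

for trial in range(50):
    for signal in ("A", "B"):
        assoc[signal] += alpha * (outcome[signal] - assoc[signal])

print(assoc)  # A approaches +1 (approach), B approaches -1 (avoid)
```

Whether you run this update in silicon or in a worm's ganglia, the learning rule is the same shape, which is why drawing the self-awareness line between the two is so awkward.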