Re: AI everywhere, oh my
So-called deep-learning systems (such as AlphaGo) most certainly don't have explicit decision-making algorithms programmed into them: it is more "meta" than that. Rather, they have an explicitly programmed learning algorithm, which is trained on vast data sets exemplifying the "intelligent behaviour" the system is meant to achieve. AlphaGo, for instance, would have been trained on large archives of (mostly human*) Go games. Training (generally a very lengthy process) configures the internal information-processing logic of the system so as to facilitate generalisation - responding well in situations it has not encountered during training - while avoiding over-fitting to the training data, which would lead to sub-optimal responses on new inputs. During execution, the trained system then makes "decisions" which are generally utterly opaque to humans, including the designer of the learning algorithm.
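To make that train-then-generalise loop concrete, here's a toy sketch of my own (nothing remotely like AlphaGo's actual algorithm - just plain gradient descent fitting a made-up linear target, with a held-out validation set standing in for the guard against over-fitting; every name and number here is invented for illustration):

```python
import random

random.seed(0)

# A made-up "target behaviour" the learner is never told explicitly:
def target(x):
    return 2.0 * x + 1.0

# Noisy training examples, plus a held-out (clean) validation set:
train = [(i / 80, target(i / 80) + random.gauss(0, 0.05)) for i in range(80)]
valid = [(i / 20, target(i / 20)) for i in range(20)]

def mse(data, w, b):
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

# The explicitly programmed part: gradient descent on a linear model.
w, b, lr = 0.0, 0.0, 0.3
best_valid = float("inf")
for epoch in range(2000):
    gw = sum(2 * (w * x + b - y) * x for x, y in train) / len(train)
    gb = sum(2 * (w * x + b - y) for x, y in train) / len(train)
    w, b = w - lr * gw, b - lr * gb
    # Early stopping: halt once held-out error stops improving -
    # the classic guard against fitting the training noise.
    v = mse(valid, w, b)
    if v >= best_valid:
        break
    best_valid = v

# The trained model generalises to an input far outside the training range:
print(w * 2.0 + b)   # close to target(2.0) == 5.0
```

The point of the validation set is exactly the generalisation/over-fitting trade-off described above: we never optimise on it, we only watch it.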
To achieve any kind of performance on hard real-world tasks (and playing Go well, for example, is hard!), huge data-processing resources are required. While the theoretical basis for the design of such learning algorithms goes back at least to the 1980s, it is only much more recently that sufficient data-processing capacity to exploit their potential has become readily available - hence the recent deep-learning buzz/hype.
So no, it is not magic, but it is rather clever, and based on sound maths. Human expertise typically enters the picture in the artful (and often biologically-inspired) design of domain-specific problem encodings - such as the convolutional networks which have proved highly effective in visual processing - and in tailoring the architecture of the learning system to the problem at hand.
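As a minimal illustration of that convolutional encoding (again a toy of my own, not any real network layer): the same tiny filter is slid across the whole image - so-called weight sharing - which means the layer detects a local pattern wherever it occurs. That built-in assumption about vision is designed in by humans, not learned:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation of a conv layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            # The SAME kernel weights are reused at every position:
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-designed vertical-edge filter, applied to an image whose
# left half is dark and right half is bright:
image = np.zeros((5, 6))
image[:, 3:] = 1.0
edge_filter = np.array([[-1.0, 1.0]])  # responds to dark-to-bright steps

response = conv2d(image, edge_filter)
print(response[0])  # → [0. 0. 1. 0. 0.]  (strongest response at the edge)
```

In a real convolutional network the filter weights are themselves learned by the training procedure; only the sliding-window structure is the human contribution.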
Clearly, such learning systems are, at least for now, limited to highly specific problem domains (playing an - albeit difficult - game, maybe even controlling an autonomous vehicle). By comparison, we bio-machines have the benefit of several billion years of evolutionary history to configure our learning systems (and indeed our learning-to-learn systems), as well as lifetimes of supervised and unsupervised training. So no, I certainly wouldn't expect human-like intelligence to emerge from the deep-learning world anytime soon.
In particular, I suspect that this won't come about as long as AI systems are disembodied "brains-in-a-jar" with a limited interface to the outside world. After all, natural intelligence evolved in bodies embedded in, and interacting with, the physical - and social - world. When (if?) AI achieves the level of sophistication and versatility of, let's say, a housefly, I will be truly impressed - but I'm not holding my breath.
Nevertheless, I think it's unfair and short-sighted to sneer at the achievements of AI to date. My suspicion is that current technologies such as deep-learning may turn out to be crucial building-blocks in more sophisticated AI technologies of the future.
* I think I read somewhere (but may be wrong) that AlphaGo also trains on game data it generates by playing against itself - so-called self-play...