All the things listed as limitations of artificial intelligence are also limitations of humans.
It may well be the case that an AI couldn't identify a non-repeating way of filling an infinite plane with a given set of tiles, but how many people could? One in a million? Fewer?
Two book-pricing algorithms may have bid a book up to an absurd price, but remember that real people bid tulip bulbs up to an absurd price too.
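The mechanism behind that runaway price is easy to sketch: each bot reprices relative to the other, and the combined multiplier exceeds 1, so the price grows exponentially. A minimal illustration, using multipliers of the rough shape widely reported in the 2011 Amazon textbook incident (the exact figures here are illustrative, not a claim about either seller's actual code):

```python
def reprice(price_a, price_b, rounds):
    """Simulate two repricing bots reacting to each other.

    Bot A always prices at 1.27x B's price (e.g. it has the only
    copy in stock and wants a margin over the competition); bot B
    always prices just below A at 0.998x. Neither checks for a
    sanity cap, so the price ratchets up every round.
    """
    for _ in range(rounds):
        price_a = price_b * 1.27   # A prices well above B
        price_b = price_a * 0.998  # B slots in just under A
    return price_a, price_b

# Starting from a sane $35 list price, twenty rounds of mutual
# repricing already push the book into the thousands, because each
# full round multiplies the price by 1.27 * 0.998 ≈ 1.267.
a, b = reprice(35.00, 35.00, 20)
```

The point is that no single step is absurd; the absurdity emerges from the interaction, which is also a fair description of a tulip mania.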
As for motivation: yes, it's true that a super-smart, idealised computer wouldn't have any reason to want to take over the world. However, a general AI will most likely develop out of more restricted AIs and current systems like image-recognition neural nets, so its mind will carry all sorts of artefacts left over from random chance or oddities in the training sets it was fed. Again, exactly like humans, who are trying to make sense of the modern world using a device that happened to be good at survival on the African savannah. For humans, that mismatch produces strange things like religion and all manner of cognitive biases. For an AI, who knows what biases will emerge from its development.
Ultimately, our brains are machines. So in the worst case we use faster and faster computers to build more and more accurate simulations of our own brains, and eventually computers are guaranteed to become conscious. How could they not be, if they produce the same outputs, via the same internal logic, as a human brain given the same inputs? We know the end state we want, and we can train and evaluate an electronic brain much faster than we can train and evaluate a human, so electronic brains will develop much faster than ours. They are guaranteed to reach our level, and they are guaranteed to keep improving much faster, so AI will rule the world. It's just a question of how long it takes.