Running off the invisible cliff
"As the ex-Google machine learning expert Andrew Ng has sensibly pointed out, fearing a rise of killer robots is like worrying about overpopulation on Mars. You have to get there first."
I suspect that part of Musk's caution is based on the idea that we are unlikely to know exactly when we "get there". For all I know, we're already there, and the genie simply won't be put back in the bottle.
We have certainly already built systems that are inimical to human interests and extremely difficult to dismantle: they are so deeply embedded in social, economic, and political structures that undoing them would require little short of a revolution. Maybe Musk was doing the Hari Seldon-esque thing, playing out these forces over a 20-50 year span and finding that they conspire to make inevitable the development of an AI that is both uncontrollable and hostile to (some) human life.
In this matter, I'll trust Musk over Zuckerberg.