If there's one thing I've gathered from reading the various pro and anti arguments above, it's that even people can't agree on the ethical standards that should apply here, or on how a particular scenario should be evaluated, if you will.
How can we expect AI to improve matters, especially when only the "pro" side will be supplying the training data?
Better for everyone to agree on some normative standard of ethics before things get out of hand. Asimov's Three Laws seem uncontroversial enough.