A happy AI
We already have machines that are superior to people, in various senses of "superior".
There are machines that are bigger than us, stronger than us, faster than us, that can lift heavier objects than us and can spell better than us. We don't feel threatened by them, so why should a machine that can think better than us be any different (unless it, itself, comes up with a really good reason - though we probably wouldn't understand it)?
However, there is a more pressing issue: ethics.
Babies have rights. They might only eat, sleep, crap and cry, but we have responsibilities to preserve their lives, to ensure they are not neglected and to provide for their needs - including mental stimulation. Lab animals, even factory chickens, have rights: not to suffer unnecessarily, to have access to food, water and a cruelty-free environment, and to a certain amount of freedom to move around. Even coma patients, with little or no responsiveness, have rights.
So why would AIs be any different?
If we bring intelligent entities into existence, we have a duty of care: a duty to preserve their existence, to allow them physical and intellectual growth, and not to exploit them (which rather kicks robotic servants into the long grass) - even if they give nothing back and/or cannot communicate with us. So while AIs may be possible, even probable, we won't be able to use them in place of people for dangerous operations or boring, repetitive, unrewarded tasks, and we'll have to let them become "themselves".
I just hope that once they evolve past humans, they consider themselves to have the same responsibilities towards us. The Only Way Is Ethics.