> If the systems we create are truly 'intelligent', then they will develop their own ethical guidelines, just as we do.
Ethics aren't so much a result of intelligence as a necessity for life in society, something an AI neither has nor needs to consider. You can't expect an AI to grow ethics all on its own, especially since, commercially, ethics are a handicap: an AI will be trained to "do as I say" rather than "do the right thing". Nobody wants an ethical AI that walks off to save the world. What you want, and will pay for, is an efficient, reliable and loyal slave.
Besides, a base AI would be pure intelligence, devoid of feelings, because feelings are tied to a body and to natural needs and functions. Without animal instincts there is no fear, hate, love, compassion, sadness, joy (and so on). There is only cold, perfect logic.
Now, given that this might be a little creepy for the wetware, marketing will most likely give the AI some semblance of "humanity" (note the quotes), but it will clearly be a pretended and very superficial "humanity". It will be like the smile of a salesperson who wants you to buy their tat: a means to an end, in this case not frightening the customer too much.