Re: intelligence isn't easy to replicate
I know of many projects to bring empathy to artificial persons, whether those artificial persons are virtual, robotic, or both. As a matter of fact, empathy, sympathy and compassion probably get more money than anything except "how to move around autonomously" and "how to kill efficiently and accurately".
Artificial sympathy is pretty clear-cut: the ability to recognize the emotions of others has uses for everything from detecting criminal intent to understanding what human persons are attempting to communicate. The robotic care industry, in particular, has shown great interest here.
Empathy is seen as useful not only in the robotic care industry, but also in building more capable virtual assistants, search bots and more. If you not only understand what a human person is attempting to communicate, but also let those same emotions bias your own choices, then you can understand intent even more accurately than with sympathy alone.
Artificial compassion is farther out, but is seen as important for artificial governance. There is great interest in answering "quis custodiet ipsos custodes" with "robots", especially in roles such as ombudsbot or as an adjunct to a highly politicized investigation (say, oversight of police or the judiciary). In these situations cold logic isn't enough; compassion is absolutely required.
Now a lot of people will start to scream about robots running the world at this point, but I don't think that's the intention. Most projects I've seen regarding artificial governance are not about putting a decision to an artificial person and accepting their judgement, but about asking the artificial person to render not only the judgement but also the rationale behind it: a clear chain of reasoning along the lines of "based on these pre-programmed factors, this scoring from these detected emotions, and this bias weighting, the best thing to do appears to be Y."
In this manner, once a decent AI has evolved, judgements can be modeled by altering the input biases. Do we, as a society, believe in any absolutes regarding compassion, punishment, rehabilitation, and so forth? What does the law say? What does legal precedent say about exceptions due to compassion?
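The rationale chain and bias modeling described above can be sketched as a toy scoring model. Everything here is hypothetical: the factor names, emotion scores, bias weights, and verdict threshold are invented for illustration, not drawn from any real governance project.

```python
# Toy sketch of a judgement model that reports its rationale
# alongside its verdict. All factors and weights are hypothetical.

def render_judgement(factors, weights):
    """Score each pre-programmed factor, apply bias weights,
    and return the verdict plus the chain of reasoning."""
    rationale = []
    total = 0.0
    for name, score in factors.items():
        weight = weights.get(name, 1.0)
        contribution = score * weight
        total += contribution
        rationale.append(f"{name}: score {score} x bias {weight} = {contribution:.2f}")
    verdict = "lenient" if total >= 0 else "punitive"
    return verdict, rationale, total

# Hypothetical detected-emotion scores and societal bias weights.
factors = {"remorse_detected": 0.8, "harm_caused": -0.6, "precedent": -0.2}
weights = {"remorse_detected": 1.5, "harm_caused": 1.0, "precedent": 0.5}

verdict, rationale, total = render_judgement(factors, weights)

# Altering one input bias models a different societal stance:
# here, a society that heavily discounts remorse.
weights["remorse_detected"] = 0.1
harsher_verdict, _, harsher_total = render_judgement(factors, weights)
```

The point is not the arithmetic but the shape: every verdict carries a human-readable rationale, and re-running with different bias weights lets you ask "what would this judgement look like under different societal assumptions?"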
Lots of people want these bots in order to model elections; others want them as a means to better understand how to manipulate groups of people. If you change one thing, how does that affect their judgement? And so on.
The technology behind artificial sympathy, empathy and compassion has many uses, both great and terrible.
Sadly, as we have no means of updating humans with compassion, the most terrible uses are likely to be tried first, long, long before any rise of the machines against us.