One from DARPA
In the early 2000s, Mike Sellers was working on social AI agents for DARPA. During one simulation, two AI agents named Adam and Eve were given a few basic skills. They knew how to eat, but not what to eat. When they ate apples from a tree, they felt happy. When they tried to eat wood from the same tree, they got no reward.
So far so good, right? Things started going haywire when a third AI agent, Stan, was introduced. Adam and Eve learned associatively, and because Stan was hanging around while they were eating apples, they came to associate him with both eating and the feeling of happiness.
Guess what happened next?
"At the time it was pretty horrifying as we realized what had happened," writes Sellers. "In this AI architecture, we tried to put as few constraints on behaviors as possible... but we did put in a firm no cannibalism restriction after that: no matter how hungry they got, they would never eat each other again."