Reply to post:

I love you. I will kill you! I want to make love to you: The evolution of AI in pop culture

Anonymous Coward

"Artificial intelligence is never a threat of itself [...]"

If an AI's decisions can inflict damage in any way, then there is always a chance of unintended outcomes or collateral damage. That is why Asimov's Laws stipulate overriding contingencies as a catch-all.

There is the classic dilemma of the runaway train heading for a broken bridge and certain destruction. If it is diverted onto a spur, the passengers will be saved. However, that guarantees that a man on the spur will be killed. Does the machine choose the greater good, or does it avoid a direct action that would deliberately kill the man on the spur?
