Reply to post:

Regulate, says Musk – OK, but who writes the New Robot Rules?

Lee D

Then it should TAKE NO ACTION.

Until it's something capable of reasoned thought such that it could explain its reasoning in a court of law (i.e. decades away from happening).

In your thought-experiment example, the machine has no concept of whether the 5 people who die if it does nothing are terrorists chasing the one innocent person who would die if it pulled the lever.

Whichever way around you put the lever (i.e. to squish or not squish either party/parties in the absence of further command), it cannot make that decision in a reasonable manner without contextual understanding of the implications.

Until it's capable of that reasoning, and it's proven in a court to be that capable, the MACHINE should not be left in any position where inaction will cause more harm than ANY SPECIFIC ACTION. This is why industrial controls are "fail-safe", etc.
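To make the fail-safe point concrete, here's a minimal sketch in Python (all names and the threshold are hypothetical, invented for illustration; this isn't drawn from any real industrial control API): whenever the controller lacks the context or confidence to justify an action, it drives the system to its designated safe state rather than guessing.

    from enum import Enum, auto

    class Action(Enum):
        PROCEED = auto()
        HOLD = auto()  # the designated safe state: de-energise, stop, do nothing

    def decide(sensor_confidence: float, context_understood: bool) -> Action:
        """Fail-safe default: any doubt resolves to the safe state.

        The controller only acts when it can both trust its inputs and
        justify the action; otherwise it holds, much as an industrial
        interlock drops to de-energised on loss of signal.
        """
        CONFIDENCE_THRESHOLD = 0.99  # hypothetical bar; real systems set this via safety analysis

        if not context_understood:
            return Action.HOLD
        if sensor_confidence < CONFIDENCE_THRESHOLD:
            return Action.HOLD
        return Action.PROCEED

    # A degraded sensor or an unmodelled situation never triggers an action:
    assert decide(0.5, True) is Action.HOLD
    assert decide(1.0, False) is Action.HOLD

The point of the pattern is exactly the one above: the machine is never left in a position where its default is an action it cannot justify.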

Even then, it's a horribly contrived situation with no right answer (i.e. even a human would struggle, having to make a very, very quick split-second decision and get the right answer, e.g. squishing the cop chasing the group of muggers instead of the muggers because it's "fewer people dead", and a court would recognise that and hold them pretty blameless).

It's either responsible for all its own actions (in which case it gets brought before a court as an independent entity and has to find its own representation, etc., and the manufacturer won't defend it or take responsibility for it), or it's not (in which case it's a machine made by a company which gave it poor defaults and put it into a situation where it was required to think when it wasn't capable of that).
