Re: "Our AI systems must do what we want them to do"
"If you ask me, for AI to earn the 'I', it must be able to 'understand' and handle situations and objects of which it has no prior experience or specific rules. As humans, we do this by analysing parts that we recognise but haven't necessarily seen together and weigh up whether what we know about one object (e.g. the behaviour of a person) is more important that what we know about another object (e.g. the location). We make a 'judgement call'. Or, we try to understand a situation or object be analogy with another situation or object we are familiar with."
But as with the ending of 2001: A Space Odyssey, what happens when the AI, which would likely have less experience to draw from than an adult human, encounters something totally outside our realm of understanding? Indeed, what happens when WE encounter the same: something for which NOTHING in our experience and knowledge can prepare us?
Or, on a similar note, paradoxical instructions. In our case, we have to take conflicting instructions on a case-by-case basis, determining (sometimes by intuition, something AIs would probably lack) which, if any, of the conflicting rules apply. Example: you're told to put stuff in the corner of a circular room (which has no corners), and there's no one around to clarify. What do we expect an AI to do when it receives a paradoxical or conflicting instruction?
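For what it's worth, one defensible default is for the agent to notice that an instruction's precondition simply can't be satisfied in its environment and ask for clarification, rather than guess or (HAL-style) silently resolve the conflict on its own. Here's a minimal toy sketch of that idea in Python; everything in it (the Environment/Instruction classes, the execute function, the "corner" feature) is hypothetical and just illustrates the circular-room example, not any real system:

```python
from dataclasses import dataclass

@dataclass
class Environment:
    """Toy world state; `corners` is empty for a circular room."""
    corners: list[str]

@dataclass
class Instruction:
    action: str
    target_feature: str  # world feature the action needs, e.g. "corner"

def execute(instruction: Instruction, env: Environment) -> str:
    # Find locations in the environment that satisfy the instruction.
    candidates = env.corners if instruction.target_feature == "corner" else []
    if candidates:
        return f"{instruction.action} at {candidates[0]}"
    # The instruction is unsatisfiable as stated: rather than guessing
    # or halting, surface the conflict and request clarification.
    return (f"cannot {instruction.action}: no '{instruction.target_feature}' "
            "exists here; requesting clarification")

if __name__ == "__main__":
    circular_room = Environment(corners=[])
    print(execute(Instruction("put the boxes", "corner"), circular_room))
```

Obviously real instructions don't come pre-parsed into tidy preconditions like this; the hard part is exactly the "intuition" step of recognising the conflict in the first place.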