Re: Some seriously flawed thinking there...
>> “If we actually succeeded in creating machines that were intelligent, how would we ensure that they would be controlled and friendly?
> By definition we couldn't. To be intelligent, an entity needs to be able to make its own conclusions and decide its own actions.
By definition? That implies we're agreed on a definition, which we're not.
But let's define an AI as "something capable of creating new knowledge and new ideas, devising ways of testing them, and thereby amplifying the human ability to do research." Even then, why does it need the ability to decide its *own* actions? Couldn't it just issue a list of instructions? So, if it decided some particular theory deserved investigating, it would explain useful ways to do so.
Couldn't we *use* such an intelligence without giving it any physical ability... a pure, virtual intelligence? But then, how do we firewall the damn thing? Can knowledge be firewalled?