Re: AI is an approach, not an outcome
> even the famous ‘Turing Test’ is set in a social context - and computers have no idea about that, and are nowhere near achieving it
There are chatbots which have beaten human judges in Imitation Game (aka "Turing Test") challenges. Those challenges are inevitably limited - they have time limits, at least - and given, say, several months to interact with members of an Imitation Game panel, the judges would probably eventually distinguish the participants correctly, at least with decent probability. But under the terms in which those contests were conducted, the 'bots won.
People who actually work in AI / ML are not particularly interested in those results, because they're not particularly interesting. Turing didn't intend for people to hold real Imitation Game events. It's a philosophical thought experiment.
Basically, it's an argument for the sort of view of intelligence that might emerge from the American pragmatist school of epistemology: we know an entity X is a member of class Y because it exhibits the visible attributes of members of that class. We treat things as black boxes and concern ourselves with how they interact with the world.
It's interesting to contrast Turing's position with John Searle's in his Chinese Room argument, which is essentially a logical-positivist and phenomenological one. Searle says, in effect, "I'm not sure exactly what I mean by 'thinking', but this description of what one approach to AI is doing isn't it". (Logical positivism asks "what do we mean by the term 'X'?", and phenomenology asks "what are we doing in our minds when we do Y?".) So Searle does want us to consider what's happening in the box, and whether we think it might be similar to what seems to happen in our minds.
It's mildly ironic that the Englishman Turing leaned toward an American philosophical school, while the American Searle leaned toward one most closely associated with the UK. But then we hope our better thinkers will reach outside of whatever's popular in their own playgrounds.
Robert French, among others, has pointed out (in a piece in CACM some years ago) why the Imitation Game isn't a useful practical test of machine intelligence. Appealing to it at this point in the game doesn't really help, except as a touchstone to illuminate your philosophical position.
Aleksander's definition of intelligence, which you summarized above, won't satisfy everyone, but it's one that can be argued for. I don't have any great objection to it myself, though I'm not ready to endorse it either. It's interesting to note that it combines logical-positivist and phenomenological criteria (items 1-5, for the most part) with a pragmatic one (6).