Re: Working out what AI is thinking and why
Your argument is valid. One minor correction, though: it's not just true of AI; it's also true of humans. They may be "the most powerful computing resources available" (your words), but it can be a bit of a bugger figuring out why they did what they did. Even when you ask them, they may not know.
According to one school of thought, we don't actually know why we do things: one part of our neural net decides to act, and then our consciousness comes up with a reason afterwards. This is particularly obvious in children. Remember doing something wrong as a child, being asked why, and answering that you didn't know? The adults refused to believe you (they must have forgotten similar events from their own childhoods), so they kept pressing until you invented a reason. As you grew older, you invented reasons more or less automatically, because you had become used to having to explain your actions, and eventually adopted the delusion that those invented stories were the real causes. Neurologists have shown that the parts of the brain involved in conscious thought come into play after the parts responsible for initiating actions (Libet's readiness-potential experiments are the classic example).
So yeah, to quote you again, "transparency isn't their strong point." Or ours. Unlike AIs, we are capable of inventing explanations, but invention is all it is: informed speculation about our own actions, with more knowledge of internal state to go on, but speculation nonetheless, not fact.
The best we'll be able to do with AI is keep a record of all the inputs. That will at least tell us whether the AI was at fault in a given situation; then we'll have to find training that eliminates the error.
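For what it's worth, that input record doesn't need to be anything fancy. Here's a minimal sketch in Python (the function name and log format are my own invention, not any particular tool's API): append each input to a JSON-lines file with a timestamp and a hash, so you can later reconstruct exactly what the model was given and verify nobody edited the record.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_model_input(log_path, prompt, model_name):
    """Append one model input to a JSON-lines audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "prompt": prompt,
        # The hash makes it cheap to check later that a stored
        # prompt hasn't been altered after the fact.
        "sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

One line per input, append-only: when something goes wrong you grep the log for the session in question and replay the inputs, rather than arguing about what the model "must have" seen.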