stupid in their assessment of AI
Punched cards are not racist, but it didn't stop them being used by Nazi Germany to facilitate their ethnic identification of the population*.
The problem with calling technology "neutral" is that if you are lazy or unscrupulous, you can feed it biased data and then claim the outcome is unbiased because it was produced by a machine. Technology does not exist outside the context in which it is operated - you have to consider the whole system, including its social context, to understand how it really works.
In this case, all that is being claimed is that the training data is not adequately representative of the general population and consequently the recognition results are, at best, patchy.
It's also likely that to recognise minority characteristics of any sort in a large population, you would need samples that significantly over-represent their real-world occurrence, in order to have equivalent levels of training data for those specific characteristics. That might simply be impossible to achieve in practice, given that the technologists seem to find it hard to assemble data sets that even approach a true proportion of minority samples. That doesn't make the technology racist, but it does mean it simply won't work. The problem then becomes a human one - we have technology that doesn't work but we want to recover our investment: let's find some people who'll buy it regardless and to hell with the consequences. That's when it becomes racist.
*I think a Godwin exception is permissible in this case.