Reply to post: Conclusion is wrong

How machine-learning code turns a mirror on its sexist, racist masters

FF22

Conclusion is wrong

Problem is: the fact that some attitudes or phenomena are associated more closely or intensely with one race or gender than another is, in itself, proof of neither racism nor sexism, just as Paris being more closely associated with France than with England is not the result of some form of nationalism. It is simply a fact and a valid observation, and the same could very well be true of any or all race or gender "stereotypes".

Only if they could prove that those associations were or are unsubstantiated, and are purely the result of prejudice or discrimination, could they demonstrate racism or sexism. Until they do, the results do not actually mean or prove what they are trying to (falsely) conclude from them.

And do not even get me started on how the AI they were using (or any current "AI", for that matter) could not possibly have understood the true meaning of the texts it was fed. It would most likely have classified even anti-racism and anti-sexism material (which we, as humanity, have produced in large quantities over the last 50-60 years or so) as sexist or racist, at least in this analysis. Those texts, too, contain an abundance of words associated with sexism or racism in close proximity to one another, even though the texts themselves are the antithesis of those ideas, and their very existence in large numbers is counter-evidence to the claim that those ideas are widespread and/or accepted in society.
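To illustrate the point (this is a minimal toy sketch, not the researchers' actual method): a pure word-proximity count cannot tell endorsement apart from rejection, because a sentence asserting a stereotype and a sentence denying it place the very same words near each other. The sentences and window size below are made up for the example.

```python
from collections import Counter

def cooccurrences(text, window=5):
    """Count unordered word pairs occurring within `window` words of each other."""
    words = text.lower().split()
    pairs = Counter()
    for i, w in enumerate(words):
        for v in words[i + 1 : i + 1 + window]:
            pairs[frozenset((w, v))] += 1
    return pairs

# One sentence endorses a stereotype, the other explicitly rejects it.
endorsing = "girls are bad at math so girls avoid math"
rejecting = "the claim that girls are bad at math is false"

key = frozenset(("girls", "math"))
# Both texts produce a nonzero girls<->math association, even though
# only one of them actually asserts the stereotype.
print(cooccurrences(endorsing)[key], cooccurrences(rejecting)[key])
```

Any method built purely on such co-occurrence statistics inherits this blindness to stance, which is exactly why a large corpus of anti-racist writing would register the same associations as a racist one.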
