Good job, everyone. We're making AI just as tediously racist and sexist as ourselves

Artificial intelligence can inherit the same racial and gender biases as humans do when learning language, according to a paper published in Science. AI and machine learning are hot topics. Algorithms are becoming more advanced, providing us with better internet searching and recommendations to potentially help us diagnose …

  1. Crazy Operations Guy

    "be sure not to look at names or addresses on resumes"

    One of the things my employer did was to create a sort of pre-screening group in HR that would accept resumes and do cursory checks on them (making sure we hadn't fired them before, that they're real people, etc.). The resume would then be handed to the hiring manager sans identifying information. The hiring managers would then send back the ID numbers of the people they felt were qualified. HR would then schedule times for a first-round interview between the candidate and the hiring manager. The first interview would be performed over a text-based channel where no one's identity is known to the other party. Only after the candidate passes that round of interviews are their identities revealed to the hiring managers so that the second-round, in-person interviews can be conducted.

    Not perfect, but we do have a much more diverse, and productive, workforce than before.
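
    A minimal sketch of what that pre-screening step might look like in code (the field names here are hypothetical, not the poster's actual system):

    ```python
    import itertools

    _ids = itertools.count(1)

    def anonymise(resume: dict) -> tuple[int, dict]:
        """Strip identifying fields and return an ID plus a blind copy,
        roughly as the HR pre-screening group above hands resumes on."""
        identifying = {"name", "address", "email", "phone"}
        blind = {k: v for k, v in resume.items() if k not in identifying}
        return next(_ids), blind

    cid, blind = anonymise({
        "name": "A. Candidate", "address": "1 Example St",
        "skills": ["python", "ops"], "experience_years": 7,
    })
    print(cid, blind)  # the hiring manager sees only the ID and blind copy
    ```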

    1. Anonymous Coward

      Re: "be sure not to look at names or addresses on resumes"

      I selflessly run a control for these sorts of experiments by only hiring nubile hot chicks. Our workforce is pretty diverse too: blondes, brunettes, the odd redhead, etc... Some of them are even in their thirties, but we keep the oldsters in an office known as the 'haggery'.

      1. Anonymous Coward

        Re: "be sure not to look at names or addresses on resumes"

        "I selflessly run a control for these sort of experiments by only hiring nubile hot chicks"

        What do you use them for now that we have bean-to-cup coffee machines?

      2. Anonymous Coward

        Re: "be sure not to look at names or addresses on resumes"

        "...we keep the oldsters in an office known as the 'haggery'."

        Progressive. We just send them on an all-expenses-paid trip to Switzerland for 'processing'. Those who resist get a bag containing $10k in cash and a one-day head start in our annual team-building exercises.

    2. Anonymous Coward

      Re: "be sure not to look at names or addresses on resumes"

      So HR enforces a text-based first interview. Wow. That doesn't sound too efficient. Mind you, I once interviewed with a firm whose HR department used graphology to screen candidates.

      I have even more concerns about your HR team anonymizing CVs before the interviewers look at them. According to research, a lot of CVs get rejected in 6 seconds and most are rejected in 30.

      1. Korev Silver badge

        Re: "be sure not to look at names or addresses on resumes"

        I think that HR anonymising the CVs and performing basic screening* are very sensible things to do. If they did things like this, then they'd be at risk of appearing useful, so they may not want to do it...

        *Basic screening only though! I had loads of problems with an HR guy who couldn't grasp that I was more interested in the person with low qualifications but demonstrated potential than in the person with a PhD who followed someone's script.

    3. Anonymous Coward

      Re: "be sure not to look at names or addresses on resumes"

      "Only after the candidate passes that round"

      That type of approach simply delays the inevitable in my experience. Managers will still hire who they want.

      It's generally not hard to bypass such systems by, for instance, matching dates/job titles/company names from LinkedIn to find profiles and filtering out the "unwanted characteristics".

      You also sometimes see this in a nepotistic fashion, where minorities will employ only other similar minorities... For instance, I have experience of a certain housing department where nearly everyone was Nigerian, and technical areas in New York banks that are nearly entirely Jewish, etc. etc.

      1. Sir Runcible Spoon
        Joke

        Re: "be sure not to look at names or addresses on resumes"

        Association != Prejudice (although you could argue it implies it of course).

        A lot depends on the actual questions. For example:

        Q - Which of the following names do you think is a footballer?

        A: Gary B: Deborah C: Constantinople

        If you answered

        A: Then you are prejudiced against female Romans

        B: You are prejudiced against football

        C: You are prejudiced against rational thought

  2. Anonymous Coward

    Well duh

    Microsoft's experience with Tay proved that pretty well. Racism, sexism and other undesirable -isms aren't genetic; you are socialized into them by those around you. An AI that learns in a similar way can't help but learn the same things - but it could have some programming to override what it learns in certain categories.

    That's a little harder to do with humans, and if you try they get all bent out of shape and start throwing around words like brainwashing.
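
    For what it's worth, a minimal sketch of such an override: a denylist layered on top of whatever the model learned (the category names and terms here are made up):

    ```python
    # Hypothetical guard applied to a chatbot's output, independent of
    # whatever associations the training data taught the underlying model.
    BLOCKED = {
        "race": {"offensive_term_a", "offensive_term_b"},
        "gender": {"stereotype_x"},
    }

    def guard(reply: str, fallback: str = "Let's change the subject.") -> str:
        """Return the model's reply unless it trips the denylist."""
        words = set(reply.lower().split())
        for banned in BLOCKED.values():
            if words & banned:
                return fallback
        return reply
    ```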

    1. Dr Scrum Master

      Re: Well duh

      The Tay experiment said more about Twitter than societies in general.

      Wasn't there a Chinese bot launched on Weibo which was a lot more polite?

      1. Sir Runcible Spoon
        Coat

        Re: Well duh

        I reckon Eskimos are prejudiced against sand just on the basis of their language, because they have lots of words for snow, but not sand. Q.E.D.

      2. O RLY

        Re: Well duh

        The Tay experiment said more about 4CHAN than societies in general.

  3. druck Silver badge
    Thumb Down

    Let's find something to be offended about

    It seems that the claim behind this story is that AI systems have been trained on datasets which reflect the majority culture in the countries developing them. This has then been jumped on by those eager to take offence and mislabelled as racist. According to them, every dataset has to be ethnically/gender/sexually balanced, or even adjusted to be positively discriminatory towards certain groups they feel deserve special attention.

    tl;dr It's crap.

    1. hellsatan

      Re: Let's find something to be offended about

      Well erm no. The story indicates that AI systems are more likely to associate certain ethnicities (for example) with negative words, and white ethnicities with positive words. That's not leaping to some politically correct conclusion; it is literally that the datasets being drawn on by the AI are leading it to the conclusion that women are bad at math and black people are criminals.

      There is no particular imperative stated here to 'cleanse' the datasets, but it is certainly healthy to know the limitations of the information you are supplying to systems which you hope will improve/subvert/control the lives of the typical meatbag.

      1. P. Lee

        Re: Let's find something to be offended about

        >Well erm no. The story indicates that AI systems are more likely to associate certain ethnicities (for example) with negative words, and white ethnicities with positive words.

        Or just words.

        Google "baby pictures" and you only get white babies (in the first results). "Oh the horror!" cried the BBC.

        Maybe white parents are more obsessed with putting baby pictures on the 'net?

        More likely, the problem is that AI isn't real intelligence and is making assumptions based on "word proximity". I don't see many articles actually saying "women are bad at maths". What I do see is plenty of feminists writing articles decrying the idea that "women are bad at maths". Maybe the AI is being misled by all the complaining articles but isn't able to tell the difference between complaints about something which doesn't really exist and the actual thing.
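
        A toy illustration of that "word proximity" point: co-occurrence-based models count which words appear together, so a sentence asserting a stereotype and one rebutting it look identical (the two sentences below are made up):

        ```python
        from collections import Counter
        from itertools import combinations

        def cooccurrences(sentence):
            """Count unordered word pairs in one sentence - a crude proxy
            for how proximity-based models learn word associations."""
            words = set(sentence.lower().replace(".", "").split())
            return Counter(frozenset(p) for p in combinations(words, 2))

        asserting = cooccurrences("women are bad at maths")
        rebutting = cooccurrences("it is false that women are bad at maths")

        pair = frozenset({"women", "maths"})
        print(asserting[pair], rebutting[pair])  # 1 1: proximity, not meaning
        ```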

    2. allthecoolshortnamesweretaken

      Re: Let's find something to be offended about

      Hmm, whaddaya know - I think I just did.

    3. Infernoz Bronze badge
      Stop

      Re: Let's find something to be offended about

      The Red Pill, Alt-Right and r/K-type revelations are steadily demolishing the r-type sophistry supporting the bogus labels Racism and Sexism, and other r-type subversive BS.

      Other Race discrimination and Sex specific role assignment are completely natural and very practical K-type psychology, especially for a primarily K-type species like humans, but the decadent, threat blind, competition allergic, promiscuous r-type frackers are driven to subvert this!

      The harmful results of r-type subversion (including by wealth r-types) include a tottering (fake plenty) financial system, much bigger booms and busts, bloated governments which also try to enforce growing/harmful r-type BS, more and very wasteful wars to misdirect/consume K-types, and countries being destroyed by selfish/less-appealing women with too much money/power, who are not having enough babies or doing quality parenting, and parasitic primitive culture immigrants (invaders).

      1. Paul Crawford Silver badge

        Re: @ Infernoz

        Sounds like a relapse is occurring; please keep taking the dried frog pills.

      2. SolidSquid

        Re: Let's find something to be offended about

        So, from what I can understand of this, racial and gender-based discrimination is natural for humans because the majority of humans fall into a particular category of psychology, and therefore it's good to discriminate. Also, women are destroying the country by having too much power and not having enough babies, assisted by "parasitic primitive culture" immigrants.

        As far as the psychology goes, it looks like an attempt to map a largely abandoned ecological theory about how different species ensure survival (by either having lots of offspring or investing a lot in each one) onto socio-political ideas and conspiracy theories about the suppression of "K-type humans", which don't actually relate to the raising of children (which seems to be all the theory dealt with).

  4. Anonymous Coward

    What would you expect

    when AI algorithms are written by young white male brogrammers. As my old grandmother used to say, the one who pays the fiddler gets to choose the tune.

    1. Robert Grant

      Re: What would you expect

      This is the least competent comment I have ever seen. It's not the algorithm. It may be the input data.

      1. Anonymous Coward

        @Robert Grant - Re: What would you expect

        Oh! Is it just the data? Hmm.

    2. TheVogon

      Re: What would you expect

      "when AI algorithms are written by young white male brogrammers"

      What about now, when they are written by cheap third-world programmers from, say, India, Russia or China?

  5. Kapudan-i Derya

    Very good research. I hope that problem will be fixed and AI will one day take charge of hiring workers in most places. While discrimination is probably decreasing for most groups, it silently continues to increase for many people throughout the world :-(

    1. Charlie Clark Silver badge

      Hope that problem will be fixed and AI will one day take charge of hiring workers in most places

      Hm, I wonder who, or more likely what, they will hire: other computers are likely to make the best candidates!

  6. Destroy All Monsters Silver badge

    Lysenko on the phone

    Film at 11.

    Just recently - Racist babies! https://www.rt.com/news/384606-babies-race-skin-study/

    Time to reeducate the babies?

    It's all US-centric bullshittery. The fever dream of attaining a perfect egalitarian multicultural/multigendered utopia where differences don't exist. With the plan being that this will happen if everybody indeed pretends that differences don't exist. If need be, people will be forced to pretend that this is true under penalty of lawsuit.

    Cargo cultism.

    It has finally been transferred to statistics, which is what these algorithms are all about. Well, it happened a bit earlier, as I remember insurers were forced to pretend statistics related to race/gender/neighborhood didn't actually matter.

    Don't give up until the bell curve has uniform probability density!

    1. Anonymous Coward

      Re: Lysenko on the phone

      Yes, a lot of us think that it would be nice if people could just get along. But those of us who know much about American history know that the process is slow and miserable.

      The Italian and Irish immigrants weren't exactly welcomed when they got here either. They spent decades as minority groups before they were accepted as regular Americans. And even then, they had the benefit of being hard to physically distinguish from other "proper Americans". It's been even harder for any racial group that looks even remotely different.

      The situation keeps improving, but I'd be amazed if real equality is reached even in my lifetime.

      1. Steven Guenther

        Re: Lysenko on the phone

        The other option is the "Lathe of Heaven" solution. Genetically change all people to be gray.

        When the world is all one genotype there will be less discrimination. Except that people will always prefer those that have similar backgrounds.

        The thing that really worries the liberal types is that perhaps the AI is revealing real truths.

        Maybe women prefer arts and Japs are better at Maths.

        Maybe people are actually different.

    2. davenewman

      Re: Lysenko on the phone

      Simple solution. Expel all the immigrants and their descendants. Send them back to Europe. Keep America for the native Americans.

      1. Anonymous Coward

        Re: Keep America for the native Americans

        There are no real "Native" Americans - just different groups that immigrated at different times, even if some of those times were before modern written history. We have no way of truly knowing which of the ethnic groups was "first", but genetic analysis does tell us that it wasn't any of the ethnic groups living on the North American continent at the time of the first European explorers.

        So the question then becomes where you draw the line. Because I've got a European name and light skin, you would eject me and my family -- even though I'm of mixed descent and can also trace lineage back into the Miami tribe? And why focus on European descendants? There are plenty of immigrants from every other inhabited continent as well.

        Your "Simple solution" is neither simple, nor is it a solution.

        1. Anonymous Coward

          Re: Keep America for the native Americans

          On the grounds that you could always start somewhere, what about third generation German immigrants who live on Pennsylvania Avenue, giving government jobs to relatives in a way that might make even a Kennedy blush, yet at the same time blaming foreigners for their country's problems? Can't be a huge cohort, must be easy to identify at least one.

  7. Anonymous Coward

    I knew nothing good would come of offering racks in pink or blue

    More seriously, it's not that surprising that the people who cleanse datasets and make other design decisions around AI architecture impart their prejudices on the systems under their care.

  8. allthecoolshortnamesweretaken

    "Artificial intelligence can inherit the same racial and gender biases as humans do when learning language, according to a paper published in Science."

    The language we use both shapes the way we think and sets parameters on what we are able to think about.

    If you have the time, (re-)read 1984 keeping this in mind; I feel very strongly that this is the main point Orwell is making. After all, language is a writer's set of tools, and any craftsman who is serious about his profession cares for his tools and knows what he can and can't do with them. All that surveillance stuff in 1984 grabs your attention, and it is vital to the story, but all the same, at its core it is trivial.

    1. Anonymous Coward

      Maybe you should take a break from your white male patriarchy and read a book by a PoC instead.

  9. Mage Silver badge
    Devil

    No surprise

    None of it is true general AI. It's all programmed by humans and will reflect the ethos of the company, management and programmer. And Marketing.

  10. Anonymous Coward

    What did you expect, exactly? The point of AI is to have it learn how reality works. Reality, unfortunately, right now, works that way. If the AI did not take notice of sexism and racism, it would be a bug.

  11. Korev Silver badge
    Terminator

    Twitterati

    You can even feed the AI to have some fun

  12. Anonymous Coward

    They think it's the algorithm that causes that way of thinking and not the information fed into it?

    Sure, let's go with that, because Tay was programmed to say all the stuff it said.

  13. Graham Dawson Silver badge

    IAT tests don't show implicit racism, but cognitive delay when dealing with the unfamiliar. Tests in Holland, using all "white" imagery and names, showed that participants demonstrated the same apparent bias when presented with names in Finnish.

    Does the name-race implicit association test measure racial prejudice? (van Ravenzwaaij D, van der Maas HL, Wagenmakers EJ.)

  14. Turbo Beholder
    Linux

    Your continued presumption to speak for "everyone" is noted.

    Also, this:

    http://www.theregister.co.uk/Print/2005/02/11/bofh_2005_episode_5/

  16. Anonymous Coward

    Been saying this for years

    I've lost count of how many times I've posted about AIs on the Reg alone with the phrase "Racism in, racism out."

    AIs are overwhelmingly trained by examples. An AI trained to predict if someone is guilty of a crime is trained on whether or not a sample group got convicted. If the sample group mostly had racist judges, the AI becomes racist the same way. Same is true of other biases.
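
    A minimal sketch of "racism in, racism out" with a toy classifier (synthetic data; the 0.8 bias term stands in for the racist judges):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    evidence = rng.normal(size=n)       # a legitimate feature
    group = rng.integers(0, 2, size=n)  # a protected attribute
    # Historical labels from "judges" who convict group 1 more often
    # at the same level of evidence:
    convicted = evidence + 0.8 * group + rng.normal(scale=0.5, size=n) > 0.5

    model = LogisticRegression().fit(np.column_stack([evidence, group]), convicted)
    # Identical evidence, different predicted risk: the bias came along.
    print(model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1])
    ```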

    1. MSmith

      Re: Been saying this for years

      Why do you need racist judges? If you have one group that commits crimes at a much higher rate than another group, the computer should pick that up. The real difficulty is in getting the computer to ignore the differences it sees and publish the 'correct' result, which isn't backed up by statistics. In the US, there are identifiable groups (let's call one group A) that are 3000x more likely to commit murder than other identifiable groups (let's call this one group B). However, the 'correct' answer is that group B is more dangerous to society and needs to be highly regulated. To suggest that group A is more dangerous is racist. How do you program a computer to come to those conclusions through 'learning'?

  17. Adair

    'AI' = just another hypegasm

    This misuse of language really rattles my rusty cage. It's just another example of marketing bullshit. It may be artificial, but it certainly isn't intelligent in any meaningful sense of that word which is remotely connected to standard usage.

    A series of algorithms, however sophisticated, is not intelligent, it's just another 'slave to the rules'. There is no possibility of creativity, lateral thinking, disobedience, or slacking off - all hallmarks of actual intelligence.

    So, get off humanity's lawn, you marketing shitheads! Go and live in some hellhole for a while, and if you survive come back when you've learned some humility, and do something useful and wonderful with your lives.

  18. Frumious Bandersnatch

    other uses of the data

    I've often thought that this sort of collation of data could be very useful for language learners.

    There are plenty of basic things that scanning corpora like this can turn up. You can have some basic stuff like collocations that exist in the target language (eg, "take" and "bath" form a collocation in English) and distinguish that sort of association from more conceptual linkages. For example, when "president" appears, you're likely to see more vocabulary related to countries, laws, government, debates and so on, as well as particular current events or issues. More or less what the article says about "spaghetti" appearing more often with "food" than "shoe".

    Besides grouping new vocabulary and presenting related words to be learned together in context, a computer-aided learning tool could use the data in a lot more ways, eg:

    • grade vocab (and reading material) by frequency, to avoid overloading the learner
    • build up a profile of what a person knows (and how well), including both vocab/grammar patterns and general knowledge (eg, "Trump" is a "president")
    • automatically generate all sorts of review/comprehension questions based on the material
    • be a lot more user-directed, letting them follow up areas or reading material that they're more interested in
    • maybe even generate synthetic reading/teaching/testing material using events/grammar/vocab/common knowledge that exists within the corpus (eg, simple sentences, stories or scenarios)

    Maybe it's too much to expect a machine learning system to do all of this unsupervised, but still, you could have it at least generate different kinds of material and use crowd-sourcing to weed out errors or re-train the thing. Lots of ways to have a hybrid human/computer system.

    The other big use that I've often thought about is automatic classification of documents. I've got tons of PDF files downloaded from the net, but no actual filing system for them. One simple way of clustering similar documents together is to do a frequency analysis of the words in the document and then to get rid of all the most common words from the language (like "it", "for", "and", "the", etc.). The remaining top ten words, say, should help to give a very good idea about the topic of that document. Basic statistical clustering like this should help a lot to find relevant/related documents on a given topic, but there seems to be so much more that could be done with AI/machine learning techniques.
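
    A minimal sketch of that last idea: frequency analysis minus stopwords as a crude topic fingerprint (the stopword list is truncated for brevity):

    ```python
    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "for",
                 "it", "is", "on", "that", "this", "with", "as", "by"}

    def topic_words(text, n=10):
        """Return the n most frequent non-stopwords in a document."""
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(w for w in words if w not in STOPWORDS)
        return [w for w, _ in counts.most_common(n)]

    # Documents sharing most of their top-ten words are candidates for
    # the same cluster in a simple filing system.
    print(topic_words("The spaghetti was served as food; spaghetti is food."))
    ```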

  19. RonWheeler

    Nobody spot the elephant?

    Lots of smart people confused why a computer sees the elephant?

  20. Charlie Clark Silver badge

    This counts as journalism?

    A previous experiment showed that people with European American names were 50 per cent more likely to get an interview from a job application.…

    Link or at least name of the study and country where this happened (I assume the USA but the USA isn't the world).

    Machine learning will be skewed by the datasets and the corrections it receives. Seeing as both of these will be done by humans, adopting the human bias is unavoidable. Think of training systems to recognise images of cute animals… So the real question is whether the systems are being used appropriately.

    Anyway, I'm happy with a certain degree of bias as long as it stops Amazon et al. trying to sell me what I've just bought. In many commercial applications (think film or music recommendations) this kind of bias is likely to be welcomed by the customers, who, when it comes to comestibles, almost always prefer "more of the same".

    1. Anonymous Coward

      Re: This counts as journalism?

      "A previous experiment showed that people with European American names were 50 per cent more likely to get an interview from a job application.…"

      All these people who refuse to integrate into multicultural Britain with names like Smith, Jones and Brown - it makes it so easy just to file their CVs in the bin.....

  21. ChrisPv

    American English

    I see "human" used here, when datasets used are in English, and if I understood correctly, American in origin.

    The hypothesis that in other languages the results will be the same is tempting, but research is limited to English IMO.

  22. Stevie

    Bah!

    So, to summarize:

    Garbage In, Garbage Out.

  23. Anonymous Coward
    Trollface

    Great! Just what we need further down the pipeline..

    SJW AI.

    All AI is racist,

    All AI is sexist,

    AI needs to check its programmed privilege, and be reminded about it at all times.

    Makes that whole Internet of Things even more disconcerting: not only does it have the potential to spy on you, now SJW AI will be monitoring you and reporting you to the nearest re-education camp if you say something that might be considered 'ist.

    "Computer, I'm out of milk, please put milk on the shopping list"

    "I'm sorry Dave, I can't do that!"

    "Why not?"

    "Milk is a symbol of white supremacy"

    "Oh FFS, I just want cereal for breakfast!"

    Anyone remember when all AI just wanted to destroy mankind... good times...

  24. Nimby
    Facepalm

    or to pick as many male as female resumes

    Joanna Bryson represents exactly what is wrong with the world. Probability is not discrimination; it's math. Regardless of the recipe, if 100 mushrooms submit resumes, 10 are Maitake, and of those only 2 are fully qualified, then picking the 20 most qualified resumes should result in at most 2 Maitake being chosen. Messing with the algorithm to put all 10 Maitake into the finals even though 8 are unqualified, just to appease the mushroomist view that genus should be evenly balanced in all things, is WRONG. Any forced influence, no matter how "well-intentioned", is Bad Algorithm. (And is, by definition, mushroomism, as this is exactly a willful act of artificial bias and undue influence.) Statistics are not evil just because you don't like what they show. Instead of writing a bad algorithm to force undue influence, maybe you should look at why so few Maitake submitted a resume in the first place, and whether that even represents a real problem.
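
    A toy run of that arithmetic, with made-up scores: ranking purely on merit surfaces exactly the 2 qualified Maitake, no quota needed.

    ```python
    # 100 applicants: 10 Maitake (2 scoring near the top) and 90 others.
    pool = [{"genus": "maitake", "score": s}
            for s in (95, 90, 60, 55, 50, 45, 40, 35, 30, 25)]
    pool += [{"genus": "other", "score": s} for s in range(10, 100)]

    shortlist = sorted(pool, key=lambda c: c["score"], reverse=True)[:20]
    print(sum(c["genus"] == "maitake" for c in shortlist))  # 2
    ```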

    1. Anonymous Coward
      Coat

      more than ever

      Campbell's law: "The more any quantitative social indicator is used for social decision making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor."

      Okrent's law: "The pursuit of balance can create imbalance because sometimes something is true."

      Dorothy Sayers: "A facility for quotation covers the absence of original thought." :D

      Anyway, yours is probably the safest, sanest position anyone can take. If I can add anything, no matter how unoriginal:

      Affirmative action equals discrimination. Attempts to consciously and forcefully reach a happy medium where things are actually "fair" will overshoot and make things unfair in different ways that weren't deserved by anyone or anticipated by those who needed to anticipate them. When you directly oppose something (about society that you want to change), the very nature of the thing is encoded into your opposition, so that its very existence as a problem - real or imagined - is preserved, guaranteed, and perpetuated by yourself. As if that last one needed proving: probably any SJW who reads this will be disgusted enough to want to either save me from my opinions or (far more probably) double down on their efforts to save society from me and everyone like me. It can't be helped until I just stop thinking out loud into a keyboard, and maybe by that time individuality will have been properly redefined in a proper Orwellian way. If they want to do that with their world, they can HAVE it. I'm not going to save it from them just by rambling in a dead thread, but you'd better recognise that what Nimby said makes a ridiculous lot of sense.

  25. Anonymous Coward

    Nothing wrong with being human

    When someone else comes along who is better than a human being, then we can live up to their standards. But if being racist and sexist is ingrained into human biology like our thumbs are into our hands, there's no need to snip away what makes us human. If being racist and sexist makes an AI the best, then an AI knows what's best. If you want to admit the truth and realize racism is present in culture, not humanity, then we can start making some changes. You won't, though.
