Sorry, Dave, I can't code that: AI's prejudice problem

Bureaucrats don’t just come in uniforms and peaked caps. They come in 1U racks, too. They’ll deny you credit, charge you more for paperclips than someone in another neighbourhood, and maybe even stop you getting out of jail. Algorithms are calling the shots these days, and they may not be as impartial as you thought. To some, …

  1. Doctor Syntax Silver badge

    Getting good metrics is hard.

    Garbage in, garbage out.

    One of the stories linked makes interesting reading: https://www.washingtonpost.com/local/education/creative--motivating-and-fired/2012/02/04/gIQAwzZpvR_story.html

    Basically the value-added "measurements" conflicted with other assessments but were allowed to dominate the assessment. Digging in a little deeper it turns out that the start of year measurements weren't the school's own, they were someone else's. Right there is a prerequisite for using numerical approaches - you've got to be sure you can rely on the data.

    The article ends: “Teaching is an art,” she said. “There are so many things to improve on.”

    Measuring is also an art.

    1. John Smith 19 Gold badge
      Unhappy

      "Garbage in, garbage out."

      And for those "data poor" neighbourhoods where people don't buy every PoS IoT device to report on their lifestyle, that translates as "Nothing in, garbage out."

  2. Anonymous Coward
    Anonymous Coward

    AI main issue:

    trying to predict the future using the past.

    An "average" past, also. Without that human feature called "intuition" - which is the capability to understand when things are different from what they look.

    1. P. Lee

      Re: AI main issue:

      >trying to predict the future using the past.

      Worse. Humans can come to recognise bias when it's bad because we have morality.

      The whole purpose of AI is to implement bias and it has no morality.

      AI isn't intelligence, it is merely statistical analysis. Complicated stats, but still just stats. It can make correlations, but not assess causation. It is incredibly dumb.

      Stupid and morality-free. It is only useful when you really don't care too much about the outcome.

      If AI is used to determine if you get parole, it is because the judicial system doesn't really care about the outcome, only that the process is cheap. I think it's fairly easy to assess the morality of those who commissioned that.

      What happens when everyone moves to AI and we no longer have humans doing the job? Where do you get your training data? No-one can tell how decisions are made and there's no way of measuring quality of the output. How do you know if someone has found an algorithmic flaw and is exploiting it? How do you catch the outliers?

      AI has a place, but there are dangers. One that immediately comes to mind is that we try to do too much and it pushes policy to places we shouldn't go. Why check only the fingerprints of criminals when you can check everyone's? What happens when systems trained on a little data from California are exported to Kenya? Does anyone know? What happens if you go back the other way - take your data from Kenya and use it in California?

      Part of the problem is that the vendors hawking AI systems have no vested interest in their correct use. Problems are compounded when those buying and using such systems don't have too much interest in the outcome either, just as long as an outcome is reached and more cheaply than a human could do it.

      1. Anonymous Coward
        Anonymous Coward

        Re: AI main issue:

        Good post apart from the over-stated "The whole purpose of AI is to implement bias..."

        1. amanfromMars 1 Silver badge

          Re: AI main issue: @Anonymous Coward

          The whole purpose of AI is to implement bias...

          Is that the same as generate profit and inequitable advantage for that is what most activities in the sector/vector are developed to lead in/with? It is the normal way of all things, is it not?

          Or has there been some stealthy revolutionary change that is not being discussed for fear of fancy fantasy capitalist equity markets collapse?

  3. John H Woods Silver badge

    Transparency...

    ... is not always possible, even with the best intentions.

    ... with simple statistics, only the STEMmers will understand

    ... with complex statistics, only statisticians will understand

    ... with AI, nobody will understand.

    Someone can tell you the architecture of their AI, and all the weights of the trained network, but it doesn't tell you why it makes any particular decision. Perhaps we have to wait until AI is conscious enough to explain itself. I'm not hopeful, though: as my late father used to say, 95% of human rationality is used for providing convincing explanations for decisions they have already made on gut feel.

    1. Version 1.0 Silver badge

      Re: Transparency...

      The problem is that statistics can be screwed by anyone with malicious intent. It just takes twisting a few words to change the meaning for Facebook and news reporters.

      Cancer Rates Increase by 100% - that's scary!

      But your chance was only 0.001% before and now it's 0.002% - who would care about that?
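
      Purely to illustrate the arithmetic (using the 0.001% and 0.002% figures above), the same change reported both ways:

```python
# Relative vs absolute risk: the same change, reported two ways.
baseline = 0.00001   # 0.001% chance before
new_rate = 0.00002   # 0.002% chance after

relative_increase = (new_rate - baseline) / baseline * 100   # the headline figure
absolute_increase = (new_rate - baseline) * 100              # change in percentage points

print(f"Relative increase: {relative_increase:.0f}%")                   # 100% - scary!
print(f"Absolute increase: {absolute_increase:.4f} percentage points")  # 0.0010 - who cares?
```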

      1. Anonymous Coward
        Anonymous Coward

        "The problem is that statistics can be screwed"

        Even perfect statistics are useless when you try to predict a single event, without having a clear enough comprehension of the mechanism underlying the event - a theory (and its validation).

        So you just look at how a new element fits inside the statistical distribution, and where it places, without understanding why, and you assume that's true just because your own algorithm says so.

        That's something that comes from the "science" of economics, which is still unable to deliver real theories (and their proofs), and so builds on statistical data, and delivers wild and often wrong predictions, but most pretend it works because they make money on it.

        If it works so well, I don't understand how bookmakers make so much money betting on the fact people won't be able to predict single events with the required accuracy... despite all the stats available.

        1. Brewster's Angle Grinder Silver badge

          Re: "The problem is that statistics can be screwed"

          "Even perfect statistics are useless when you try to predict a single event, without having a clear enough comprehension of the mechanism underlying the event - a theory (and its validation)."

          You needed to stop at the first comma. I can predict perfectly the probability that your lottery numbers will come up. I can't tell you whether they will. Likewise the spin of an electron in a magnetic field.

          "That's something that comes from the "science" of economics, which is still unable to deliver real theories (and their proofs), and so builds on statistical data, and delivers wild and often wrong predictions, but most pretend it works because they make money on it."

          I've been reading Simon Wren-Lewis's Mainly Macro blog and delving into the theory, and I think you're being unfair on economics. I think they have a good understanding of some of the core basics. And while short-term predictions are always tricky, they can get the long-term trends right. Unfortunately, the truth has that well-known left-wing bias, and gets drowned out by politicians and media.

          I think the future will look back on our economics in the way we look back on the astrophysics of Galileo: shut away by vested interests.

    2. smudge

      Re: Transparency...

      Someone can tell you the architecture of their AI, and all the weights of the trained network, but it doesn't tell you why it makes any particular decision. Perhaps we have to wait until AI is conscious enough to explain itself.

      OK, everyone can call me naive and shoot me down in flames, but is there any reason why an AI CAN'T tell you why it has made a particular decision?

      It's just a computer program. It must have followed a particular series of steps and decision points. Why can't it log these along the way? (Just like I did years ago when I used to insert code into programs to help debug them.) Even if it has derived the rules it is using - rather than the rules being explicitly coded into it - then the rules must be represented somehow, and its path through them must therefore be loggable.

      I know nothing about the size and complexity of today's AIs. Answers such as "it would slug the performance" and "it would generate too much information" would be perfectly acceptable responses to the question "why DOESN'T an AI tell you why it has made a particular decision?".

      But "why CAN'T it..." is a very different question. I see no reason why not - they are only finite state machines after all, albeit with an awful lot of states and state transitions.

      1. Anonymous Coward
        Anonymous Coward

        Re: Transparency...

        I have written code in the past that analysed data in a "fuzzy" way. It had its pathological weaknesses and could interpret some cases wrongly. Fortunately they were very rare - so usually it gave good answers.

        Sometimes the good answers were counter-intuitive and looked wrong. It was hard work checking how it had arrived at that result. It doesn't need much complexity for the human brain to get overloaded. That was why the tool was produced - to abstract large volumes of sometimes complex data into a form that was easy to assimilate by a human.

        What was obvious was that in the hands of someone inexperienced they could make serious mistakes. You needed the experience of doing things the hard manual way. Only then could you recognise possible anomalies - and have the skill to go through the code/data to understand what had happened.

        You could also buy expensive applications that purported to do the same job with that data - and people made a lot of wrong diagnoses from trusting those results. Even the products' support people didn't understand the way they could fail.

        It was the old problem of human nature - if something is printed with nice pictures then it must be true - and you don't need to learn how it is done.

      2. Brewster's Angle Grinder Silver badge

        Re: Transparency...

        "It's just a computer program. It must have followed a particular series of steps and decision points?"

        Imagine the data is a photo and the program is a fancy filter effect. The filter is just maths: it produces the new pixel by combining pixels in the old image.

        Now try asking why the filter looked good on one photo and bad on another. For each pixel in the output we can say it added 1% of the pixel to the left, 2% of the pixel two to the left, and so on, but that doesn't really help us understand why the first photo looked good and the second looked awful. It's the interaction between data and "algorithm" that produces the result by the magic of arithmetic.
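
        To make that concrete, here's a toy version of such a filter (weights invented purely for illustration): every output value is just a weighted sum of its neighbours, and nothing in the arithmetic says why one input comes out looking good and another doesn't.

```python
# A toy 1-D "filter": each output pixel is a weighted sum of nearby input pixels.
# The weights are the whole "algorithm"; the interesting behaviour comes from how
# they interact with a particular input, not from any explicit decision.
weights = [0.01, 0.02, 0.94, 0.02, 0.01]   # invented weights

def apply_filter(pixels):
    half = len(weights) // 2
    out = []
    for i in range(len(pixels)):
        acc = 0.0
        for k, w in enumerate(weights):
            j = i + k - half
            if 0 <= j < len(pixels):       # ignore neighbours that fall off the edge
                acc += w * pixels[j]
        out.append(round(acc, 2))
    return out

photo_a = [10, 12, 11, 200, 11, 12, 10]    # a sharp spike
photo_b = [10, 11, 12, 13, 12, 11, 10]     # a smooth gradient
print(apply_filter(photo_a))
print(apply_filter(photo_b))
# Every output can be traced back to "1% of this pixel, 2% of that one", but the
# trace doesn't explain why one result looks fine and the other looks awful.
```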

        1. smudge

          Re: Transparency...

          For each pixel in the output we can say it added 1% of the pixel to the left, 2% of the pixel two to the left, and so on, but that doesn't really help us understand why the first photo looked good and the second looked awful.

          Thanks, but that's not answering my question. I wanted to know "why can't an AI explain how it came to a particular decision?". Not "was that a good or bad decision?".

          In your example, explaining what happened to the pixels is simply describing what the program did. My question would then be "how did the program decide to do these things?".

          Whether or not the processed photos look good or bad - or whether an AI's decision was good or bad - is not the question that I was asking.

          1. Brewster's Angle Grinder Silver badge

            Re: Transparency...

            In your example, explaining what happened to the pixels is simply describing what the program did. My question would then be "how did the program decide to do these things?".

            It didn't decide to do anything. The input photo is all the data collected about you. The output photo might be a single pixel describing your credit rating. And the filter is the entirety of the program. So our program might just be an excel macro creating a weighted sum of all the factors pertaining to your credit history.

            This filter got built by data "scientists" who ran it over data for people where the results were known in advance, and tweaked those weights (1% of this pixel, 2% of that pixel) until the filter produced the results expected for the people whose histories were known. And then it was let loose on your data.
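
            Very roughly, and with every name and number made up for illustration, that process looks something like this - the "filter" over your data is just a weighted sum, and "training" is nudging the weights until the outputs match the known histories:

```python
# Hypothetical credit "filter": a single weighted sum over a few invented factors.
# The weights start at zero and get nudged until the outputs match known history.
people = [
    # (income, missed_payments, years_at_address) -> known good payer?
    ((45_000, 0, 6), 1),
    ((22_000, 3, 1), 0),
    ((60_000, 1, 4), 1),
    ((18_000, 5, 2), 0),
]

weights = [0.0, 0.0, 0.0]
bias = 0.0

def scale(factors):
    # Rough rescaling so no single factor dominates the sum.
    return (factors[0] / 100_000, factors[1] / 10, factors[2] / 10)

def score(factors):
    return sum(w * f for w, f in zip(weights, scale(factors))) + bias

# Crude "training": repeatedly tweak the weights towards the known outcomes.
for _ in range(1000):
    for factors, outcome in people:
        error = outcome - (1 if score(factors) > 0.5 else 0)
        weights = [w + 0.01 * error * f for w, f in zip(weights, scale(factors))]
        bias += 0.01 * error

print(weights, bias)                 # nobody chose these numbers directly
print(score((50_000, 0, 3)))         # ...and then it's let loose on your data
```

            There's no decision anywhere in there; the final weights are just wherever the nudging left them, which is why asking it "why?" has no good answer.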

            1. smudge

              Re: Transparency...

              It didn't decide to do anything. The input photo is all the data collected about you. The output photo might be a single pixel describing your credit rating. And the filter is the entirety of the program.

              One of us isn't getting this, and I don't think it's me!

              To derive a description of my credit rating from all the data about me, the program/filter/macro/neural net/AI must have followed a finite number of steps of sequence, selection and iteration.

              All I'm asking is why people think that that cannot be logged and output - ie why the AI cannot explain how it arrived at an outcome.

              1. Fenwick

                Re: Transparency...

                How a neural network arrived at an outcome (the algorithm, its parameters and its input) can easily be recorded. Understanding why the algorithm gave a particular output is much harder.

                1. Data is used, with another algorithm, to set the parameters of the neural network. There are lots of parameters and they all depend upon each other and the data in complicated ways. Changing one parameter a little bit, may, or may not, have a big impact upon all of the other parameters. This algorithm takes a computer ages to run. It is a huge, perhaps impossibly huge for a human, job to find out and understand why the parameters are set the way they are.

                2. Your input is fed to the neural network. It proceeds through the network with numbers growing and shrinking in complicated ways according to the parameters. At the end an output is generated. If you are really careful, you might be able to say that certain parameters fixed the decision and others weren't so important. With more effort you might be able to find out if the decision was robust to small changes in the parameters or your input. But what I think you really want is to know why the parameters are set the way that they are. In a complicated problem, nobody knows.
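
                A toy sketch of point 2, with invented numbers: you can record every intermediate value of the forward pass, and even probe how sensitive the output is to one parameter, but none of that tells you why the parameters hold the values they do.

```python
import math

# A tiny two-layer network with fixed, pretend "trained" parameters.
W1 = [[0.4, -1.2, 0.7],
      [0.9,  0.3, -0.5]]        # 2 inputs -> 3 hidden units
W2 = [1.1, -0.8, 0.6]           # 3 hidden units -> 1 output

def forward(x, log=None):
    hidden = []
    for j in range(3):
        s = sum(W1[i][j] * x[i] for i in range(2))
        h = max(0.0, s)                      # numbers grow, shrink or get clipped
        hidden.append(h)
        if log is not None:
            log.append(f"hidden[{j}] = max(0, {s:.3f}) = {h:.3f}")
    out = 1 / (1 + math.exp(-sum(w * h for w, h in zip(W2, hidden))))
    if log is not None:
        log.append(f"output = {out:.3f}")
    return out

trace = []
result = forward([0.2, 0.8], log=trace)
print("\n".join(trace))              # a complete record of HOW the number was produced

W1[0][1] += 0.01                     # wiggle one parameter...
print(forward([0.2, 0.8]))           # ...and see whether the decision is robust to it
# The trace and the probe are the easy part; explaining WHY W1 and W2 hold these
# particular values - the outcome of training - is the part nobody can do.
```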

              2. dgc03052

                Re: Transparency...

                It didn't decide to do anything. The input photo is all the data collected about you. The output photo might be a single pixel describing your credit rating. And the filter is the entirety of the program.

                One of us isn't getting this, and I don't think it's me!

                Sorry, but afraid not. You are indeed missing it. There are no decision trees or state machines in machine learning / neural nets (in the way you appear to be thinking).

                Your retina and brain are made up of neurons. How do we ask them how you decided you just saw a cat? Big hint: it's not the way you might think. We can partially describe how it actually happens in terms of layers that look for vertical edges, horizontal edges, motion, image convolutions, and so on. That's all you can get out of machine learning.

                An even better comparison: how do you recognize someone's voice? Can you describe an average friend's voice well enough that someone who has never met them will uniquely identify them the first time they hear it? If you magically tracked all the neural activity, you would have worthless information about relative weights of harmonics and frequencies and time delays, yet it still results in recognition, familiarity or neither. Even with all the details, it doesn't tell us which voices might be easily misidentified, or who could do a good impression of that person.

              3. Ken Hagan Gold badge

                Re: Transparency...

                "All I'm asking is why people think that that cannot be logged and output - ie why the AI cannot explain how it arrived at an outcome."

                That log would be perfectly easy to generate. However, it would take you weeks (or more) to read it and you would be none the wiser at the end of the experience as to why the computer had said "no".

                Put another way, the computer does not have a reason, it merely has a very long calculation. Many moons ago, its designer discovered that the result of the calculation was fairly well-correlated with his or her own prejudices, at least on a test data set, and that designer therefore decided to use it as a substitute for making the decision themselves.

                As long as everyone understands that it is a mere correlation on a mere test dataset and is being used as a substitute for an equally (but differently) flawed process of human judgement, there isn't a problem.

              4. Filippo Silver badge

                Re: Transparency...

                "To derive a description of my credit rating from all the data about me, the program/filter/macro/neural net/AI must have followed a finite number of steps of sequence, selection and iteration."

                Yes, and you can log them. However, there is only a single step. That step is a function call that takes as parameters your profile data - plus several thousand numbers that represent the network's weights. Those are the problem. The function's body is relatively simple; it just does some fairly trivial math on all of those parameters, producing a new set of numbers (which may be larger or smaller than the one you started with). This math is done in a single chunk; no divide and conquer here. This is iterated a small number of times; the final output is your credit rating.

                The function does not encode the "reasoning" that brought the decision. That "reasoning" is encoded in the network weights, the thousands of parameters. Unfortunately, those parameters are nameless and have no semantics attached, because no human set them; they were set by the network itself during training. That would already be enough to make the process inscrutable.

                But it gets worse. Not only do you not know what each of those parameters means - they don't even *have* an individual meaning. There isn't one weight, or a few weights, that encodes "prejudice against black men"; there isn't one parameter that is the weight given to your age. Rather, that information is encoded as relationships between weights. You don't know which ones, or which relationships. Which means that if you try to change one of them and run the function again in an attempt to see what your change did, you will find that the output is different for *all* possible inputs, because by changing a single parameter you have changed the relationship it had with *all* of the others.

                Basically, yes, you can log everything the network does, and you can track the calculation, but this gives you absolutely no information on *why* it does it.
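
                To illustrate that last point with made-up numbers: nudge a single weight in a tiny network and the output moves for every input at once, because the weight only means anything in combination with all the others.

```python
import math

# Pretend "trained" weights for a tiny 3-input, 2-hidden-unit credit scorer.
# None of them is "the age weight" or "the income weight"; the behaviour lives
# in how they combine.
W_hidden = [[0.8, -0.3],
            [0.6,  0.9],
            [0.2,  0.4]]          # 3 inputs -> 2 hidden units
W_out = [1.0, -1.3]

def rating(profile):
    hidden = [max(0.0, sum(profile[i] * W_hidden[i][j] for i in range(3)))
              for j in range(2)]
    return 1 / (1 + math.exp(-sum(w * h for w, h in zip(W_out, hidden))))

profiles = [[0.1, 0.9, 0.5],
            [0.7, 0.2, 0.3],
            [0.4, 0.4, 0.8]]      # three different applicants

before = [rating(p) for p in profiles]
W_hidden[1][0] += 0.05            # change ONE nameless parameter, slightly
after = [rating(p) for p in profiles]

for b, a in zip(before, after):
    print(f"{b:.4f} -> {a:.4f}")  # every applicant's score shifts, not just one
```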

          2. DavCrav

            Re: Transparency...

            "Thanks, but that's not answering my question. I wanted to know "why can't an AI explain how it came to a particular decision?". Not "was that a good or bad decision?"."

            The answer is usually 'I was trained on this dataset and the model that fits this best looks like this. So I am using that.'

      3. Anonymous Coward
        Anonymous Coward

        Re: Transparency...

        OK, everyone can call me naive and shoot me down in flames, but is there any reason why an AI CAN'T tell you why it has made a particular decision?

        The answer is simple - what they are calling AI is not an Artificial Intelligence, it is just a dumb expert system and therefore only follows the rules the programmer installed.

        At the moment AI is a buzz word of the marketing wonks that has nothing to do with intelligence and everything to do with selling a product - a computer program.

        1. Brewster's Angle Grinder Silver badge

          Re: Transparency...

          "it is just a dumb expert system and therefore only follows the rules the programmer installed."

          AIUI expert systems were giant if-then-else statements (decision trees), so we could always work through them and find the rule that triggered a decision. The issue here is that the programs are statistical relationships (convolutions).
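
          A toy side-by-side (rules and weights invented for illustration): in the expert-system version you can point at the line that fired; in the statistical version there is only a score.

```python
# Expert-system style: a rule fires, and you can point at it.
def expert_decision(income, missed_payments):
    if missed_payments > 2:
        return "deny", "rule: more than 2 missed payments"
    if income < 20_000:
        return "deny", "rule: income below 20,000"
    return "approve", "rule: default approval"

# Statistical style: one weighted sum; there is no rule to point at.
def statistical_decision(income, missed_payments):
    score = 0.00004 * income - 0.7 * missed_payments - 0.4   # weights left over from "training"
    return ("approve" if score > 0 else "deny"), f"score = {score:.2f}"

print(expert_decision(35_000, 1))        # ('approve', 'rule: default approval')
print(statistical_decision(35_000, 1))   # ('approve', 'score = 0.30') - but why those weights?
```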

      4. rdhood

        Re: Transparency...

        "OK, everyone can call me naive and shoot me down in flames, but is there any reason why an AI CAN'T tell you why it has made a particular decision?"

        In every situation, it can. BUT that does not mean you would understand it. Supposing I have some complex mathematical relationship with some changing natural phenomenon. By the time you get around to asking "but why?", the natural phenomenon that formed part of the input to the decision is gone, never to be reproduced, and the mathematical equation from which we derived the results might take you 4 years of college level mathematics to understand.

        YES. AI can tell you why it made the decision. But you will not understand, nor will you be able to check, the answer.

    3. a_yank_lurker

      Re: Transparency...

      AI is not intelligence in any meaningful way. My cats are more intelligent than any AI algorithm because they are able to learn in a real way. AI does not learn, but spits back whatever pseudo-statistical analysis trash it is programmed to produce.

      One of the problems with these 'scoring' algorithms is that they do not, or cannot, factor in the underlying reasons behind the score. How do you weight health issues, for example, in credit scoring or a job application? I cannot see a good way without someone actually reviewing the file.

      1. Charles 9

        Re: Transparency...

        In other words, logging wouldn't help you because the decisions involved are too technical, too inexact, or too numerous for the average person to follow.

        Kinda makes me think of Farscape here. Translator Microbes are supposed to be able to grok most languages: even the highly-nuanced language of Pilots. But they can't translate Diagnosians, whose language is SO vast, meticulous, and detailed it puts Pilots to shame.

  4. handleoclast
    Alert

    This is a known problem with a known solution

    Let me summarize the problem:

    1) AIs can have hidden bias caused by poor datasets and/or algorithms.

    2) With certain types of algorithm, particularly neural nets, it can be impossible to figure out what rules the AI is using to reach its decision, and therefore impossible to know whether or not the decision is biased other than by statistical analysis over many trials.

    Now let me summarize a parallel problem:

    1) Humans can have hidden bias caused by poor teaching.

    2) Humans use neural nets, so it can be impossible to figure out what rules the human is using to reach its decision, and therefore impossible to know whether or not the decision is biased other than by statistical analysis over many trials.

    What's the difference between a neural-net based AI and a neural-net based human? Scale. But that only makes it harder to know what the larger neural net in a human is really doing (as opposed to analysing results).

    The solution that was applied to the human problem? Procedures and rules designed to stamp out individuality (and creativity, and intelligence, and adaptability). I.e., what the civil service uses to ensure you get a consistent result no matter which individual deals with you. It may be consistently bad, with little hope of correction because overseers are bound by the same rules the underlings are, but it is (relatively) free from bias.

    If we ever achieve strong AI, that is artificial sapience, it's going to be as biased and stupid as we are. But it may be a lot faster at being so. Forget the singularity with a god-like AI that is compassionate, caring, loving, and wise (as the Xtian God is meant to be); think ancient Greek and Roman gods (and the Judaic JHVH). Those gods were essentially humans with all the standard human failings (stupidity, greed, petulance, laziness, anger, jealousy, etc.) plus some added magical powers. If the singularity ever happens, it's not going to end well for humans.
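
    For what it's worth, the "statistical analysis over many trials" in point 2 is the one bit that is straightforward to do. A toy sketch, with a made-up decision function and made-up records (the bias is planted deliberately so the audit has something to find):

```python
import random
from collections import defaultdict

random.seed(0)

def opaque_decision(record):
    # Stand-in for a trained model we can't inspect. The bias is planted
    # deliberately here so that the audit below has something to find.
    score = 0.5 + 0.3 * record["income_band"] - 0.25 * (record["postcode_group"] == "B")
    return score + random.uniform(-0.1, 0.1) > 0.5

# Made-up applicant records: income and postcode assigned independently.
records = [{"income_band": random.choice([0, 1]),
            "postcode_group": random.choice(["A", "B"])} for _ in range(10_000)]

approved = defaultdict(int)
total = defaultdict(int)
for r in records:
    total[r["postcode_group"]] += 1
    approved[r["postcode_group"]] += opaque_decision(r)

for group in sorted(total):
    print(f"postcode group {group}: approval rate {approved[group] / total[group]:.1%}")
# A big gap between groups that otherwise look alike is the statistical red flag,
# even though no single decision can be "explained".
```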

    1. chelonautical

      Re: This is a known problem with a known solution

      > If we ever achieve strong AI, that is artificial sapience, it's going

      > to be as biased and stupid as we are. But it may be a lot faster

      > at being so.

      Yes, faster and also much more pervasive. It's likely that some of these systems (e.g. the larger, more famous, systems run by the likes of Facebook and Google) will be regularly consulted by most organisations due to the sheer volume of data they hold about many people. We could end up with a situation where a handful of AIs (or AI-like systems) have huge influence over people's ability to find credit, insurance, employment and more.

      Biases, errors and omissions in these systems could have a detrimental effect on most aspects of people's lives and could follow them around inescapably. The combination of automated decision-making and widespread dependency on a limited number of "AI" providers could result in life-long automated and repeated problems.

      > If the singularity ever happens, it's not going to end well for humans.

      Yes indeed. Also it could go badly for many humans long before then.

  5. Redstone

    There is a flip side...

    Playing Devil's advocate here - maybe there is very little bias in the algorithms, and the real reason the observers are calling foul is that the outcomes don't match their own biased expectations.

    I don't know the state of bias that has been coded into AI, but then, I don't think that these researchers do either. I have an inherent distrust of 'researchers' whose basic assumption sets essentially have a pre-determined outcome for their research.

    1. JimC

      Re: There is a flip side...

      More than that, people actually *want* biased results. I saw an interesting set of observations from someone who was working on targeted advertising. He said they had a horrible problem with racist results.

      What seems to have actually been happening was that his software was identifying that individuals with string X set to W were more likely to click through on one set of ads, and individuals with string X set to B were clicking on a different set. So they eliminated string X from consideration.

      Next the software identified individuals with at least two of strings A, B and C set to one value as being more likely to click through on one set, and vice versa. And guess what, there was a strong correlation between having two of those 3 set and having string X set to W, and so it went on.

      Gradually they eliminated every decision point that could be said to be racist. And their click through rate plummeted.
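
      A rough sketch of why dropping the field doesn't help (synthetic data, invented field names): if the remaining fields are correlated with the one you removed, between them they reconstruct it almost perfectly, so the model can keep behaving as if it still had it.

```python
import random

random.seed(1)

# Synthetic population: "X" is the field they removed; A, B and C are proxies
# that each agree with X most of the time, but not always.
def make_person():
    x = random.choice(["W", "B"])
    flip = lambda p: 1 if random.random() < p else 0
    a = flip(0.85) if x == "W" else flip(0.15)
    b = flip(0.80) if x == "W" else flip(0.20)
    c = flip(0.75) if x == "W" else flip(0.25)
    return x, (a, b, c)

people = [make_person() for _ in range(10_000)]

# Drop X entirely and try to recover it from the proxies: majority vote of A, B, C.
correct = sum((sum(proxies) >= 2) == (x == "W") for x, proxies in people)
print(f"X recovered from A/B/C alone: {correct / len(people):.1%} of the time")
# So a model trained without X can still behave as if it had X, and stripping
# out each proxy in turn just promotes the next-best stand-in.
```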

  6. Pascal Monett Silver badge

    Can we stop using the term AI please ?

    This is not AI, this is machine learning. It is an algorithm that takes input data, correlates it with existing data sets and measures a response based on imposed criteria. Theoretically, you could design a Babbage computer to do the same thing.

    AI, as in Artificial Intelligence, would take the input data, evaluate it to decide whether or not additional data was needed, go fetch any Personnel info available, muse a bit about how numbers could not actually measure a human being's worth, and put a note in the final evaluation saying "Check next semester's results", or something like that.

    If it were truly intelligent, it might also request an interview with the teacher in order to better evaluate that person on an individual level. It might require watching videos of her class, in order to better evaluate her teaching skills in situ. In other words, an actual AI would evaluate her, not just a bunch of numbers.

    We don't have AI. Stop using the word.

    1. Anonymous Coward
      Anonymous Coward

      Re: Can we stop using the term AI please ?

      +100

      The marketing wonks have grabbed that term for their own use (it carries the implication that what is produced by the expert system is somehow correct).

    2. Ken Hagan Gold badge

      Re: Can we stop using the term AI please ?

      "We don't have AI. Stop using the word."

      I sympathise, and have posted similarly in the past, but those two sentences don't actually conflict.

      Yes, we don't have AI, but that doesn't necessarily mean people should stop using the word. "AI" and "algorithms" and "machine learning" have (in certain contexts) become pretty accurate markers for "You can stop reading now, unless you are really bored and enjoy a good laugh."


  7. sebt
    Stop

    Bunch of snake-oil salesmen

    "With algorithms increasingly making key decisions about our lives, it’s important not only to be properly represented in the data they’re considering, but to understand how they’re reaching their conclusions."

    No, that's not enough. The only thing that will make algorithmic decision-making in these areas acceptable is a special kind of algorithm. This kind of algorithm would not only be open and understandable. It would also be able to explain how and why it came to a particular decision. And over and above that, it would be able to _take responsibility_ for that decision.

    There are approximately 4 billion of these algorithms moving about on the planet. (Slightly fewer, OK, if you exclude children, the senile and those with debilitating mental illnesses). And we already have heuristics, albeit imperfect ones, to select the most able of these algorithms and empower them to make decisions about sentencing, credit and so on.

    What benefit is supposed to come from cracking this problem - the hardest AI problem of all? When we have perfectly good techniques to do these jobs already?

    The answer is that the whole project is intellectually dishonest from top to bottom. For example, it's not actually trying to crack this hard AI problem at all - while simultaneously (and inconsistently) claiming that these algorithms can do the job not just as well as, but better than, the human equivalent.

    It has no aim except to contribute to the general contemporary deskilling and disempowerment of humans, while making as much money as possible for the charlatans who seem to be able to pull the wool over the rubes' eyes sickeningly easily.

  8. amanfromMars 1 Silver badge

    Well, well, well ..... whoever would have a'thunk it?

    Hmmmm :-)

    For all and any of you who be not thinking, and venturing that amfM1 knows not what needs to be shared and talked about, ponder on the similarity towards singularity between what

    was said on 8 May 2017 at 09:01 by Danny Bradbury of El Reg, ….Algorithms are calling the shots these days, and they may not be as impartial as you thought. …. and the alien version, voiced Mon 8 May 08:11 …….And IT has a SMARTR Mind of its own and AIdDevelopments and The Almighty HyperRadioProActive Algorithm ensures and assures and insures the Future against the Failures of the Past perverting the Present. …. Allow me to FTFY, AC

    Spooky eh, and with both of us not wrong is the power of the force unleashed not simply doubled but massively squared. With three in the crowd is its energy cubed. Very soon is almighty not an exaggeration.

    1. Naselus

      Re: Well, well, well ..... whoever would have a'thunk it?

      Oddly beautiful.

  9. Anonymous Coward
    Anonymous Coward

    If we can't get bias out of society, how do we get it out of algorithms?

    It could be argued that the reason many people would refer to an unknown computer programmer as 'he', or assume a gang member arrested in a low-income neighborhood is black, is the same phenomenon. Our biases are a product of our exposure to information, so unless algorithms can be given a source of information other than the real world, I'm not sure how easy this will be to solve.

    Rather than trying to feed an algorithm PC-approved cleansed data, maybe it should be taught about bias. Oh wait, that would require these "AIs" to actually have some artificial intelligence, instead of just being large relational databases with clever input methodology.

  10. allthecoolshortnamesweretaken

    "More important still, is the ability to inquire should you fall through the algorithmic net."

    Sure. Once you get past the algorithm that decides whether you are eligible to inquire / appeal or not, you're good.

  11. evilhippo

    So this seems to be suggesting that the problem with algorithms is that they are not reflecting the *correct* political biases? That they might actually reflect reality in unpalatable ways, for example by daring to notice that in the real world fewer women are interested in STEM, for perfectly understandable reasons of preference?

  12. Anonymous Coward
    Anonymous Coward

    Who decides?

    Here are a few concepts where human beings can't agree on a definition:

    - "rich"

    - "beautiful"

    - "fair" (as in even-handed between cases)

    So if the humans can't agree about reasonable definitions, why should we believe that computer programmers and computers can assess these concepts "correctly"?

    1. sebt
      Flame

      Re: Who decides?

      You're not getting it - you're insisting that "rich", "beautiful" and "fair" have some (however fuzzy) accepted definition, which most people would agree on, while arguing endlessly about the details. How 20th C! Get ready for some "disruption", and get with the new definitions:

      Rich: Us, who manufacture this pile-of-shit expert-system software wearing AI drag. You, possibly, if you're in a position of power and decide to be nice to us.

      Beautiful: Don't use this word. There's a startup called Beautfl, backed by a bunch of ravening VCs, and they now own their brandname, all words vaguely similar to it, all equivalents in all world languages, and all words vaguely similar to _those_. You'll be hearing from their lawyers shortly.

      Fair: Whatever helps us sell our shonky software. See Rich.

    2. katgod

      Re: Who decides?

      The beauty of language is that it is vague, which is why it is very hard to determine very precisely what anything is without a bunch of numbers - although it is difficult, if not impossible, to put any emotional word into numbers. When you read a book you can often make your own beautiful vision of the words, but when someone else puts the words into pictures it may no longer resonate with you. I can say "you" without knowing anything about you - can a computer ever grasp you as a person? If you're like me, anyway, I have a pretty good idea of you.

  13. Tom Paine

    Paradox of Automation

    Isn't this just another (quite important) example of the Paradox of Automation?

    https://en.wikipedia.org/wiki/Automation#Paradox_of_Automation

    Most aeroplanes are mostly flown by computers these days. When the autopilot fails, the human pilots can't cope. I was absolutely gobsmacked to learn the Air France flight crew who failed to recover from a stall at, what was it, 35,000 feet? -- had never practiced stall recovery in training *or even in a simulator*. Result, it fell out of the sky like a brick, killing hundreds of people in what should have been a completely survivable incident.

    (You can't get a PPL without lots of practice at that situation, but then Cessnas and Pipers and whatnot spend a far greater proportion of their flight time under manual control - certainly at the low end, anyway; I imagine light jets and the like have extensive automation.)

    1. katgod

      Re: Paradox of Automation

      As a pilot you have to trust your instrumentation, because the head can easily be fooled by what it feels to be true. So when some of your instruments are lying to you and others are telling the truth - which is what I read happened on this flight - things can get very dicey. Look at all the test pilots who have been killed: it is not because they are dumb or lack experience, it is because if you get into enough bad situations the odds are that one of them will get you. Could a different pilot or co-pilot have gotten out of this? Possibly. If they had never practiced stall recovery that would be bad, but I have a hard time believing it without knowing the source of that information.

  14. Phukov Andigh Bronze badge

    it's not the machines that have the bias.

    the "algorithm" is the EXCUSE. it's what Facebook falls back on when it conveniently censors things its owners are on record opposing. it's the excuse given for, as the article said, rejecting credit. Its the excuse that rejects qualified applicants because the employers have someone in mind already.

    The algorithm is merely a REFLECTION of the policies that its owners explicitly desire.

    Because if the algorithm did NOT act as its owners wanted, it would not make it past testing, since it would exhibit "undesirable" performance - or the system would be "modified" as soon as "undesirable" behavior was observed and reported.

    "Algorithm" is a boogeyman and scapegoat term nowadays, because most people simply don't understand how algorithms actually work and are actually deployed. And there's no representation or defense for the poor system blamed for whatever scandal or outrage is brewing.

  15. Rather Notsay

    Duh

    Computers are rational: that's why they're not leftists.

  16. Anonymous Coward
    Anonymous Coward

    Northpointe's Biggest Error

    Besides being mindless bigots, of course.

    Even if dark skin correlates with higher recidivism, it's irrelevant. Does having extra melanin actually cause you to be criminal? This is correlation without causation.

    Economic and social disadvantages may well be more common among slave descendants than among others, I dunno. Such factors, however, are also common among other groups. But even using this, which may well be causation, would still be unjust. This particular person may not be behaving badly at all. It is unjust that the computer should put this person into a group it defines and then penalize him for being in that group.
