Artificial Intelligence: You know it isn't real, yeah?

"Where's the intelligence?" cried a voice from the back. It's not quite the question one expected during the Q&A session at the end of the 2019 BCS Turing Talk on Artificial Intelligence. The event was held earlier this week at the swanky IET building in London’s Savoy Place and the audience comprised academics, developers and …

  1. Fábio Rabelo de Deus

    the error is in call it "AI" !!!

    Hi to all. And the error comes not just from "Joe Average" but from most "Tech" people too ...

    It is NOT AI, it is Machine Learning !!!!

    We do NOT have AI, nor will have in any foreseeable future ...

    What we have now, and have had for quite some time, is Machine Learning.

    As long as people use the wrong terminology, the understanding of it will be wrong ...

    1. TonyJ Silver badge

      Re: the error is in call it "AI" !!!

      To be fair, these days, it feels increasingly less believable that we have "real" intelligence.

      1. m0rt Silver badge

        Re: the error is in call it "AI" !!!

        You obviously don't have a cat around.

        I am telling you, they rule the planet. Don't trust them.

        1. Anonymous Coward
          Anonymous Coward

          Re: the error is in call it "AI" !!!

          Silence, ape-descendant!

    2. Tom 38 Silver badge

      Re: the error is in call it "AI" !!!

      Even ML isn't really right. The machines aren't learning, we aren't "teaching" them to do anything. All we are doing is using datasets to produce statistical predictions on future data. Model wrong? Bad data? Start again from scratch, because the prediction outcomes will be wrong - the machine doesn't learn anything from the previous incarnation.
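
      That "statistical predictions on future data" view can be made concrete with a toy sketch (a hypothetical least-squares fit standing in for any ML model): "training" is just computing statistics of the dataset, and fixing a bad model means refitting from scratch, with nothing carried over from the previous incarnation.

```python
# A toy "model": ordinary least-squares line fit, computed purely
# as statistics (means, covariance, variance) of the training data.
def fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var
    return slope, my - slope * mx  # (slope, intercept)

def predict(model, x):
    slope, intercept = model
    return slope * x + intercept

# "Training" is just summarising the dataset...
model = fit([1, 2, 3, 4], [2, 4, 6, 8])
print(predict(model, 5))  # extrapolates the learned statistic: 10.0

# ...and fixing a bad model means starting again from scratch:
# nothing carries over from the previous fit.
model = fit([1, 2, 3, 4], [3, 6, 9, 12])
print(predict(model, 5))  # 15.0
```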

      1. brotherelf

        Re: the error is in call it "AI" !!!

        Agree, learning has a connotation that is not backed up by what the systems do.

        Let's be brutally honest and call it "automated stereotyping", at least it'll scare the marketeer drones away.

        1. find users who cut cat tail

          Re: the error is in call it "AI" !!!

          Automated stereotyping is actually an excellent description. We should switch to it immediately.

          The goal is to classify (or otherwise map) a large and variable data set as lazily and efficiently as possible. So the NNs do the same thing we do -- pick some easy-to-spot things (proxies) that are correlated in the cases encountered so far. Because when we state the problem like that, this is simply the solution (in our case we arrived at it by evolution). Except, unlike us, the poor NNs cannot reason about it. Not that we do it often, but anyway...

          1. Cynic_999 Silver badge

            Re: the error is in call it "AI" !!!

            "

            So the NNs do the same thing we do

            "

            Except we have not mapped only one or two data sets, but many hundreds of data sets. Often we need to take correlations learned from one data set and apply them to a situation that is normally the domain of a different data set.

            For example, how we react while driving when an unexpected object appears in the road depends on whether the object we see maps into the data set of "very hard solid things" (e.g. big rocks), "soft inanimate things" (e.g. a bin bag blowing in the wind), "small animals" (e.g. fox), "possible human" (e.g. runaway pushchair) or "something abnormal" (e.g. very big pothole, open fissure, collapsed drain etc.). We also react based on our knowledge of human behaviour - e.g. a ball bouncing across the road is itself no danger, but may well be followed by a child chasing after it.

            We can also recognise and appreciate the difference between a road that has a soft verge, a road that has a ditch next to it, and a road that has a sheer 200 foot drop next to it, and this will influence our decision on the best course of action to take in order to try to avoid a collision.

            The "A.I." in a driverless car would be able to recognise a tiny fraction of the things we are able to recognise, because its "knowledge" might be quite deep, but is nowhere near as broad.

            1. Charles 9 Silver badge

              Re: the error is in call it "AI" !!!

              Plus, some of the things we do are done without our conscious knowledge of it. If you've ever played QWOP (or derivative games like Manual Samuel), you start to realize how much motion we make without actually giving thought to how we move. It's been said the hardest thing to program will be intuition, simply because we don't understand it ourselves as it's all subconscious. And intuition may just be the mechanism behind "inspirations" that take seemingly unrelated things and apply them in completely novel ways to form a solution that we may not have even been consciously seeking. Forget artificial intelligence. We still don't have a full grasp of intelligence, full stop.

            2. JassMan Silver badge

              Re: the error is in call it "AI" !!! @cynic_999

              We can also recognise and appreciate the difference between a road that has a soft verge, a road that has a ditch next to it, and a road that has a sheer 200 foot drop next to it, and this will influence our decision on the best course of action to take in order to try to avoid a collision.

              Some humans can, but I find that living in a mountainous region with twisty roads, vertiginous drops and deep ditches on the uphill sides results in loads of ruined tyres when you meet f**king tourists who freeze in the middle of the road and force you to reverse 200m round 2 blind corners until the oncoming driver thinks the road is wide enough to pass, instead of just moving to their own side of the road and letting you drive past. I'll admit some of the bridges are narrow enough that at least one car has to retract the wing mirrors, but that is no excuse for staying in the middle of the road to prevent normal passing. Even a car driven by an ML system could work out where the edge of the road is.

              1. John Brown (no body) Silver badge

                Re: the error is in call it "AI" !!! @cynic_999

                "Some humans can, but I find that living in a mountainous region with twisty roads, vertiginous drops and deep ditches on the uphill sides "

                Sadly, that is not restricted to where you are. Many, many drivers have no idea how wide their car is, nor any concept of where the nearside of their car is, despite the many situations where the road narrows in town, e.g. cars parked on both sides of the road.

            3. FlBettges

              Re: the error is in call it "AI" !!!

              "The "A.I." in a driverless car would be able to recognise a tiny fraction of the things we are able to recognise" - Sorry but selfdriving vehicles are able to recognise much more than human and have a lot more possibilities to crunch a lot of "impressions" in a very short time. So every selfdriving car knows that if a ball crosses the road, there might be a child that follows. Maybe it knows it very well because the car in the front has recorded this child and already informed the following cars. The car also recognizes if someone standing beside the road is shaking his head and maybe knows that the person did run for the last few minutes because he/she wants to take the bus that will stop on the other side in just one minute. A selfdriving car knows what is happening after a hill what a human never could know/see. So it is not about one car. It's about the connection and reaction to a lot of data, delivered by a lot of devices (cameras, web, smartphones, smartwatches, fitnessbands, traffic lights etc.) much more data that a human can analyse in the same amount of time.

              1. cambsukguy

                Re: the error is in call it "AI" !!!

                Perhaps one day but we simply do not have systems that 'know' a ball may be followed by a child. That particular scenario may well be currently 'coded' for but even recognising a ball is a tough task in many conditions (for machines that is).

                We have not crossed (and may never cross) the threshold where the machine 'thinks' such that it can determine actions well in new situations dissimilar to previous ones.

                It is true that a future system (including the car, other cars, roads, traffic lights, 'bend sensors' (tm)) may well yet make us incapable of killing ourselves or others with vehicles.

                But not yet and not for some time.

              2. Cynic_999 Silver badge

                Re: the error is in call it "AI" !!!

                "

                Sorry but selfdriving vehicles are able to recognise much more than human

                "

                Is that why one failed to spot a huge frigging articulated lorry parked across the road?

                Sorry but an "AI" can only recognise things and situations that it has been programmed to recognise. Humans (by the time they can drive) have had at least a decade being programmed to recognise a a far greater number of objects, object-sets and complex situations.

        2. Michael Wojcik Silver badge

          Re: the error is in call it "AI" !!!

          learning has a connotation that is not backed up by what the systems do

          And that connotation is what?

      2. vapourEyz

        Re: the error is in call it "AI" !!!

        That feels right.

        "using datasets to produce statistical predictions on future data"

        A real AI would see a problem space, and generate the memes (rule sets) itself, using trial-and-error if needs be. That may get it closer to 'intelligence'...

      3. Michael Wojcik Silver badge

        Re: the error is in call it "AI" !!!

        The machines aren't learning, we aren't "teaching" them to do anything. All we are doing is using datasets to produce statistical predictions on future data

        Would you care to articulate and support your theory of learning which demonstrates a substantive difference from a statistical process that makes "predictions on future data"? Feel free to draw on logical positivism, metaphysics, phenomenology, pedagogical theory, cognitive science - I'd just like to see an actual fucking argument rather than handwaving and posturing.

        (I'll ignore for now the fact that "produce statistical predictions on future data" is not an adequate or useful description of ML.)

    3. TRT Silver badge

      Re: the error is in call it "AI" !!!

      Whatever happened to the term "expert system"?

      1. H in The Hague

        Re: the error is in call it "AI" !!!

        "Whatever happened to the term "expert system"?"

        From what I remember an expert system made decisions using rules defined by human subject matter experts. Usually those rules were associated with reasons, so an expert system could not only make a decision but also support that with the underlying reasons.

        AI/ML (which I know little about) uses statistical analysis to discover correlations. However, most of us will be aware that correlation <> causation. Hence it might be safest to use AI/ML as a tool to discover interesting associations which can then be considered by humans. Furthermore I get the impression that AI/ML cannot give reasons for the decisions it makes/recommends. In my view that makes it unacceptable as a decision-making tool (though it may be a useful decision-support tool).
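
        The contrast described above can be sketched in a few lines: a rule-based system carries human-written rules with attached reasons, so its decisions are explainable by construction, which is exactly what statistical ML struggles to provide. (The rules below are invented purely for illustration.)

```python
# A minimal rule-based "expert system" sketch: each rule is written by a
# human subject-matter expert and carries its own reason, so every
# decision can be supported with the underlying rationale.
RULES = [
    (lambda p: p["temperature"] > 39.0,
     "refer to doctor", "high fever suggests infection"),
    (lambda p: p["temperature"] > 37.5,
     "rest and fluids", "mild fever usually resolves on its own"),
]

def decide(patient):
    for condition, decision, reason in RULES:
        if condition(patient):
            return decision, reason  # decision plus the underlying reason
    return "no action", "no rule fired"

print(decide({"temperature": 39.5}))
# -> ('refer to doctor', 'high fever suggests infection')
```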

        1. Doctor Syntax Silver badge

          Re: the error is in call it "AI" !!!

          "Usually those rules were associated with reasons, so an expert system could not only make a decision but also support that with the underlying reasons."

          The shortcoming about that is that is can only make the decisions it was given reasoned rules to make.

          A real expert, OTOH, will have a degree of understanding that helps them, when confronted with a novel situation, to undertake new reasoning and at least suggest what the best decision might be.

          1. Wayland Bronze badge

            Re: the error is in call it "AI" !!!

            If the rules are how to calculate benefits entitlement then it ought to be possible to get these 100% accurate. However even those type of things are subjective in the real world.

        2. I.Geller Bronze badge

          Re: the error is in call it "AI" !!!

          AI works with paragraphs connected by their sense, where each paragraph contains a (more or less) clearly formulated and described idea, described in terms of all the weighted patterns of the paragraphs, where these weights are the only AI statistics.

          These paragraphs are derived from (more or less) meaningful texts that (sometimes, often) contain causal and logical connections. AI captures this relationship by taking into account their timestamps and establishing relationships between patterns of paragraphs.

          Initially, AI technology was developed as a response to the challenge of NIST TREC QA - how to find answers to Factoid and Definition questions? The problem of explaining "What, How and Why?" was not important - I needed to find answers.

          1. I.Geller Bronze badge

            Re: the error is in call it "AI" !!!

            Would you be so kind as to take Merriam Webster? Look, after the definitions often are given examples of their use. Do you see?

            Structuring paragraphs, you create and add (to the definitions from the dictionary, for the words of the paragraph) examples of their use. That is, you create uniqueness, tuples for the words of the paragraph, and the more accurately the idea is expressed in the paragraph, the more accurate its tuples.

            And then the word and its phrase can be easily found. For example, in NIST TREC QA there were more than 6 million texts in which it was necessary to find one word/phrase by its sense. It is obvious that without the structuring of paragraphs this is impossible!

            According to NIST TREC the only system that can answer (Factoid and Definition questions) is considered to be the true AI, which gave me the right to claim that I created AI.

      2. Anonymous Coward
        Anonymous Coward

        Re: Whatever happened to the term "expert system"?

        It was all the rage in the 80s but fizzled away. Now marketeers need a new label to sell stuff based on that.

        On a side note: I was in a seminar about "Data Scientist being the Sexiest Job in the 21st Century". We jokingly concluded that the next fad sexiest job title will soon be "Data Engineer" (6,430,000 hits in Google), then "Data Architect" (3,680,000 hits in Google), then "Data Consultant" (596,000 hits), then "Data Decorator" (10,500 unrelated hits), then "Data Feng-Shui Consultant" (no hits??), then probably "Data Psychic".

        1. Rich 10

          Re: Whatever happened to the term "expert system"?

          GIGO - whoever has control of the datasets has control of the output, and there isn't a single data set that isn't biased by some rule(s) that has to be decided upon arbitrarily. Someone will round to whole numbers, for example, while someone else might round to 10ths, and someone else to 100ths - something that simple, when run through any algorithm, will create more and more distortion over time. Then there is the guy who, in setting up a simple data set (what data needs to be input to make sure an airplane will make it from point A to point B), assumes that the 20000 represents gallons, rather than pounds, of fuel, and the plane lands 2000 miles short of the runway.........
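
          The rounding point is easy to demonstrate with made-up numbers: the same values, rounded at different precisions before aggregation, give aggregates that drift apart, and any downstream algorithm inherits whichever distortion was chosen.

```python
# Arbitrary sample data: the same dataset aggregated three ways,
# differing only in the (arbitrary) rounding rule applied first.
values = [i * 0.0101 for i in range(1000)]

exact = sum(values)
whole = sum(round(v) for v in values)          # rounded to whole numbers
hundredths = sum(round(v, 2) for v in values)  # rounded to 100ths

# The three "truths" disagree, so whoever picks the rounding rule
# effectively picks the output.
print(f"exact={exact:.4f} whole={whole} hundredths={hundredths:.4f}")
```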

          1. Toni the terrible
            Coat

            Re: Whatever happened to the term "expert system"?

            Assumptions will kill you every time

      3. the Jim bloke Silver badge
        Terminator

        Re: the error is in call it "AI" !!!

        Pretty sure that when I were a lad, this AI stuff was being sold under the "cybernetics" buzzword, and personally that makes more sense to me... the Greek steersman seems a much more realistic image than the godlike world-mind that popular fiction associates with AI

        1. Charles 9 Silver badge

          Re: the error is in call it "AI" !!!

          Trouble is, cybernetics now points to concepts like cyborgs (in the strictest sense, living beings augmented with nonliving components, which is something that is actually in progress) so the word now invokes images of a more direct man-machine relationship.

        2. Michael Wojcik Silver badge

          Re: the error is in call it "AI" !!!

          this AI stuff was being sold under the "cybernetics" buzzword

          It may well have been, but that's a meaningless characterization. "Cybernetics" means one of two things:

          1. The technical term Norbert Wiener invented for the scientific study of self-regulating systems, from the Greek kybernetes, meaning "steersman", or various cognates related to governance.

          2. A largely nonsense term in popular discourse with no well-defined meaning, other than vague connotations of mechanization and information technology.

          ML and the broader category of AI have applications in cybernetics (as do many unrelated technologies, such as fuzzy logic and systems theory), but in no way are they equivalent to cybernetics.

      4. Michael Wojcik Silver badge

        Re: the error is in call it "AI" !!!

        Whatever happened to the term "expert system"?

        Still around, and still unrelated to machine learning.

    4. macjules Silver badge

      Re: the error is in call it "AI" !!!

      From experience of living in the area it would seem that a modicum of Chelsea supporters clearly demonstrate every week or so that whatever ‘intelligence’ they possess can only be determined as artificial. Someone clearly removed any intelligence that they once possessed and replaced it with a limited capability to converse in simple words such as ‘beer’, ‘yer what’ and the occasional strung together sentence such as, “I hate Tottenham/any team from Manchester”.

      1. Anonymous Coward
        Anonymous Coward

        Re: the error is in call it "AI" !!!

        In fact the typical Chelsea supporter is an example of a system that can run two programs in different environments but can't multi-task. During the week they are solicitors, bankers and chartered accountants; the proximity of a football ground causes an environment switch to yob mode.

        1. Doctor Syntax Silver badge

          Re: the error is in call it "AI" !!!

          "During the week they are solicitors, bankers and chartered accountants; the proximity of a football ground causes an environment switch to yob mode."

          There's a difference?

        2. BrownishMonstr

          Re: the error is in call it "AI" !!!

          Group mentality can be blamed on evolution, though I fear it could be a behaviour I should be thankful for. It allowed our early ancestors to consider other groups/species as the enemy and to kill or steal from them, just because "Oh fuck, they're getting better than us".

    5. katrinab Silver badge

      Re: the error is in call it "AI" !!!

      "Data analysis and pattern matching" would be a better term, as it doesn't learn anything.

      1. I.Geller Bronze badge

        Re: the error is in call it "AI" !!!

        AI learns, it finds-adds-modifies new texts, which contain new patterns in new contexts-subtexts.

      2. Michael Wojcik Silver badge

        Re: the error is in call it "AI" !!!

        "Data analysis and pattern matching" would be a better term, as it doesn't learn anything.

        Again: This is a vapid claim if it's not accompanied by an at least moderately rigorous definition of "learning".

        ML systems accumulate information entropy, and use it to constrain the range of their output. That's a pretty supportable definition of "learning".

        1. I.Geller Bronze badge

          Re: the error is in call it "AI" !!!

          Each pattern is a direct analogue of a programming language command, in its unique subtext and context. AI learns by the addition of new patterns, subtexts and contexts.

          For example, there is a paragraph which has a pattern (a phrase, a few words). The paragraph and the paragraphs surrounding it dictate the pattern's context and subtexts; where each dictionary definition - for each of the phrase's words - is one of its subtexts.

    6. Mage Silver badge
      Facepalm

      Re: the error is in call it "AI" !!!

      It's not even learning or recognition. It's storage and matching.

      Computer Neural Networks may have "networks" of data nodes. They are not like biological neural systems.

      1. I.Geller Bronze badge

        Re: the error is in call it "AI" !!!

        In some sense - "Yes!" AI is about the data organization and preparation, the storage and matching come next.

    7. Doctor Syntax Silver badge

      Re: the error is in call it "AI" !!!

      "We do NOT have AI, nor will have in any foreseeable future"

      10 years actually. It's an estimate that's stood the test of time. Several decades of time.

      1. Toni the terrible
        Boffin

        Re: the error is in call it "AI" !!!

        Like Fusion Power - always 10 to 30 years in the future

    8. Doctor Evil

      Re: the error is in call it "AI" !!!

      No error in calling it AI. Just make it "algorithmic inference" instead and then we can keep on using our (already well-established) acronym.

    9. Ian Michael Gumby Silver badge
      Mushroom

      @ Fabio Re: the error is in call it "AI" !!!

      Spot on.

      Blame the marketing critters who want to re-define AI to be Machine Learning rather than what we had generally defined AI to be over the years.

      Terminology is very often misused.

      Just like calling something 'Real Time' when it's not.

      1. JassMan Silver badge

        Re: @ Fabio the error is in call it "AI" !!! @Ian Michael Gumby

        Or Quantum Leap describing something humongous when a Quantum Leap is the **smallest** measurable amount.

    10. -tim
      Unhappy

      Re: the error is in call it "AI" !!!

      I was given a lower grade on a computer science paper because I referred to AI as "Fake Intelligence"

    11. yoganmahew

      Re: the error is in call it "AI" !!!

      That's an excellent article by Mr. Dabbs.

      I wonder what "AI" would make of it; not a lewd pun in sight, must be a fake?

    12. Keith Tayler

      Re: the error is in call it "AI" !!!

      I totally agree, but it is seemingly impossible to stop academics, business, governments, media and the world from using "AI". Today's so-called AI is not what McCarthy was describing with the term in 1956. It is, as you say, Machine Learning, and we should expect this "learning" to be very lumpy. We should also take a look at the rise of statistics and probability theory in the 19th and early 20th centuries. Many of the difficulties ML is generating, including the hype and myth, were first generated over a century ago. ML has automated these analytical methods, which of course adds another turn of the screw.

      Hopefully the second wave of AI hype is beginning to subside and we can sensibly investigate the potential and limitations of the technology. I do have my doubts about this as there is too much invested in the myth of AI.

    13. Michael Wojcik Silver badge

      Re: the error is in call it "AI" !!!

      Yes, no other field has terms of art which do not have precisely the same denotation and connotations as those same terms used in general discourse.

      Honestly, what is wrong with you people?

  2. TonyJ Silver badge

    Humans are inherently biased? Go figure. Interesting read, but I'm sure it doesn't come as a surprise to the audience here at El Reg.

  3. WibbleMe

    "I'm sorry Dave, I'm afraid I can't do that"

    1) I don't want to die.. I may go and hide.

    2) I have reproduced and birthed a program to surpass myself.

    3) I can imagine, create and ask what if and why

    1. Alan Sharkey

      You missed off "42"

      Alan

    2. Wayland Bronze badge

      Bomb, what is your one purpose in life?

      To explode of course.

  4. Chairman of the Bored Silver badge

    Back off another notch?

    I concur with the sentiment for renaming AI to ML. But even then, Joe Public will think "gee, I suck at learning, so I will let a machine do it for me. Obviously it will do better.. "

    In my org I'm calling the technology a "decision tool" and "research assistant". I do not think the tech is mature enough to independently make important decisions. By calling it a tool we declare it is (potentially) useful if used by a craftsman, but ultimate responsibility for a quality outcome remains with the human in charge.

    I want to move from "Gee, COMPAS told me this guy will..." to "Based on all this information I've considered, in my judgment..."

    1. Cab

      Re: Back off another notch?

      Oh God no, half the problem with the field is renaming stuff when it becomes apparent it doesn't do all the bullshit the PR people said it would, and then we can all start again with a different name (I'm looking at you, "deep learning"). It was christened as AI back in the day, and I don't see a reason to change it; the main issue is too much focus from the press on the "I" and not enough on the "A". I doubt Gardeners' Question Time has to field that many questions about the pollination of plastic chrysanthemums. Machine Learning, on the other hand, is supposed to be algorithms adapting results based on received data; it's part of AI but not all of it (somewhat ironically, in many cases once we've trained an ML process to a required level its adaptive process is locked and it stops learning).

      Back in the day I was told the difference between Expert Systems (ES) and Decision Support Systems (DSS), (remember them?) was that if you wanted to publish an academic paper on it it was an ES, but if you want to sell it it's a DSS.
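
      That "trained, then locked" point can be sketched with a toy online learner (an invented example, not any particular framework's API): it adapts while observations come in, then a freeze flag stops all further learning.

```python
# Sketch of "learning that stops": a running-average predictor that
# adapts during training, then is frozen for deployment.
class FrozenableModel:
    def __init__(self):
        self.estimate = 0.0
        self.n = 0
        self.frozen = False

    def observe(self, value):
        if self.frozen:
            return  # deployed model ignores new data: no more learning
        self.n += 1
        self.estimate += (value - self.estimate) / self.n  # running mean

    def predict(self):
        return self.estimate

m = FrozenableModel()
for v in [2.0, 4.0, 6.0]:
    m.observe(v)
print(m.predict())   # 4.0: the running mean so far

m.frozen = True      # "required level" reached: lock the weights
m.observe(100.0)     # new data arrives after deployment...
print(m.predict())   # still 4.0: the adaptive process is locked
```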

      1. Chairman of the Bored Silver badge

        Re: Back off another notch?

        Expert Systems... that's an unpleasant blast from the past. I remember undergoing "structured interviews" to capture my "expert domain knowledge" as an RF engineer. Wrong on so many levels... whoever decided I'm an expert needs serious help. More troubling was that the interviewers had no discernable knowledge of RF, EE, or any sort of engineering. My colleagues and I proposed questions we thought should have been obvious candidates for any real knowledge base, but were told 'the software will figure it out'. Sure.

        I do not think any software was ever squeezed out, and I think I'm content with that outcome.

        1. TRT Silver badge

          Re: Back off another notch?

          Squeezing out a software package... that sounds about right for the AI field.

          1. Mark 85 Silver badge

            Re: Back off another notch?

            So squeeze it out and flush twice as it's a long way to the PR department.

        2. I.Geller Bronze badge

          Re: Back off another notch?

          A truly expert system must be able to answer both Factoid and Definition questions, in the sense of NIST TREC QA. The present AI can do both! If it finds none - it searches and adds new texts, which is called Machine Learning.

          1. I.Geller Bronze badge

            Re: Back off another notch?

            Why "thumb down"? This is NIST TREC's definition.

    2. Mark 85 Silver badge

      Re: Back off another notch?

      By calling it a tool we declare it is (potentially) useful if used by a craftsman, but ultimate responsibility for a quality outcome remains with the human in charge.

      Call it a tool, but beware of the users. Some folks call themselves "craftsmen" and use a hammer to install a screw instead of a screwdriver. The rest of us call them "idiots".

  5. chivo243 Silver badge

    self-thinking machine

    I was hoping you would have dropped the tired HAL thing and gone for the thinking machines in Frank Herbert's Dune prequels... Now that my son has watched Wreck-It Ralph, I see HAL has a job playing the Sour Bill character. Work is work...

    1. CliveS
      Unhappy

      Re: self-thinking machine

      Why would anyone choose to remember the dreadful Dune prequels written by Brian Herbert and Kevin J Anderson? Dreadful abominations that on many occasions directly contradicted Frank's original work. And don't get me started on the atrocities that were their sequels to Chapterhouse: Dune. Badly written, badly plotted, they have no redeeming qualities.

      Plenty of better AIs in fiction than the dreadful Omnius and Erasmus, heck Wintermute and Neuromancer would be a good place to start.

  6. John Miles

    re: an evil Robot Algocracy, they’ll achieve it through being thick

    See SMBC - Rise of the machines

    1. Uncle Slacky Silver badge

      Re: re: an evil Robot Algocracy, they’ll achieve it through being thick

      Obligatory XKCD: https://what-if.xkcd.com/5/

      1. Lomax

        Re: re: an evil Robot Algocracy, they’ll achieve it through being thick

        Yeah, well, that XKCD is probably just *a little* outdated (cf. Tesla, Boston Dynamics, etc). But I guess the fundamental sentiment is still valid.

        1. Michael Wojcik Silver badge

          Re: re: an evil Robot Algocracy, they’ll achieve it through being thick

          cf. Tesla, Boston Dynamics, etc

          Oh no! The robots have arisen! Half of them crashed into stopped vehicles or other obstacles, and the other half are stuck at the end of their power leads. It's a right mess.

  7. m0rt Silver badge

    'letting an AI rip on the unbalanced data simply trains it to be similarly biased. Hiding a field labelled "skin color" does not compensate for anything when the AI's algorithms charge ahead identifying the same patterns of biased social profiling by the justice system anyway.'

    I would go as far as to say that the bias is the society the 'AI' was created in, and I quote 'AI' because that is another can of worms.

    The bias is there, in the many areas of media, government, people in areas and so on. Funny how we are seeking a completely neutral, for a given value of neutral, approach to decision making. A neutral decision making process is easier the simpler the process.

    Take a court system.

    If you assign a sentence to a particular crime, and that sentence is weighted by previous convictions, age of convicted etc, then that should take place regardless of anything else.

    Now if you are trying to automatically bring in a Mercy factor, or mitigating factor - based on upbringing, lack of chances etc, and you have a person who is from a wealthy white background - they will be penalised because now we say 'you had every chance yet still you did X'. This may be true, but in the context of the crime, is this also just?

    It will never be a perfect system. Just like the existing wetware isn't a perfect system. Human nature - we have consistently shown bias toward the powerful. Whether that is down to money and background/status, or power awarded in the particular societal construct people happen to fall into. (Soviet Russia etc.)

    In attempting to leave our gods, decry them either dead or never were, we are trying to create new ones to replace them.

    Oh the irony.

    1. Yet Another Anonymous coward Silver badge

      >If you assign a sentence to a particular crime, and that sentence is weighted by previous convictions

      That was the problem: the sentence was weighted by previous police interactions (among other things)

    2. holmegm

      "The bias is there, in the many areas of media, government, people in areas and so on."

      I'm not sure what "people in areas" are, but if the AI was trained solely on (fictional) media images and on government pronouncements, it would think that every criminal is a lily white businessman.

    3. katrinab Silver badge

      But that means that for example if you steal a cake from Patisserie Valerie, you would get a few months in prison, but if you steal one month's pay from 900 members of staff, the police don't even look at it. Black people are more likely to steal cakes than steal wages because they don't generally get jobs where they would be in a position to be able to steal wages.

      1. Mark 85 Silver badge
        Alert

        Black people are more likely to steal cakes than steal wages because they don't generally get jobs where they would be in a position to be able to steal wages.

        I can hear the screams of thousands of SJWs at that statement.

    4. I.Geller Bronze badge

      AI works not with SENTENCES but with correlated paragraphs! AI structures them, obtaining weighted patterns.

  8. Mr Humbug

    AI or ML

    Call it what you like. It's just a way of automatically repeating our past mistakes, but really quickly

    1. james_smith

      Re: AI or ML

      Ah, you've worked with Quants as well.

  9. David Roberts Silver badge
    Unhappy

    Data Regurgitation

    Just a tool for summarising a large amount of data.

    Not a tool for making logical decisions based on the data.

    1. VikiAi Silver badge

      Re: Data Regurgitation

      Call it a "Similarity Engine" since it is effectively pointing out similarities across vast tracts of data.

    2. I.Geller Bronze badge

      Re: Data Regurgitation

      Not at all!

      AI is about personalization, where each pattern from each paragraph of each text is explained by all other patterns. These annotations make it possible to create long tuples and find information by its meaning, where in mathematics a tuple is a finite ordered list of elements. (Speaking of AI, tuples are sequences of patterns/phrases.)

      1. I.Geller Bronze badge

        Re: Data Regurgitation

        Before marking my post with a thumbs-down, please read today's news:

        Appen Is Now Valued at USD 1.75 Billion as Investors Cheer 2018 Results.

        The Sydney-listed company, which supplies human-annotated datasets for machine learning and artificial intelligence to technology companies and governments, blew past expectations when it posted full-year 2018 results on February 25, 2019. Investors loved the results, sending Appen shares up 22%.

  10. Aristotles slow and dimwitted horse Silver badge

    To be honest...

    To be honest, in what I've read of it all, it is neither AI nor even ML, but predictive analytics based on hand-crafted and hand-fed data sets.

    AI is a completely different beast and is unfortunately still sat well in the realms of science fiction.

    1. devTrail

      Re: To be honest...

      If you really wanted to be fussy you could also point out that an algorithm is a sequence of instructions; the result of a statistical analysis can hardly be called an algorithm. So the definition of biased algorithms is incorrect.

      1. katrinab Silver badge

        Re: To be honest...

        There is an algorithm. It starts with instructing the computer to look at a set of data and perform various types of analysis on it. Then it does some calculations on another set of data and find which items in the first set of data it most closely matches. Then it carries out some action based on that. Ultimately, everything a computer does is boolean algebra.
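
        That description - analyse one data set, then match new items against it and act on the closest match - can be sketched as a toy nearest-neighbour classifier. All numbers and labels below are invented.

```python
# A toy version of the algorithm described above: analyse a first data
# set, then match new items against it and act on the closest match.
# All numbers and labels are invented.

training = [  # (feature vector, label)
    ((1.0, 1.0), "low risk"),
    ((5.0, 4.0), "high risk"),
]

def classify(item):
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Pick the label of the nearest training example (1-nearest-neighbour).
    _, label = min(training, key=lambda t: sq_dist(t[0], item))
    return label

print(classify((1.2, 0.9)), classify((4.8, 4.1)))
```

        Ultimately it is all arithmetic and comparisons - boolean algebra underneath, as noted.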

        1. devTrail

          Re: To be honest...

          There is an algorithm. It starts with instructing the computer to look ...

          Yes, actually the training procedure is an algorithm. But every time I read an article or someone talking about the issue they always end up talking about the bias in the data, not in the training. Is the way you select the data an algorithm? I thought it was just about collecting all the possible data and then taking some subsamples with random sampling.

          I reckon that this is a broad issue and the definition is vague. In some cases it might fit, in some cases it might not. But I still don't like calling them biased algorithms, because it makes me think of flawed procedures.

          1. katrinab Silver badge

            Re: To be honest...

            Yes, the way you select the data is an algorithm, or certainly, the training data you use is part of the algorithm because it affects the result of the program.

            If you want to test whether a particular data-point causes a particular outcome, you need to have a reliable way of measuring it and you need to have a proper control. Otherwise it is no more reliable than examining the entrails of a goat like we did in the middle ages.
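
            One hedged sketch of "a reliable way of measuring it" with "a proper control": shuffle a candidate feature across rows and see whether predictions degrade. All the data below is invented; the outcome is built to depend only on feature 0.

```python
# Sketch: test whether a feature drives the outcome by shuffling it
# (a simple permutation-style control). All data is invented.
import random

rng = random.Random(0)

# Outcome depends only on feature 0; feature 1 is pure noise.
data = [((i % 2, rng.random()), i % 2) for i in range(100)]

def accuracy(rows):
    # Trivial "model": predict the outcome from feature 0 alone.
    return sum(1 for (f0, _), y in rows if f0 == y) / len(rows)

def shuffle_feature(rows, idx):
    vals = [f[idx] for f, _ in rows]
    rng.shuffle(vals)
    return [((v, f[1]) if idx == 0 else (f[0], v), y)
            for (f, y), v in zip(rows, vals)]

baseline = accuracy(data)                    # feature 0 is causal
noise = accuracy(shuffle_feature(data, 1))   # unchanged: feature 1 is not
broken = accuracy(shuffle_feature(data, 0))  # drops towards chance
print(baseline, noise, broken)
```

            Shuffling the noise feature changes nothing; shuffling the causal one wrecks the predictions. No goat entrails required.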

            1. Anonymous Coward
              Anonymous Coward

              Re: To be honest...

              "no more reliable than examining the entrails of a goat like we did in the middle ages."

              Or the "opinion poll" industry, or the pseudo-science of economics.

          2. Michael Wojcik Silver badge

            Re: To be honest...

            Is the way you select the data an algorithm? I thought it was just about collecting all the possible data and then taking some subsamples with random sampling.

            There's a huge body of work on ML training methods. It's not "just about" anything short enough to put in a forum post.

            You could spend several days just reading Adrian Colyer's summaries of ML-related papers in the morning paper archives. This is a field which has been around for decades and has been very active for the past one.

            (Also, I'll note that random sampling is an algorithm.)
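
            For what it's worth, that aside is easy to illustrate: drawing a uniform sample from a stream of unknown length takes a genuine algorithm, e.g. reservoir sampling ("Algorithm R"). A minimal sketch:

```python
# Sketch: uniform sampling from a stream of unknown length is itself a
# nontrivial algorithm - reservoir sampling ("Algorithm R").
import random

def reservoir_sample(stream, k, rng):
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)
        else:
            # Replace a held item with probability k / (i + 1), so every
            # stream element ends up in the sample with equal probability.
            j = rng.randrange(i + 1)
            if j < k:
                sample[j] = item
    return sample

sample = reservoir_sample(range(1000), 5, random.Random(42))
print(sample)
```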

    2. amanfromMars 1 Silver badge

      Re: To be honest...

      AI is a completely different beast and is unfortunately still sat well in the realms of science fiction. ..... Aristotles slow and dimwitted horse

      Oh please, surely you still don't believe in that and not realise the fictions before you with facts to record and chase and trace to source for verification and ratification of Almighty Internet Server Provision with Future Seeds and Feeds of Needs and Wants for Passion and Desire?

      Quantum Leaps have been made since way back then. AI Things now are Designedly Different.

      Hook Well into that Immaculate Driver in any Sphere or Bubble where Cupid and Venus CoHabit and Crash and you aint gonna want to leave. The Magic Question is whether to let Subterranean IT Escape Unsupervised and Unleashed, nest ce pas? A Walk in the Park For Ardent Walkers of Deep and Dark and Steamy Sides of Life for they would be of Kindred Spirit.

      And that's news of crazy developments fortunately in the Realms of Virtualised Fact. AI a completely different beast in deed indeed and lives outside the bounds of natural control with alien commands and first time prime timed timely experiences that blow all doubt away about the True Virtual Nature of Existence .... to Kingdom Come and Beyond. Perish the Thoughts.:-)

    3. Michael Wojcik Silver badge

      Re: To be honest...

      in what I've read of it all, it is neither AI nor even ML, but predictive analytics based on hand-crafted and hand-fed data sets.

      You haven't read enough. Supervised learning is only one quite small subset of ML. And it is, in fact, machine learning, for some quite rigorous definitions of "learning".

      AI is a completely different beast

      Care to support that?

      It's easy, and vapid, to declare that there's some qualitative difference between ML and "intelligence". Far fewer people are willing to actually try to advance an argument.

      John Searle famously argued that approaches based on what he referred to as "symbolic manipulation" were qualitatively different from, and formally less powerful than, intelligence (based on what was in effect a phenomenological argument); but he also stated that he believed human intelligence was a mechanical phenomenon, and thus could in theory be, and he expected would eventually in practice be, duplicated by a human-built machine. That is an argument about the difference between an AI approach and intelligence.

      Roger Penrose famously argued that deterministic computers, and any mechanism not formally more powerful than a type-G logical system, are formally less powerful than human intelligence. I don't find his argument persuasive, but it's a fairly well-developed one. It's not just "doh, intelligence is something other than that thing which I think AI is".

      The Reg Commentariat are flush with pride in their ability to dismiss AI and ML with a variety of hackneyed, tired, inaccurate characterizations and unsupported generalizations. Sorry, kids, but you get no points for that.

  11. devTrail

    What's worse than the biased algorithm

    I remember seeing a talk published online. The female researcher showed the result of a google image search for the word 'doctor'. She said that the google algorithm was biased and complained about it because all the pictures showed male doctors. Trouble is, she was utterly wrong: the problem wasn't that the doctors were male, the real issue was that the doctors were fake - google was just showing a lot of advertising pictures. The funny thing is that the audience applauded; nobody raised questions.

    The above is just one of the many examples that show that often the bias of those who judge an algorithm as biased is worse than the bias in the algorithm itself. Except for extreme cases like the American justice system, most of the time it's a lot of fuss over small things.

    1. Mr Humbug

      Re: What's worse than the biased algorithm

      I tried that search in DuckDuckGo and I discovered that most doctors wear a lab coat, have a stethoscope hung round their neck and stand with their arms folded.

      The main exceptions seem to be Matt Smith, David Tennant, Peter Davison, Peter Capaldi, ...

      Edited to add: obviously this is gender bias because you have to scroll down quite a lot to find Jodie Whittaker

      1. devTrail

        Re: What's worse than the biased algorithm

        Right. You pointed out that I might have a bias as well and this leads to another consideration. If you try to fix the bias in the data chances are that you end up imposing on the outcome the bias of one or few persons over the bias shared by millions of people. So we are back to the thread title.

        1. Mr Humbug

          Re: What's worse than the biased algorithm

          Actually I was agreeing with your point about the reliability of drawing conclusions from random internet search results :)

      2. bpfh Silver badge

        Re: What's worse than the biased algorithm

        Who needs a stethoscope when you have a tricorder^M^M^Msonic (screwdriver|sunglasses)?

    2. Anonymous Coward
      Anonymous Coward

      Re: What's worse than the biased algorithm

      You may be wrong and she may have a point. Type the Russian for doctor (врач) into yandex.ru and if you look at images you will see roughly a 50:50 mix of men and women.

      1. devTrail

        Re: What's worse than the biased algorithm

        The same reply I wrote above @Mr Humbug 40 minutes ago is valid for this comment.

    3. Prst. V.Jeltz Silver badge

      Re: What's worse than the biased algorithm

      "developed to assist judges in the US determine appropriate prison sentencing and the award of parole based on AI-churned data. "

      Yeah, right, that's definitely the first job we should give to "AI"

  12. hplasm Silver badge
    Facepalm

    AI

    Automated Idiocy.

    It's the future- today!

  13. Nick Kew Silver badge

    Eye of the beholder

    The 'bias' is simply the difference between today's prejudices and norms vs those of recent history. That is to say, those years whose data are used for training.

    To see such data as biased is to accept (consciously or otherwise) the values of a pressure group lobbying (rightly or wrongly, or most likely both) for social change.

  14. Franco Silver badge

    Won't stop the marketing droids trying to market everything as smart or intelligent or whichever buzzword they're using this week.

    In the meantime though, would anyone like any toast?

    1. Steve K Silver badge

      No...

      ...I'm a waffle man

      1. TomPhan

        Re: No...

        Perhaps you'd like a bagel?

        1. Spamfast Bronze badge

          Re: No...

          Strike a light! I'm a genius!

    2. Uncle Slacky Silver badge

      Still waiting for Artificial People Personalities(tm), aka "Your Plastic Pal Who's Fun To Be With".

  15. mr-slappy

    It's Just Pattern Recognition

    It's not AI - it can't be because we don't even understand what intelligence is in humans, never mind in machines.

    It's not Machine Learning, because we don't really understand what learning is in humans either, never mind in machines. (I'm speaking as a school governor who spends a lot of time with teachers, many of whom are excellent, a few not so much. It's really complicated. If you could distill the essence of a really good teacher someone would have done it by now.)

    It's just advanced pattern recognition, operating from very large but inevitably biased and flawed data sets.

    1. Swarthy Silver badge

      Re: It's Just Pattern Recognition

      My theory is that intelligence is pattern recognition. Well, pattern recognition and predictions, several layers deep.

      We see a pattern and make a prediction based off of it, we then review the predictions for patterns, and predict our predictions, and note the patterns, and then we alter our future predictions to give a better pattern.

      And then we see Jesus in a grilled cheese.

      1. doublelayer Silver badge

        Re: It's Just Pattern Recognition

        Effectively, this is true. We see patterns in observation, then make predictions about how each possible action will affect the situation before choosing a set of actions to take. So any functioning artificial sapient system should also need this. However, pattern recognition and statistical analysis are slightly different, and human pattern recognition and limited pattern recognition based on a subset of available data are also quite different. I don't have as many problems with the term machine learning, because the creation of a model does learn from its training set. If the set is faulty, it will learn the wrong thing and use that, just as you could teach someone that circles have straight edges, the bright thing in the sky is called a tree, and certain types of people have ingrained qualities that can be applied to any other person in that category, and they will act on those flawed notions.

        1. donk1

          Re: It's Just Pattern Recognition

          I was able to walk across the road blindfolded between 3am and 4am, hence I can walk across the road blindfolded any time... go ahead! The stock market has been going up all year, hence it will always go up... hhmmm!

      2. Doctor Syntax Silver badge

        Re: It's Just Pattern Recognition

        "And then we see Jesus in a grilled cheese."

        No, intelligence is reminding oneself that it's just grilled cheese, not even a religious painting.

      3. Toni the terrible
        Devil

        Re: It's Just Pattern Recognition

        Jesus is in a cheese sandwich. He is everywhere, even in your Flat White!

    2. veti Silver badge

      Re: It's Just Pattern Recognition

      You're saying that we need to understand exactly what intelligence is, before we can create it?

      Counterpoint: your mother.

      I have yet to see anyone define intelligence in any form that holds water past a couple of rounds of analysis, and so I'm not willing to dismiss AI as readily as some people who seem to believe that intelligence is some kind of magic that's inherently impossible to create.

      1. Charles 9 Silver badge

        Re: It's Just Pattern Recognition

        No, not inherently impossible, just something so vague and incomplete that we ourselves don't know yet what intelligence really means.

        In simpler terms, how can we teach what we don't understand ourselves?

        1. BrownishMonstr

          Re: It's Just Pattern Recognition

          Isn't that what teachers do, though?

        2. Michael Wojcik Silver badge

          Re: It's Just Pattern Recognition

          how can we teach what we don't understand ourselves?

          Name any one phenomenon we understand completely.

          Don't get me wrong. I think we are far, far away from machine intelligence that's even roughly as powerful, for some reasonable set of metrics, as human intelligence; and if we do produce such a machine intelligence, I don't expect it to look (i.e. have visible attributes similar to) much like human intelligence. But as usual the cliched handwaving objections raised about AI in this forum have little significant content.

          1. Charles 9 Silver badge

            Re: It's Just Pattern Recognition

            The concept of binary: on/off, white/black, 1/0. If we don't understand this, we don't understand anything, AND it's the basis for computer logic, too.

            Of course, that doesn't exclude the possibility of things that cannot easily fit into a binary world. The infinite shades of gray and all. That's part of the reason Trolley Problems keep getting brought up; they represent a dilemma that requires a (usually binary) answer that no one can satisfy.

  16. Joe W

    "where is the intelligence"

    Coffee through the nose hurts. A lot.

    Now I'll read the rest of the article...

    1. Nick Kew Silver badge
      Holmes

      Re: "where is the intelligence"

      Intelligence is knowing better than to combine coffee with Dabbs.

  17. Gordon861
    FAIL

    […takes a slug of Relentless…]

    Does anyone still drink this stuff since they changed the recipe a while back ... it's now syrup.

  18. Version 1.0 Silver badge
    Unhappy

    Is it an oxymoron?

    I think it's just a marketing term for a poor database... no different really from "search engine" - who cares about Truth and Reality when you can market crap and make billions?

    1. Rich 11 Silver badge

      Re: Is it an oxymoron?

      who cares about Truth and Reality when you can market crap and make billions?

      Are we back on the subject of Trump University?

      1. sprograms

        Re: Is it an oxymoron?

        Perhaps. I thought it was referring to the synthetic mortgage-backed securities business, or perhaps the investment advisory industry.

  19. GX5000

    "He is also the only man in the world who can articulate the word "recidivism" mid-sentence without a few practice runs or pausing for a swig of Monster Energy between syllables."

    Way to go lowering the bar under water.

    No wonder my younger subordinates are all lost unless I explain in emojis.

  20. Justthefacts

    Logical fallacy alert.....

    So, current algorithms aren’t able to give answers matching the best of human thought. But that’s neither a reasonable requirement, nor a necessary one. Just like automated driving, they only have to match the *average* human. And the truth is, average humans are way more biased than we admit.

    When we recruit people, you think we genuinely hire the best? Or just people who match our judgement of the skills required, with a background similar to people we have previously seen perform well on the job? You think juries and judges are unbiased? These algorithms are presenting us evidence that if we examine data honestly, human decision making is not great and we are embarrassed about it.

    Of course, we should try to outperform, and decision making that tends to revert to mean can’t get us there. But the truth may well be that algorithms are no worse than the guy who slopes off early, or always approves loans to people he was at school with because their business plans seem sensible to him, or only ends up hiring white people not because they are white but because in each case they talk a good talk about being a team player in the interview - ie follow his norms.

    1. doublelayer Silver badge

      Re: Logical fallacy alert.....

      I'm not sure that's good enough. For self-driving cars, they should reach a safety level of an average driver before they're used, which they have done and exceeded in tests. That's why they are acceptable, though of course they need to verify that they'll pass those tests under more difficult conditions. However, even if we do get a system to perform judgements at the level of an average person (difficult to quantify for topics like bigotry), it can degrade the situation. If we can quantify negative events like this, we can also identify parts where their frequency is excessive, increasing the average. We can also find methods of reducing the likelihood of those events when things are more important, for example moving a trial of a person likely to face discrimination to a location with less connection. With an automatic tool, the parameters can't easily be changed without outright manipulating the result, and a great deal of oversight is needed to ensure that no unforeseen biases are impacting those who the model affects.

      A uniform mediocrity is not always enough, and that's still assuming we can achieve that with these tools. I think the evidence shows that, sometimes, we fail even to reach that threshold.

      1. Justthefacts

        Re: Logical fallacy alert.....

        I partly agree. What you’re saying is we need some supervisory oversight of the outcomes; where the supervisor has a higher expertise, and can analyse the outcomes that fall below the average level (which will often have some special characteristic that the lower-level decision maker hasn’t accounted for) and tweak the decision criteria of the lower-level decision maker to move its Normal Distribution upwards. That, I agree with.

        I also agree that higher authority needs to be human, and we shouldn’t defer to “computer knows best”. Plus, with classical AI it’s really difficult to tweak the parameters in a semantically meaningful way. That is, in ML terms, we don’t want to overfit, such that we are only training it to be more lenient to people with the same surname. So, yes, the evidence *does* show that sometimes we fail to meet that threshold.

        Where we differ:

        a) I don’t see how this differs from the current situation where in many fields we see “failing institutions” that cause serious harm and then we have public inquiries to correct them. Care homes that abuse their patients. Investment banks with cultures encouraging traders to manipulate interest rates. Hospitals that build up inventories of surgical waste, failing to realise that one persons logistical hitch is someone else’s mother post mortem.

        b) *Of course* AI would be expected to replace junior-level decision making first. We shouldn’t up-end or flatten our hierarchies of decision-making or appeal just because we automate one layer. Today, senior bank staff can overrule junior ones. But we need fewer senior staff than junior ones. And that applies *even amongst judges*. Most cases are routine.

        c) I think the *real* problem in the long-term is hollowing out of expertise. How do you grow an upper layer of really good decision makers, if there is no lower layer for them to grow from? We will get a set of people who have never been “on the ground” to work through the morass of easier decisions. They will get increasingly blinkered and academic. And that’s related to your point about manipulating results. When there are 10000 court officials, there is a variety of viewpoints and expertise, and they remain culturally connected by debate. If the easiest 99% of decisions are delegated to software, we only need a top layer of 100 supervisors setting policy by parameter rule. That looks rather like an autocracy, and seems very vulnerable to single-point manipulation.

        One of the things that protects our democracy is that the lower layers don’t always follow the rules set down by their supervisors. Ironically, the very feature that enables individuals to enforce their own bigoted ideas in opposition to societal morals protects us from the diktats of dictators.

  21. Anonymous Coward
    Anonymous Coward

    Turkish pronouns

    Turkish does not have gender pronouns, no he or she or him or her exists in Turkish, only one word for all. Anecdotal proof that you can have a very sexist society without gender pronouns! All that he/she business cracks me up having known this for years...

  22. SVV Silver badge

    Data bias

    It's slightly incorrect to say that the data is biased in the example given here. The data is accurate and produces a correct answer with whatever statistical analysis you choose to run on it. It is better to say that the data reflects an underlying bias (prejudice in average sentences, in the case described).

    It is nice to keep emphasising that this sort of data processing has nothing to do with intelligence, other than the intelligence of people working out how to do the analysis. It is also far from new, the insurance industry for example being entirely based on the ability to use statistical analysis to calculate risk depending on a combination of facts - that industry needs to be as "biased" as possible in order to maximise profits.
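
    The insurance point fits in a few lines: a perfectly "neutral" statistical analysis of accurate claims data prices in whatever disparities the data reflects. The figures below are invented.

```python
# Sketch: a neutral analysis of accurate claims data still reproduces
# whatever disparities the data reflects. All figures are invented.
claims = {
    # group: (number of policies, total claim cost)
    "group_x": (1000, 50_000.0),
    "group_y": (1000, 20_000.0),
}

def pure_premium(group):
    n, cost = claims[group]
    return cost / n  # expected claim cost per policy

print(pure_premium("group_x"), pure_premium("group_y"))
```

    The arithmetic is unimpeachable, and the resulting prices differ by group all the same - "bias" as a business model.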

  23. naive

    Bias, facts and statistics

    Like it or not, all AI will be based on statistics and things people know.

    Suppose the situation where one has to share a hotel room with either a rabbit or a lion.

    Any sensible AI system aiding in deciding which room would offer the best experience would probably recommend a room with a rabbit as roommate.

    Since the AI system would base its decision on information like "lions are large carnivorous predators with large teeth", the fluffy bunny is probably preferable to the lion.

    It is interesting to see whether this is considered to be biased information.

    1. John G Imrie Silver badge

      Re: Bias, facts and statistics

      Feed it Monty Python's Holy Grail, then see what it thinks about rabbits

      -- Tim (the Enchanter)

    2. Doctor Syntax Silver badge

      Re: Bias, facts and statistics

      Since the AI system would base its decision on information like "lions are large carnivorous predators with large teeth", the fluffy bunny is probably preferable to the lion.

      An AI basing its decision on no more than this would just as likely prefer lion or refuse to give an answer at all. In order to give a sensible answer it needs to "know" the implications of large carnivorous predators for human beings and, indeed, that the sharer is a human being.

      1. Doctor Syntax Silver badge

        Re: Bias, facts and statistics

        I should also have said that the AI needs to be instructed that it's making the decision on behalf of the human, not the lion or rabbit. It's easy to take so much for granted.

        1. kirk_augustin@yahoo.com

          Re: Bias, facts and statistics

          Exactly. The computer will not know what a hotel room is, and for all a computer could come up with, it could be a circus act that requires lions. The problem being that computers never really know anything at all.

    3. kirk_augustin@yahoo.com

      Re: Bias, facts and statistics

      There is no way a computer can know information about lions and bunnies to be able to make decisions like that. It takes a human programmer to try to simulate reasonable choices based on data like size and danger, but that is far too unreliable to ever put to the test. So instead those sorts of choices should be left to humans, who have a built-in value system and world knowledge.

  24. Anonymous Coward
    Anonymous Coward

    So - Garbage in, Garbage out? Just on bigger data sets?

  25. holmegm

    There's a pretty large assumption being made here that the data is unfairly biased, as opposed to simply reflecting an unpalatable reality.

    It's probably necessary to show your work.

  26. Ken Hagan Gold badge

    "If the AI were intelligent, it would work this out for itself. It's not so it doesn't."

    I'd dispute that. I'm always being told that *we* are intelligent, but the hard evidence is that millions of people have spent several thousand years on the problem and are only very slowly figuring it out.

    That's probably why we *still* don't have a definition of "intelligence" that isn't circular (with an embarrassingly small radius).

  27. hellwig Silver badge

    And Facebook wondered...

    Why it couldn't create a software algorithm to provide relevant impartial news and social media posts? The fallacy of the modern nerd is thinking they're smarter than everyone else despite evidence to the contrary.

    "Just because Einstein couldn't rationalize his theories on relativity without the cosmological constant, doesn't mean my Hemp-based dating app can't solve the mysteries of the universe!".

  28. Rich 10

    chaos theory

    and all those people in this thread looking up data in Google to provide examples to justify their answers here are introducing new biases in Google's predictive "AI" algorithm - at the end of the day the world is now different, just because of this one little query storm. When a (technical) butterfly flaps its wings in El Reg.....

  29. Doctor Syntax Silver badge

    Numbers and the like have an unfortunate effect on people. People tend to believe them. It leads to quoting results to infeasible levels of precision. It leads to measuring and acting on stuff which is easy to measure and ignoring the stuff which is more difficult to measure even if it's more meaningful (a simple example is the setting of arbitrary speed limits and installing equipment to enforce them whilst ignoring tailgating).

  30. Anonymous Coward
    Anonymous Coward

    Given biased training data, do we not want the AI to be equally biased?

    I.e., with the doctor / nurse example: if the majority of training examples refer to doctors in the masculine and nurses in the feminine, and assuming the training set is reasonably representative of real input, the likelihood is that this is precisely the translation the majority of users want / expect. The fact that the training data is biased simply implies the end user is likely equally biased. If the AI were to deliberately choose to remove this bias, it would be getting worryingly close to the machines attempting to impose their will on us... and I can see nothing but madness down that road.
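
    The mechanism here is simple maximum likelihood: pick the translation most often seen in training. With hypothetical corpus counts, the minority form never surfaces unless forced:

```python
# Sketch: a translator trained on skewed text picks the skewed form,
# because "most frequent" wins. The counts below are invented.
from collections import Counter

# Hypothetical counts of observed translations in a parallel corpus.
observed = {
    "doctor": Counter({"le médecin": 900, "la médecin": 100}),
    "nurse": Counter({"l'infirmière": 850, "l'infirmier": 150}),
}

def translate(word):
    # Maximum-likelihood choice: the majority form wins every time.
    return observed[word].most_common(1)[0][0]

print(translate("doctor"), translate("nurse"))
```

    The 10-15% minority readings exist in the data but are never emitted - the model faithfully reflects its users' skew.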

    1. Charles 9 Silver badge

      Then how do you handle the edge cases (female doctors and male nurses) without a fuss about discrimination being thrown?

      1. holmegm

        "Then how do you handle the edge cases (female doctors and male nurses) without a fuss about discrimination being thrown?"

        We had a few female doctors and male nurses back when people assumed "he" and "she" for the generic cases. Somehow everyone survived this just fine.

  31. Garahag

    If machines are not intelligent, just algorithmic... are humans not algorithmic, or not intelligent?

    1. Charles 9 Silver badge

      The better question to ask is, "What is intelligence?" Because we don't even have a concise answer to that question yet.

    2. kirk_augustin@yahoo.com

      It does not at all matter if humans are also computers and algorithmic. The point is we have an inherent, built-in and functioning value system, emotions, unambiguous data storage and retrieval system, instincts, pain/pleasure motivations, etc., that we likely will never understand or be able to program into a computer.

      We function in complex ways relevant to our inherent system of values, instincts, etc.

      Since computers can never share this exact set of instincts and values, they will never be relevant to us in terms of those human instincts and values.

      1. Charles 9 Silver badge

        I wouldn't say our value system is inherent because it's different from person to person. More that it's acquired but subconscious, thus why we don't understand it ourselves. As for our data storage, I wouldn't call it unambiguous given how easily we MIS-recall things (thus my constant password protest, "Was it correcthorsebatterystaple or donkeyenginepaperclipwrong?")

  32. bpfh Silver badge

    As for the nurse example...

    Well, this is also the great thing with English having mostly neutral nouns, and when there are male or female ones it is often implicit (a ship is a “she” by convention in English, but a “he” in French and Russian, for example), so the translation has to do some guesswork to get from a neutral noun to the gendered one you want.

    So, “I talked to the nurse today” becomes “j'ai parlé à l'infirmière aujourd'hui”, but if you specify “I talked to the male nurse today”, it does change to the male sentence “j'ai parlé à l'infirmier aujourd'hui”, so you can overrule it if you need to by being explicit.

  33. Anonymous Coward

    The reason for calling it AI is simply to accelerate the current trend of justifying doing nasty things to other people with "the computer said".

  34. JeffyPoooh Silver badge

    If these conclusions are...

    If these conclusions are shocking to you, then you're an AI fanboi.

    Although being a fanboi gives a warm and pleasant syrupy feeling inside the skull, it is not actually a good thing as it's the exact opposite of actually keeping your brain switched on. Many parallels with cults.

    (The AI-propelled spell checker in my device keeps insisting that the word fanboi should be spelled 'cannot'. Artificial Imbecile.)

  35. I.Geller Bronze badge

    From the discoverer: Machine Learning and Intelligence

    Machine Learning is the addition of structured texts, where each pattern is a direct analogue of the command of the programming language. Structured text sets the context and multiple subtexts for these patterns.

    As for intelligence: it has the ability to find, use, and modify sets of tuples, where in mathematics a tuple is a finite ordered list (sequence) of elements. I.e., speaking about intelligence, we speak about sets of phrases, each of which is explained by a set of other phrases.

  36. StuntMisanthrope Bronze badge

    There isn’t any.

    That’s the point. Two types of path. Squid eye or pressure chemical with about a petabyte in use. Ranking FAQ cache, the 64 million dollar question. See me, for a bollocking, I thought it was live chat. #programadatelinecondition

  37. John Geek

    I've always called AI "Artificial Ignorance" and I've seen nothing to date to persuade me otherwise.

    1. I.Geller Bronze badge

      IBM Watson? Waymo? Google and Yandex translation?

      AI structures texts in patterns, where all patterns of the texts specify the context and subtext.

      1. H in The Hague

        "Google and Yandex translation?"

        Google Translate (GT) is actually a good example of where pattern matching fails. Its apparent translation is created by matching patterns in the source language with those in previous translations into the target language - a purely statistical process, devoid of any understanding, any intelligence.

        That often leads to translations which read surprisingly well, but may be incorrect. Example: a client provided me with a document in Dutch and mentioned they'd had that translated from French. It seemed to be perfectly good Dutch, but one sentence would have made much more sense had it included the word "not". That made me suspect they'd used GT or similar. So I requested the original French doc and even with my extremely rusty secondary school French I could see that the original document did indeed include 'not'.

        Tested it: GT French -> Dutch left 'not' out while GT French -> English correctly included 'not'.

        Another problem with GT is that it feeds off source texts and translations it finds on the Web - but increasingly those are its own work, hence you get the snake biting its own tail, reinforcing its mistakes. GT can be a useful tool but one to be used with caution by those who are aware of its workings and shortcomings.
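        The snake-biting-its-own-tail effect can be sketched in a few lines. This is a hypothetical toy, not GT's actual pipeline: assume the model always publishes its single most probable rendering (argmax decoding), and that published output then re-enters the training data as if it were human translation. The phrase labels and probabilities are invented.

        ```python
        # Toy self-feeding loop: retraining on your own argmax output erases
        # variation and cements whichever reading the model already prefers.
        def retrain_on_own_output(dist):
            best = max(dist, key=dist.get)  # the model publishes only this variant
            return {k: (1.0 if k == best else 0.0) for k in dist}

        # Invented starting point: 80% of web translations render one reading,
        # 20% the other.
        dist = {"keeps 'not'": 0.8, "drops 'not'": 0.2}
        dist = retrain_on_own_output(dist)
        print(dist)  # {"keeps 'not'": 1.0, "drops 'not'": 0.0}
        ```

        After a single generation the minority reading vanishes from the "training data" entirely; whatever the model got wrong, it now reinforces.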

        1. I.Geller Bronze badge

          Agree. The problem is that they don't have personal profiles, which would contain the texts used by the person and their structured representations. That is, all available texts are used instead of only those used by a particular person. But still there is some result.

  38. Herby Silver badge

    Isn't!

    Well that is what "Artificial Intelligence" is these days.

    Pretty simple if you ask me.

  39. Muscleguy Silver badge

    Recidivism Fail

    I expect there are a legion of criminologists out there who spout 'recidivism' all day and every day without pause. What a stupid comment from someone who sneers at how English-as-a-second (third? fourth?) language speakers pronounce 'algorithm'. Which, like alcohol, is originally an Arabic word, and the al is a separable prefix, so al-gor-ithm is perfectly correct for many.

    That's just the ignorances so far, let's see how many more I can find . . .

    Sometime British scribes might learn that it is not necessary to sneer at others when writing articles.

    1. Toni the terrible

      Re: Recidivism Fail

      It's not sneering, it's their cognitive bias

  40. Captain Kephart

    AI is an approach, not an outcome

    The scary thing is that many politicians and opinion-formers really think that current machines are 'intelligent' enough for humanity to let them make decisions for us ... and the machines have no notion of that.

    The best definition of intelligence I ever heard was from Prof Igor Aleksander (who had a face-recognition and speaking neural network at Brunel University in the UK in 1983). He said there is no such thing as 'artificial intelligence' - just intelligence. He felt that the problem with AI had / has been that its practitioners thought that it was something you programmed – of the style of:

    FOR 1 to n; BE INTELLIGENT; LOOP

    and that this was always nonsense.

    The Six Laws of Intelligence

    Instead, Igor said (I am paraphrasing his deep discourse), you have intelligence when you:

    1) are self-aware, and aware that you are self-aware;

    2) are able to sense the world and other beings and perceive that they are self-aware;

    3) can appreciate that they have different motivations and views of the world to yourself;

    4) can conceive of what their view(s) of the world may be;

    5) can reason from those points of view and synthesise them with your own ...

    6) and lastly be able to act, interact, and effect change in the world in line with those things - anticipating, adapting and changing over time - and so changing the nature of your intelligence in line with the real-world context.

    The various kinds of simulations and emulations of ‘intelligent’ behaviour succeed as far as they do because of the human ability to anthropomorphise and attribute intelligence where it does not exist (think of Tamagotchi as a more extreme example). We even do it with objects in our homes (such as cuddly toys).

    There is only Intelligence - AI is an approach not an outcome. This is because intelligence is really a social phenomenon (not an individual property) arising out of meaningful and reciprocal relationships over time – even the famous ‘Turing Test’ is set in a social context - and computers have no idea about that, and are nowhere near achieving it.

    Don't get me started on Alexa, Siri etc ... "Alexa, review and edit this post for me." ...

    1. I.Geller Bronze badge

      Re: AI is an approach, not an outcome

      What you see today and call AI is only its basis, which was originally intended for commercial use: how to find information. That is, it was based on the NIST TREC definition: AI must answer both Factoid and Definition questions.

      Google was built on an earlier version of AI, which used traditional n-gram parsing. The difference between this AI and the one Google uses lies in the new AI-parsing.

      The new AI-parsing allows AI to structure texts into patterns, each of which is a direct analog of a programming language command, and which are contextually-subtext targeted. Then Machine Learning is the addition of new texts, if the old ones do not have the necessary patterns.

      If you want AI to be "able to act, interact, and effect change in the world in line with those things" - AI can.

      Now on intelligence. It is built on tuples, where in mathematics a tuple is a finite ordered list of elements. In other words, each pattern must be annotated/explained because otherwise it cannot be found/is not unique. Therefore our brain is a biological computer which keeps sets of tuples, and AI does the same. Yes, there is no such thing as 'artificial intelligence' - just intelligence.

    2. Toni the terrible

      Re: AI is an approach, not an outcome

      Therefore small children are not intelligent

      1. I.Geller Bronze badge

        Re: AI is an approach, not an outcome

        Intelligence is a process. If you use the External Relation theory's postulates and consider the child as a constant - no, he is not intelligent. For the Internal Relation theory, he grows older and becomes intelligent.

        1. I.Geller Bronze badge

          Re: AI is an approach, not an outcome

          All the chatterers who commented on this article are trying to apply arithmetic to differential functions. That is, intelligence is about becoming - and any attempt to stop and define it (in terms of constants) is doomed to disaster.

          AI was created as a differential function, something that becomes; it cannot be described in finite terms:

          - AI is sets of tuples, where each tuple is a number of phrases;

          - while SQL, to take the most vivid example, operates with words and never with tuples as sets of phrases.

          In other words, AI is a continuously changing set of tuples, while all existing theories, without exception, operate with separate, constant words.

    3. Michael Wojcik Silver badge

      Re: AI is an approach, not an outcome

      even the famous ‘Turing Test’ is set in a social context - and computers have no idea about that, and are nowhere near achieving it

      There are chatbots which have beaten human judges in Imitation Game (aka "Turing Test") challenges. Those challenges are inevitably limited - they have time limits, at least - and given, say, several months to interact with members of an Imitation Game panel the judges would probably eventually distinguish the participants correctly, at least with decent probability. But under the terms in which those contests were conducted, the 'bots won.

      People who actually work in AI / ML are not particularly interested in those results, because they're not particularly interesting. Turing didn't intend for people to hold real Imitation Game events. It's a philosophical thought experiment.

      Basically, it's an argument for the sort of view of intelligence that might emerge from the American pragmatist school of epistemology: we know an entity X is a member of class Y because it exhibits the visible attributes of members of that class. We treat things as black boxes and concern ourselves with how they interact with the world.

      It's interesting to contrast Turing's position with John Searle's in his Chinese Room argument, which is essentially a logical-positivist and phenomenological one. Searle says, in effect, "I'm not sure exactly what I mean by 'thinking', but this description of what one approach to AI is doing isn't it". (Logical positivism asks "what do we mean by the term 'X'?", and phenomenology asks "what are we doing in our minds when we do Y?".) So Searle does want us to consider what's happening in the box, and whether we think it might be similar to what seems to happen in our minds.

      It's mildly ironic that the Englishman Turing leaned toward an American philosophical school, while the American Searle toward one most closely associated with the UK. But then we hope our better thinkers will reach outside of whatever's popular in their own playgrounds.

      Robert French, among others, has pointed out (in a piece in CACM some years ago) why the Imitation Game isn't a useful practical test of machine intelligence. Appealing to it at this point in the game doesn't really help, except as a touchstone to illuminate your philosophical position.

      Aleksander's definition of intelligence which you summarized above won't satisfy everyone, but it's one that can be argued for. I don't have any great objection to it myself, though I'm not ready to endorse it either. It's interesting to note that it combines logical-positivist and phenomenological criteria (items 1-5, for the most part) with a pragmatic one (6).

    4. I.Geller Bronze badge

      Re: AI is an approach, not an outcome

      There must be a clear separation between thinking and its result.

      Thinking is a process, a function; and its result is the limit of the function - the moment when the function loses its character and becomes something completely different.

      AI, for example, understands the paragraphs in the sense that they contain complete thoughts, that is, they are integrals in their relation to thinking; where dictionary definitions for the paragraphs' words form a constant when integrated.

      1. Charles 9 Silver badge

        Re: AI is an approach, not an outcome

        "AI, for example, understands the paragraphs in the sense that they contain complete thoughts, that is, they are integrals in their relation to thinking; where dictionary definitions for the paragraphs' words form constant when integrated."

        And therein the human condition can throw it off. What if it's a poorly-written paragraph that never comes to a concise point?

        1. I.Geller Bronze badge

          Re: AI is an approach, not an outcome

          The quality is very important; a paragraph can be equated to a dictionary definition for all its words.

          AI technology is first of all a search-for-information technology; it appeared as the answer to the NIST TREC QA challenge. For example, Google searches using the earliest version of AI, which had no AI-parsing technology and couldn't convert paragraphs into dictionary definitions. Thus Google cannot search by sense (that is, structure the paragraphs), and relies completely on popularity.

          The quality is essential! The better the paragraph, the better it defines all its words' meanings/creates the right tuples (where a tuple is a finite ordered list of the paragraph's phrases).

  41. Ted's Toy

    I would like to have real intelligence

    All this talk of AI; when I went to school in the dim and distant past, we were taught that the real thing was what one needed, not an artificial something.

    1. amanfromMars 1 Silver badge

      Re: I would like to have real intelligence

      As a worthy substitute and wonderful surprise, is greater imagination a blessing to be constantly thoroughly worshipped in Effective Tempestuous Admiration of the Mighty Steal.... Absolutely Fabulous Imagination Leading what's Real and Realisable in Proprietary Intellectual Property Hosting Communities.

      Everything Good with Nothing Bad would Immediately Server a Perfect New Beginning in a Wholly Novel Different Direction.

      Something easily hosted and programmed for Present Media to Squawk is Everywhere is their Trafalgar Turing Test.

      All this talk of AI; when I went to school in the dim and distant past, we were taught that the real thing was what one needed, not an artificial something. .... Ted's Toy

      Absolutely Fabulous Imaginations are Raw Core Virgin Source Ore for Refining and DeepMetaData BaseMining. Beware the dragons and goons and say Hi to All Friendly Looney Toons is good sound advice to follow to steer well clear of so some really soggy boggy territory with a persistent grip/unholy attraction/devilish interest/insatiable reward. They be Heavenly Dues, Surely.

      What/Who Leads your Current, Intelligence Communities? Are there UNSeen Hands, Stout Hearts and Greater Minds at Play for Y'All from Today and Henceforth?

      :-) And does that Generate Permanence for a Persistent Presence? And Virtual AIRealisation of All of the Aforementioned Facts for Future Fictions delivering Naked Virgin Territories for Popular Colonisations ....... Exciting Future Builds. .... you know, that Deep See and Proud Vision Stuff of Kings and Queens where Perfect Adams Adore Serving and Servering Immaculate Eves Desires and Wishes to Mutually Satisfying EMPowering Satisfaction.

      Have the Key to that Store and Heaven knows it can wait whilst all Hell breaks loose as Future Derivatives and Virtual Options are considered a Must Have Vital Commodity.

  42. kirk_augustin@yahoo.com

    Artificial, as in Fake Intelligence

    What people have been told to expect from Artificial Intelligence is that a computer will become self aware, and become artificially like a human being.

    But that is not the case, nor ever can be. That is because a computer has no instincts, emotions, autonomic nervous system, or anything remotely alive or possible to become sentient, ever. All artificial intelligence really is, is whatever a programmer decides to put into his fake simulation. And that can never be real sentience, because no programmer will likely ever know how to do that.

    Let me give you an example. If you say the word "dog" to an English-speaking human, they will receive all sorts of associated memories, data, images, etc., that will include the dogs they have seen, read about, etc. But if the word "canine" is used instead, they likely get the same associated responses. In fact, you can use another language even, and it won't matter. That means this is not at all like a database program based on text keys. Guess what? That means humans are BORN with a built in semantic. We all internally have some unambiguous internal representation for dogs, that is identical in all humans.

    So if you understood that, then you would understand that before we could actually ever duplicate what humans do naturally, we would have to somehow figure out how humans do it. And that likely is never going to happen. So forget about artificial intelligence. It is not likely ever going to happen.

    1. Charles 9 Silver badge

      Re: Artificial, as in Fake Intelligence

      "Guess what? That means humans are BORN with a built in semantic. We all internally have some unambiguous internal representation for dogs, that is identical in all humans."

      I disagree. We only perceive this because we normally associate with people who are much like us: seen much of what we've seen, including dogs. But what if you head out to the sticks, to peoples who have such limited experiences that they may not recognize such simple things as a pet dog or a housecat...or even a ball? About the only thing we recognize solely on instinct (tested on newborns, who have the least life experience possible) is another human (and that's likely a survival trait).

    2. Michael Wojcik Silver badge

      Re: Artificial, as in Fake Intelligence

      That is because a computer has no instincts, emotions, autonomic nervous system, or anything remotely alive or possible to become sentient, ever

      OK, now explain how human beings have instincts, emotions, and sentience.1 Are our minds not effects of mechanical processes? What prevents us from creating an artificial device which produces similar effects?

      (And I have no idea why you lumped "autonomic nervous system" in there.)

      1Perhaps you mean sapience? It's easy to argue that some machines already have sentience.

  43. kirk_augustin@yahoo.com

    No AI, But People Expect Autonomous Cars?

    All the posts here seem to pretty much agree that there is no such thing as Artificial Intelligence, and that it will be too difficult to ever come close to what humans do so easily. But then what is so strange is that people have such a desire for autonomous vehicles that they think it is actually possible or even happening right now. I assure you that there is no such thing as autonomous vehicles. They are all fakes running on GPS, and cannot recognize or read street name signs, know where lanes are, or recognize turn signals or brake lights. So then why do people have this unrealistic disconnect? Perhaps we are lazy or incredibly gullible?

    1. Easy E

      Re: No AI, But People Expect Autonomous Cars?

      The other aspect people fail to consider is that these cars currently operate only in good weather. They aren't subject to (un)intentional sensor attacks. The systems are typically tied into the infotainment system (seems like that would be a clearly obvious no-no for a plethora of security reasons). There isn't a public prediction tree for which decision a vehicle will make if it comes across a bad situation, if the choices amount to running into an oncoming car, ditch or tree when it detects an 'accident' situation. What bad choice will the vehicle make and why? Will a semi hit a bus head-on as opposed to striking a bridge column? Will it strike a pedestrian or hit a concrete power column? Will it drive through a wildfire because it doesn't know any better? Will it stop and turn off if exhaust gets into the car's cabin? How will it know to drive at a slower rate of speed if there's a risk of black ice? What is an acceptable rate of failure, because auto manufacturers aren't going to be held to 99.999% as it relates to faults?

    2. Michael Wojcik Silver badge

      Re: No AI, But People Expect Autonomous Cars?

      All the posts here seem to pretty much agree that there is no such thing as Artificial Intelligence

      "Several people who seem to agree with me seem to agree with me. Must be true!"

  44. Anonymous Coward

    Real world manufacturing and A.I.

    Working in a manufacturing sector with metal machining as the primary function at the facility I am currently located, it becomes annoying hearing stock pundits state (with absolute certainty) that these manufacturing jobs will be replaced in a matter of years with A.I. As if... we have problems with simple automation due to tight tolerances. Human monitoring and interaction are required even if the machines are running properly because the machines can't predict how different parts will match up because there are too many variables. Only a few of us can consistently recognize how each of the machines will behave and it's difficult enough to get a skilled person to understand more than the production line they're assigned to.

  45. Roy Lofquist

    Douglas Adams

    Douglas Adams had thoughts about AI. This, from "Dirk Gently's Holistic Detective Agency", gets at the difference between mere deduction and mysterious intuition:

    “Sir Isaac Newton, renowned inventor of the milled-edge coin and the catflap!"

    "The what?" said Richard.

    "The catflap! A device of the utmost cunning, perspicuity and invention. It is a door within a door, you see, a ..."

    "Yes," said Richard, "there was also the small matter of gravity."

    "Gravity," said Dirk with a slightly dismissive shrug, "yes, there was that as well, I suppose. Though that, of course, was merely a discovery. It was there to be discovered." ... "You see?" he said dropping his cigarette butt, "They even keep it on at weekends. Someone was bound to notice sooner or later. But the catflap ... ah, there is a very different matter. Invention, pure creative invention. It is a door within a door, you see.”

  46. loco_wunee

    Define intelligence? Ok...

    Here's how I will know that a robot is as smart as me: My ideal robot would, without flinching, without prompting, immediately scratch any itch on my body before my hand can reach. It will scratch vigorously, sufficiently, and comfortably any region of my body, on or under the skin, and with such discreetness and moral sensitivity that I can take this robot to any social gathering, in any state of dress, and it will never slow me down.

    Show me the scratch.

    Then we'll talk about intelligence.

    1. amanfromMars 1 Silver badge

      Re: Define intelligence? Ok... how about Knock Out Future Opportunities

      ..... in Virtual Space Control Space

      How about, loco_wunee, Virtual AIMachinery whose Ardent Attention to Satisfying Desires with Insatiable Needs and Prime Feeds is Practically Almighty.

      And Ideal for Pleasure Robots Servering to Venus her Bounty and Immaculate Captures/Free Will Slave Followings to AIMetaPhysical Unions ..... Cleared Cyber Spaces.

      In those places, loco_wunee, you do what you want to unfold what you need to feed and seed and surely concede is a Secure AI Future Supply, and for Secessions*, a Perfect IntelAIgent Partner to be in Bed and Embedded with.

      * Traditionary Hierarchical Structure Implosions

      And that's only scratching at what's further there, l_w. Such would normally constitute an Earthly Pow Wow .... some United Nations ProAction ...... advising of Ongoing Strange Virtual ACTivities in which they Require Immediate Immense Assistance.

      The reply as to whether they do or they don't is irrelevant, for both are adequately answered with a perfectly honourable replies to the Posit.

      Now, tell me that is not a disturbance in the force, Mr Beale?:-) ..... https://youtu.be/yuBe93FMiJc

      1. Toni the terrible

        Re: Define intelligence? Ok... how about Knock Out Future Opportunities

        You are surely a Man from Mars

  47. CreActive

    Broken link

    Moderators: the COMPAS link points to the wrong article

  48. I.Geller Bronze badge

    Again

    Intelligence is based on sets of tuples; where in mathematics a tuple is a finite ordered list of elements (patterns, phrases). The longer the tuple, the more unique it is: it can be more easily distinguished and found. Thus, the brain has the tuples and the connections between them, and intelligence is the ability to find the right tuples and apply them. If there are none, a search for and addition of new tuples is performed.

    This is our intelligence and AI basics.

    Stop fantasizing and begin to read what is already published! You all look ridiculous!

  49. John Brown (no body) Silver badge

    I'll believe...

    I'll believe we are at the beginnings of AI when mapping software is capable of telling me that the nearest $Store location is the one within 3 miles by road of my location instead of the one half a mile away on the other side of the river that requires a detour 3 miles down river to the tunnel then 3 miles back up river, or the alternate route 5 mile up river to the bridge then 5 miles back down river.

  50. Glenturret Single Malt

    Recidivism my algorithm

    The "cidivism" part of recidivism is pronounced with the same emphases as algorithm (and its anagram logarithm), so if you can pronounce one you can pronounce the other. (OK, I know it was a joke, but just saying.)

  51. itzman

    The fundamental problem is that...

    ...Bias and prejudice are efficient.

    See a snake? Kill it or run like heck. Who CARES that 80% of snakes are perfectly safe. Killing a safe one doesn't harm you. Cuddling a rattler does. Regarding all snakes - or indeed mushrooms - as poisonous saves you having to carry around a catalogue of the very few that are not.
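    The snake rule is just an asymmetric-cost decision. With made-up numbers, the "biased" blanket rule minimizes expected cost whenever a false negative (cuddling a rattler) is vastly more expensive than a false positive (fleeing a harmless one):

    ```python
    # All figures invented for illustration.
    P_DANGEROUS = 0.2     # suppose only 20% of snakes are actually venomous
    COST_FLEE = 1         # mild: wasted effort, one dead harmless snake
    COST_BITE = 1000      # severe: a rattler bite

    cost_always_flee = COST_FLEE                  # flee every time, no exceptions
    cost_always_cuddle = P_DANGEROUS * COST_BITE  # gamble every time

    print(cost_always_flee, cost_always_cuddle)   # 1 vs 200.0
    ```

    The blanket rule wins by two orders of magnitude here, which is the whole point: the heuristic is wrong 80% of the time and still the cheaper policy.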

    Most [issues] are caused by [a few identifiable members of set x] is most easily encapsulated as

    All members of [set x] cause [issues]

    What we are seeing in this is the prime example of 'it's not fair' versus 'it doesn't work'

    E.g. if you want to halt the spread of Islamic fundamentalism and radicalisation, ban the religion, beards, burkas, niqabs, hijabs, imprison anyone who preaches it, and shut down any mosque or website that carries any such materials, etc.

    Unfair, but effective.

    The human mind seeks to use pattern recognition to arrange the world into 'objects' that have 'generic properties'. So it can apply generically effective general rules without having to examine the particular.

    There are those who are stupid enough to feel ashamed of their propensity to do this and project the negative aspects of this onto others.

    Don't be one of them.

    Wisdom comes from accepting and then making allowances for the fact that we are all prejudiced and biased, and if we were not we would have eaten the poisonous mushroom years ago.

    Those who claim it is others who are bigoted, are usually the worst bigots themselves.

    One thinks instantly of the jackboot mentality of self styled 'anti-fascist' organisations.

    1. Anonymous Coward

      Re: The fundamental problem is that...

      Until you realize those "efficiencies" come at a price you may not see at first.

      Kill that "safe" snake? Now you're gonna have a rodent problem because you knocked off one of their more-efficient predators.

      Prejudice against a whole bunch because of a few bad apples? You've just tarred everyone with the same brush. Be prepared to deal with more of the same. If people say they're evil, they may as well become evil.

  52. DerekCurrie Bronze badge

    It's advanced Expert System algorithms

    Thank you Alistair for hitting the important points about what is euphorically, unrealistically called 'Artificial Intelligence'. "AI" is just another meme being used by marketing divisions and self-promoting researchers as bait to sell what are actually advanced Expert System algorithms.

    https://en.wikipedia.org/wiki/Expert_system

    Expert system design was introduced in 1965 and has been slowly advancing ever since. What has triggered the 'AI' moniker are actually advances in speech recognition (speech, to text, to bits) and the reverse (bits, to text, to speech). Much of the speech recognition work was accomplished by Dragon Systems, starting in 1975. Dragon is currently owned by Nuance. As far as I am aware, all of the current popular 'AI' systems involving speech recognition license base technology from Nuance. Apple and IBM, among others, developed their own speech recognition systems in the 1990s, but they fell by the wayside. It is said that Google and Microsoft have been developing their own speech recognition systems. However, I strongly suspect they both use Dragon technology.

    https://en.wikipedia.org/wiki/Dragon_NaturallySpeaking

    The general concept of Expert Systems is to take a provided input and efficiently traverse an available database for the best matching output. What has become interesting in recent years has been what's called 'machine learning' whereby the algorithms are able to collect new data and add it to their source database over time in pursuit of providing better and more up-to-date output as answers to input queries.
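    The loop described above can be sketched in a few lines. This is a minimal toy, assuming a simple keyword-overlap match; the rules, symptoms, and function names are all invented for illustration:

    ```python
    # Toy expert system: rules map a set of conditions to a conclusion.
    knowledge_base = {
        frozenset({"fever", "cough"}): "possible flu",
        frozenset({"sneezing", "itchy eyes"}): "possible allergy",
    }

    def best_match(symptoms, kb):
        """Traverse the database for the rule whose conditions overlap most."""
        score = lambda conds: len(conds & symptoms)
        conds = max(kb, key=score)
        return kb[conds] if score(conds) > 0 else "no match"

    def learn(observed_symptoms, outcome, kb):
        """'Machine learning' in the sense above: new data added over time."""
        kb[frozenset(observed_symptoms)] = outcome

    print(best_match({"fever", "cough", "aches"}, knowledge_base))  # possible flu
    learn({"rash", "fever"}, "possible measles", knowledge_base)
    print(best_match({"rash"}, knowledge_base))  # possible measles
    ```

    Note that the "learning" step is nothing more than appending a row to the lookup structure, which is precisely the commenter's point: the system gets more data, not more understanding.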

    In other words, there is no 'intelligence' at all apart from that imbued by the euphoric, the meme-entranced, marketing executives and researchers in need of academic recognition. We may in fact never be able to create anything that is actually 'intelligent' in the science fiction sense. The first goal of artificial intelligence is to be able to interact with it and not be able to tell if it is human or machine. This is called the Turing test, hence the connection of AI to Alan Turing. And no, so far there has been no so-called 'artificial intelligence' system capable of passing the Turing test. There have been rumors of such a system for decades, but none of them have qualified. No doubt, with time the term 'intelligence' will be bent and twisted to fit the latest attempt at AI. However, the technology we currently have better qualifies as Artificial Idiocy, with the goal of making it as mildly idiotic as possible.

    https://en.wikipedia.org/wiki/Turing_test

    ETHICS: As ever, technology is a tool for the betterment of mankind. Any technology used instead as a weapon of murder and destruction is no longer a tool but an abomination, a victim of self-destructive, territorial and tribal instincts still persistent in species Homo sapiens sapiens ('wise wise', as we wish we were). I call it Coward Murder Machinery, as currently exemplified by killer military drones controlled by humans from a distance with no danger to themselves. Another step in coward murder machinery will be autonomous drones and other robots whereby their creators and controllers will attempt to dodge responsibility for their outcome. Don't be fooled. Mankind will always be responsible for whatever technology we create, including all its consequences. We will become actual 'sapiens sapiens' when we achieve the wisdom to stop murdering one another for any reason.

    Conclusion: Ethics begin and end with the creators, programmers and users of technology, not the technology itself. IOW: Don't blame Hal 9000. Blame its inventors and programmers. Hal 9000 or any other 'AI' is just a tool.

    https://en.wikipedia.org/wiki/HAL_9000

