AI is all trendy and fun – but it's still a long way from true intelligence, Facebook boffins admit

Researchers at Facebook have attempted to build a machine capable of reasoning from text – but their latest paper shows true machine intelligence still has a long way to go. The idea that one day AI will dominate Earth and bring humans to their knees as it becomes super-intelligent is a genuine concern right now. Not only is …

  1. Anonymous Coward
    Anonymous Coward

    Facebook boffins

    Oxymoron alert.

    1. Geoff Campbell
      Facepalm

      Re: Facebook boffins

      Really? You've met them, then?

      GJC

    2. Anonymous Coward
      Anonymous Coward

      Re: Facebook boffins

      Is the word "boffins" as in "Facebook boffins" Latin for "bullshit"?

  2. Mage Silver badge
    Coat

    Pulling open the curtain

    It's not a powerful wizard, just a showman.

    Basically this isn't AI at all, but no different to 1980s "Expert Systems", just using different programming, basically a human curated database with human defined rules.

    1. LionelB

      Re: Pulling open the curtain

      I agree that this sounds like a retrograde approach - like 80s "GOFAI" (Good Old Fashioned AI), which pretty much hit a brick wall back then.

      Much more promising is the Deep Learning approach, where systems are designed to learn to make their own rules from interaction with the environment.

    2. PatientOne

      Re: Pulling open the curtain

      'Basically this isn't AI at all, but no different to 1980s "Expert Systems"'

      Pretty much what I was going to say.

      Expert Systems (ES) work by following set rules, or a model. An ES parses input and calculates the probable result, and is consistent in this approach. That model, however, is created for the ES: It does not build the model itself. Advanced ES can adjust the model within parameters if results show the predictions made are inaccurate, but they still have to fit within the bounds of the supplied model and rules.

      For Artificial Intelligence (AI) that model would be adjusted as the AI learns: It would process the data as per the model, then compare the predicted result with the actual result and start shifting weightings in the probabilities. In medical terms, this would be the process of taking symptoms and calculating the cause. The more cases presented, the better the model will be, but the AI could scrap the model entirely and build a new one from the raw data if needed: Something an ES can't do.
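A minimal sketch of that contrast (the medical rule, threshold, and update step here are entirely made up for illustration): the expert system's rule is fixed by a human, while the learner compares predictions with actual results and shifts its own weighting.

```python
# Toy contrast (all rules and numbers hypothetical): an expert
# system applies a fixed, human-authored rule, while a learner
# shifts its own weighting when predictions turn out wrong.

def expert_system(symptom_score):
    # Fixed rule: the 0.5 threshold never changes.
    return "flu" if symptom_score >= 0.5 else "cold"

class Learner:
    def __init__(self):
        self.weight = 0.5  # starts from the same threshold

    def predict(self, symptom_score):
        return "flu" if symptom_score >= self.weight else "cold"

    def update(self, symptom_score, actual):
        # Compare the prediction with the actual result and shift
        # the weighting toward the cases it got wrong.
        if self.predict(symptom_score) != actual:
            self.weight += 0.1 if actual == "cold" else -0.1

learner = Learner()
for score, actual in [(0.4, "flu"), (0.45, "flu")]:
    learner.update(score, actual)

print(expert_system(0.4))    # the fixed rule still says "cold"
print(learner.predict(0.4))  # the learner now says "flu"
```

After two misdiagnosed cases the learner's threshold has moved while the expert system's has not, which is the "shifting weightings" point in miniature.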

      For humans: We cheat. We are prone to missing details and skipping steps when processing information. This is both a strength and a weakness of the human brain, and why we sometimes fail to register things at first glance and need several moments to realise them (there is a bicycle approaching; that is a man in a dress; that car isn't going to stop; that is someone I know). As a result, we can react quickly to the unusual, but we can miss things along the way. It's wired into us thanks to evolution: If you can't react quickly to a potential danger, you don't survive; but if it's safe, then take all the time you need to check, double check, and realise you were wrong in your initial assumption and that root vegetable really doesn't look like someone's face.

      So there has been a choice: To develop AI to be consistent in accuracy or to mimic the human brain and accept it will make mistakes. The last I heard, the aim was to remain accurate: We've enough natural stupidity without introducing more artificially.

    3. genghis_uk

      Re: Pulling open the curtain

      For me it was early-'90s Expert Systems, but much the same...

      We were doing this sort of thing in Prolog when I was at university. On the hardware side, the parallel gated logic looks like a PAL from the same era, not even up to FPGA complexity where at least you could get some interesting interconnects and feedback for training.

  3. RIBrsiq
    Holmes

    There's still a long way to go, of course, but I don't keep track of every stimulus, either.

    And, doubtless, had I had a different upbringing, I would have different priorities as to what is worth tracking.

  4. WatAWorld

    AI intelligence a long way from being real intelligence

    Which means AI intelligence can currently only be used to replace CEOs?

  5. thomas k

    "To be fair, the same goes for many humans, too"

    That was my first thought when I read the headline.

  6. Primus Secundus Tertius Silver badge

    Artificial stupidity

    I suggest the phrase 'artificial intelligence' be replaced by 'artificial stupidity'.

    This would enable researchers to claim great successes when filling in their next grant applications.

  7. badger31

    Knowledge ≠ understanding

    We are struggling at the moment to get machines to learn some knowledge without forgetting it when they learn something new. When (if) machines actually understand that knowledge, then things will get really interesting.

    Also, in pedant mode, it's "forward model", not "forward mode".

    1. LionelB

      Re: Knowledge ≠ understanding

      When (if) machines actually understand that knowledge, then things will get really interesting.

      What would it mean for a machine to "understand" something? How would you know?

      I'm not sure intelligence, artificial or biological, is really about "knowledge" and/or "understanding". Maybe it's more about how to interact with a rich environment (including other agents, intelligent or not).

    2. breakfast

      Re: Knowledge ≠ understanding

      This is where a lot of people in AI research go wrong in my view. They are treating the problems of intelligence as technical, when the underlying questions that we need to answer are overwhelmingly philosophical.

      That is bad news in terms of getting reliable answers, because philosophers seem to be pretty bad at that; but until we have a much clearer idea of what consciousness and understanding are, how can we imagine we could simulate them? Even if one relies on the concept of consciousness as an emergent property of the system, expecting something mysterious to appear in a system of sufficient complexity seems little different from superstition.

  8. Uffish

    More money than sense.

    "Researchers at Facebook have attempted to build a machine capable of reasoning from text".

    That says it all really. Cretins.

  9. Anonymous Coward
    Facepalm

    Risks of AI?

    Shame that all those boffins are studying the risks of a fantasy technology (a dead horse thoroughly beaten by sci-fi authors) while dumb automation steadily turns the world to shit.

    1. LionelB

      Re: Risks of AI?

      Shame that all those boffins* are studying the risks of a fantasy technology (a dead horse thoroughly beaten by sci-fi authors) while dumb automation steadily turns the world to shit.

      Easy exercise: think about some other "fantasy technologies" that look ... a little less fantastical with hindsight of, say, a few decades - or some other horses flogged to death by sci-fi authors (moon landings, atomic weapons, robotics, the internet, mobile communications, virtual reality, bio-prosthetics, machine translation, machine face recognition, genetic engineering, ...).

      *boffins = people who know more about shit than you do

      1. Anonymous Coward
        Anonymous Coward

        Re: Risks of AI?

        Ok I'm doing your easy exercise... Wow, I get it!! In hindsight AI looks even more fantastical than it did 50 years ago!

  10. Sleep deprived
    Facepalm

    Facebook builds "a machine capable of reasoning from text."

    If they feed said machine with Facebook texts, it might go crazy before even achieving intelligence...

  11. The Vociferous Time Waster

    AI is great with big data...

    Saw a great talk on this at TEDxCERN earlier in the month. Basically AI is great at big data decisions but not so good at the small data decisions.

    https://m.youtube.com/watch?v=IBoJcDlqmo0&feature=youtu.be

  13. Anonymous Coward
    Anonymous Coward

    mathematics v language

    The "garden" example reminds me of an A Level Mathematics text book that used a reading comprehension exercise to show the difference in the rigour of mathematics compared to normal language.

    The answer that the AI bot should have given would depend on whether you're building an introvert or extrovert bot. The extrovert bot will say "garden", but the introvert bot will say "garden, assuming that Mary took the ball with her into the garden and no-one has removed it."

  14. Anonymous Coward
    Anonymous Coward

    Creation of Artificial Intelligence

    It's interesting that many intelligent humans struggle to create intelligent machines over many years, yet many of them 'believe' their own intelligence was due to random undirected changes over millions of years...

    Perhaps "I don't have enough faith to be an atheist" by Geisler, Turek, Limbaugh should include a discussion on AI...?

    1. Uffish

      Re: "their own intelligence"

      My take on the subject is that intelligence is the result of natural selection, good or bad luck in choosing one's parents, good or bad luck as a foetus, nurture, environment and choice.

      "Random undirected changes over millions of years" is an intentionally misleading phrase.

  15. Anonymous Coward
    Anonymous Coward

    Why...

    do some people not understand the difference between "intelligence" and "artificial intelligence" or AI?

    Intelligence refers to something which is not only self-aware and able to constantly learn and adapt, but which also understands how each new experience relates to itself and to others in the world (plus all the other knock-on effects on things which are not directly related). The importance of "who am I?" cannot be overstated here.

    Artificial intelligence is something which can appear to be intelligent (even though by definition it is not) - for example chat bots. In much the same way that artificial leather looks like leather but is not leather.

    When will the press learn the difference and stop publishing nonsense AI stories that fear monger "Terminator" scenarios?

    1. Anonymous Coward
      Thumb Up

      Re: Why...

      > When will the press learn the difference

      Never? I absolutely agree with your distinction, but we're the minority. Most people have fallen into the mindset that Weak AI is useful today, and is rapidly approaching full intelligence.

      On second thought, AI Winter II should arrive any year now. Young kids today, growing up with the technology, instinctively scoff at AI. Adults can see the writing on the wall: companies are trying to use weak-ass AI for jobs that demand real intelligence, like driving. The true believers of the cult of academic-industrial AI (lol boffins) will be the last to wake up to reality. When they're working at Starbucks you'll know the AI bubble is over.

      AI Boffins, the new English Majors :)

  16. Tim Seventh
    Facepalm

    AI isn't that smart

    “Mary picked up the ball. Mary went to the garden. Where is the ball?” It should reply, “garden.”

    Actually, the most accurate answer to "where is the ball?" would be "in Mary's possession". There is no guarantee that "went to the garden" means the ball is 100% inside the garden: at any given millisecond the ball could still be outside it (e.g. held in a hand that is slightly outside the garden). If you've done animation, you'll know what I mean when a test for "in the garden" doesn't return true.
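For what it's worth, the kind of state tracking behind that example can be sketched in a few lines (the parsing here is a naive, hypothetical toy, not Facebook's actual method); tracking possession lets it answer "with Mary" rather than just the last place mentioned:

```python
# Naive sketch of bAbI-style state tracking for the
# "Mary picked up the ball" example. Illustrative only:
# the parsing assumes simple "X picked up the Y" and
# "X went to the Z" sentences.

def answer(story, question_object):
    holder = {}    # object -> actor carrying it
    location = {}  # actor/object -> last known place
    for sentence in story:
        words = sentence.rstrip(".").split()
        if "picked" in words:   # "Mary picked up the ball"
            holder[words[-1]] = words[0]
        elif "went" in words:   # "Mary went to the garden"
            location[words[0]] = words[-1]
    if question_object in holder:
        actor = holder[question_object]
        return f"with {actor}, in the {location.get(actor, 'unknown')}"
    return location.get(question_object, "unknown")

story = ["Mary picked up the ball.", "Mary went to the garden."]
print(answer(story, "ball"))  # -> "with Mary, in the garden"
```

Because possession is tracked separately from location, the sketch can express "in Mary's possession" as well as the plain "garden" answer the article describes.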

    This example just tells us that the real issue isn't that the AI still has a long way to go to true intelligence, but that people "are still pretty dumb": they fail to see that their own reasoning lacks true intelligence, and they implement that dumbness in the AI.

    Next time, try asking the AI, "I want hot water". Just beware the surprise pouring of 100-degree hot water when it hasn't been given an assumed temperature for "hot", or a definition of a service or container for "want".

    1. returnstackerror

      Re: AI isn't that smart

      When AI can do standup improv comedy [well] then it will have reached its zenith.

Biting the hand that feeds IT © 1998–2019