The biggest barrier to AI? It may be the AI companies themselves

It seems the answer to every problem these days is artificial intelligence. Want to reduce traffic congestion? You need AI. Cut down on fake news? AI. Understand your business better? AI. Create next-generation sliced bread? AI. But as has become repeatedly and painfully clear – most recently with a live video of people …

  1. macjules Silver badge
    FAIL

    Two words for that ..

    YouTube and Facebook both said they weren't able to block all the Christchurch shooting videos because their systems hadn't seen this sort of content before

    Total Bollocks.

    What about Daesh execution videos, and Daesh videos of children being taught to kill? The Marjory Stoneman Douglas school shooting was recorded straight onto YouTube. Were those videos banned? Were they hell.

    That Zuckerberg nose just keeps getting longer and longer ..

    1. I.Geller Bronze badge

      Re: Two words for that ..

      Do you know how much annotation costs? That everything has to be explained and annotated with text? Manually?

  2. sorry, what?
    Stop

    I have just two words for this article...

    Machine Learning.

    This isn't AI. Stop being a sheep. Stop following marketing's stupidity. Stop abusing the term AI. We don't have AI. We have nothing close to AI.

    The fact that this article has "AI" stamped all over it just gave me a red mist in front of my eyes and I couldn't actually focus enough to take in what it was about.

    1. I.Geller Bronze badge

      Re: I have just two words for this article...

      According to NIST TREC, a true AI should be able to answer Factoid and Definition (Other) questions. AI can – see IBM Watson.

      You have problem? Contact NIST and explain them they are fools?

      1. sorry, what?
        FAIL

        Re: I have just two words for this article...

        Them are fools.

  3. jmch Silver badge

    theory fail

    "theoretically AI should be able to do the tasks everyone claims it can: modern technology, particularly software, should be able to identify and make sense of pretty much anything we as humans do, and to learn with extraordinary rapidity."

    AI is trying to mimic 'natural' intelligence, and 'natural' intelligence is not 100% reliable. All humans make mistakes, even sometimes on very simple tasks. I know that I've sometimes misread what look like trivially easy words, occasionally dropped or fumbled objects that I normally find easy to manage, spilled glasses / mugs of liquids, tripped, stumbled, misspoke and generally made an arse of myself.

    The flip side of the huge power given by a flexible system that works 'well enough' on sometimes imperfect inputs is that it sometimes screws up. And occasionally really fucks up big-time. The flip side of being able to learn from your mistakes is that mistakes will be made in the first place. Anyone working in AI should be willing to acknowledge that and know, up front, that any and every AI system will occasionally fail. And just like humans can be reliably fooled by things like optical illusions, any AI can be fooled (as repeatedly shown in Register articles), and this will not go away *even if you build the best and most bad-ass AI system ever*.

    1. Anonymous Coward
      Anonymous Coward

      Re: theory fail

      Obviously, they are not looking for human-like intelligence, else they would just hire it off the street at minimum wage. They are, in fact, looking for super-human intelligence – i.e., what the best group of humans, each with a different specialized skill, would be capable of doing on the best days of their lives, doing it orders of magnitude faster, and doing all of that continuously. They know, and care, so little about natural intelligence that it's no wonder they wander off into "brute force" approaches to one problem or another, with no resemblance to how learning in general, and the brain in particular, actually work.

      All that's not to say that A.I. doesn't have potentially beneficial applications, only that their approach produces functionally similar behavior that in no way resembles intelligence as we understand it in humans and non-human critters.

  4. EBG

    Eh ?

    ...... making the design 1,000 times faster ......

    ....... proudly told potential customers that their chips would be 30 per cent more efficient ....

    Scratches head.

    1. jmch Silver badge

      Re: Eh ?

      '1,000 times faster' is not the same as 'can do 1,000 times the work in the same time'.

      Also, 'efficiency' is not quantified. Is it work done per unit time, per unit of power input, or per unit of cost?

  5. DougS Silver badge
    FAIL

    Waste of time

    The "if they just had more processing power" argument Untethered AI makes is just as dumb as "if they just had more training data". The problem is that current "AI" is woefully limited, and more of it doesn't improve it in any measurable way. The path to an AI that won't be fooled by editing/filtering/watermarking a video so that it looks "different" enough that it slips through, or better yet can recognize the difference between a mass shooting and some good ol' boys shooting up watermelons in the back 40, will not be found with current "AI" technology, no matter how much computational power you throw at it.

    1. HelpfulJohn

      Re: Waste of time

      Current "AI" isn't intelligence, it is a sorting mechanism on a look-up table. The program digitises something, for example a face, then compares the string of numbers for that face to a dataset of other strings of numbers. It's not magic, it's not clever, it's not even smart.
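      The look-up-and-compare process described there can be sketched in a few lines of Python. This is a toy illustration only: the names, vectors, and distance metric are invented for this example, and real face-recognition systems use learned embeddings with thousands of dimensions, not hand-picked numbers.

      ```python
      import math

      # Toy "look-up table": each known face is a digitised string of numbers.
      # These vectors are invented purely for illustration.
      known_faces = {
          "alice": [0.9, 0.1, 0.4],
          "bob":   [0.2, 0.8, 0.7],
      }

      def euclidean(a, b):
          """Plain Euclidean distance between two number strings."""
          return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

      def identify(face_vector, database):
          """Return the name whose stored vector is nearest to the input."""
          return min(database, key=lambda name: euclidean(database[name], face_vector))

      print(identify([0.85, 0.15, 0.35], known_faces))  # prints "alice"
      ```

      The whole trick is the `min` over distances: no understanding, just sorting by similarity, which is the commenter's point.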

      There is intelligence shown in "AI" systems, but it is that of the designers, who create marvellous algorithms to shorten the process of looking up stuff, with elegant code replacing the brute-force approach we were trained to use in coding school.

      That said, making the look-up-and-compare programs more efficient is simple: make larger processors. Instead of tiny little 64-bit CPUs, which are a handicap and a bottleneck, manufacture 2048-bit CPUs or larger, which will be able to swallow data in great, huge chunks with each clock cycle.

      The *software* doesn't need to be engorged, expanded or embiggened much, just the I/O bits and the comparator sub-routines.

      Making the RAM part of the CPU's cache could also help though I don't know if that is feasible.

      Basically, I'm describing a thing that looks and feels more like a brain than do the fragmented, tiny, inefficient things we currently use, especially if we were to 3D interconnect everything to everything else.

  6. DropBear Silver badge

    Calling the glorified pattern matchers we have "AI" is doing nobody any good. Nor does expecting them to do anything involving any degree of sapience - which is essentially everything that isn't happening in a lab, factory or warehouse.

  7. muhfugen

    What, again, is the problem with the video of the Christchurch killings being streamed? When an air strike is shown on the nightly news, people don't complain. When their tax dollars pay for it, they don't do any more than complain, and certainly they do not stop paying taxes, lest the government ultimately come after them with lethal force. It just seems as if they're uncomfortable with being reminded that the state (and corporations) are ultimately helpless at preventing harm from coming to them in their own neighborhoods by their neighbors.

    1. Anonymous Coward
      Anonymous Coward

      @muhfugen - Violence and lethal force

      are an exclusive government privilege, like collecting taxes.

    2. Anonymous Coward
      Anonymous Coward

      "It just seems as if they're uncomfortable with being reminded that the state (and corporations) are ultimately helpless at preventing harm from coming to them in their own neighborhoods by their neighbors."

      Well, yes, it's bad to show how easy that turned out to be, for sure. And various laws won't make it harder, up until we reach a state (double meaning deliberate) in which you wouldn't want to live.

      Pols always want to be seen "doing something about it", but the reality is... laws are broken all the time, and we don't, and don't want to, live in a world where they are perfectly enforced – giving someone else that level of power is called a totalitarian police state.

      Prohibition...

      War on (you name it) drugs...

      Hey, drunk driving is illegal, so is stealing and murder no matter what tools are used.

      Did any of those laws actually work out that well? If the populace goes along, they work and frankly aren't even needed. If not...well, last I checked I could get drunk and drive to the nearest drug dealer at any time of the day or night, perhaps picking up a package off someone's porch on the way home.

      That's reality. The only fix is to go back to teaching right from wrong and critical thinking again.

      A little self-discipline wouldn't hurt either. It's usually taught by parenting and actual discipline, but that's illegal now, and enforced – you get your kids taken away if you punish them.

    3. JLV Silver badge
      Mushroom

      Are you too dense to understand that the issue is that the perpetrator is the guy who posted the material? And that widespread distribution is precisely what that nutjob wanted, in order to motivate more hatred and bloodshed?

      That said, FB is, IMHO, much more at fault for allowing this type of cockroach to congregate in forums on its platform – and in fact for providing positive feedback (in the control-theory sense) via its recommendation algos – than it is for failing to deal as well as could be desired with eradicating those horrendous uploads once they were disseminated by people aiming to game the AI's recognition weaknesses.

  8. I.Geller Bronze badge

    one has to teach AI on a few bytes, using a regular dictionary's definitions, instead of terabytes

    The wrong technology is being used! Instead of teaching AI on terabytes, one has to teach it on a few bytes, using a regular dictionary's definitions. Only then will the AI really understand all texts, answer any question, and act reasonably.


Biting the hand that feeds IT © 1998–2019