Checkmate: DeepMind's AlphaZero AI clobbered rival chess app on non-level playing, er, board

DeepMind claimed this month its latest AI system – AlphaZero – mastered chess and Shogi as well as Go to "superhuman levels" within a handful of hours. Sounds impressive, and to an extent it is. However, some things are too good to be completely true. Now experts are questioning AlphaZero's level of success. AlphaZero is …

  1. Mage Silver badge

    Google /Alphabet PR

    See Title.

    1. Charlie Clark Silver badge

      Re: Google /Alphabet PR

      Not just. The advances in self-teaching demonstrated by going from AlphaGo to AlphaZero are very impressive. As is the work done by Google on its own TPU chips.

      1. Anonymous Coward
        Anonymous Coward

        Re: Google /Alphabet PR

        The advances in self-teaching demonstrated by going from AlphaGo to AlphaZero are very impressive. As is the work done by Google on its own TPU chips.

        If they're that impressive, why did they have to rig the games? Is anyone planning to sell a block of shares soon?

        1. Charlie Clark Silver badge

          Re: Google /Alphabet PR

          why did they have to rig the games

          Despite what the article suggests, the games weren't rigged at all and Google had no influence in them. In terms of hardware it wasn't a level playing field, but the hardware advantages don't really explain the difference in the scores.

          I'm sure Google would be more than happy for a rematch with beefier opponents, though it might be worth noting that more hardware might not help the other side much.

          1. BarryUK

            Re: Google /Alphabet PR

            They kind of do - you would expect Stockfish running on 100 CPUs to beat Stockfish running on 1 CPU, which seems to be about the level of disparity here. Unless the computing power is the same, you can't say whether Google's algorithm is superior.

            Given how amenable chess is (unlike Go) to the brute-force approach, I would be surprised if the neural network AI could really produce a better engine.

          2. ma1010
            WTF?

            Re: Google /Alphabet PR

            @Charlie Clark

            Like HELL the hardware doesn't matter. Let's you and me have a motorcycle race. I get to ride my 1800 CC Gold Wing. You get a 50 CC moped. All else being equal, I will win quite easily.

            1. Charlie Clark Silver badge
              FAIL

              Re: Google /Alphabet PR

              @ma1010

              Sure, let's go but I get to choose the surface we ride on.

              Lots of computer problems do not scale linearly. Google did not deliberately set up a crippled opponent: the advantage of AlphaZero is mainly in the approach and the training.

          3. skepticdave

            Re: Google /Alphabet PR

            It would be interesting to see AlphaZero against a grandmaster under tournament conditions. There was nothing in its play to suggest that it could outplay a GM. And Stockfish (under its crippled conditions) made mistakes that a club player could have exploited.

          4. skepticdave

            Re: Google /Alphabet PR

            "I'm sure Google would be more than happy for a rematch with beefier opponents,"

            I beat the blind cripple in a boxing match - easily! I COULD have beaten Anthony Joshua, but I simply CHOSE to fight the blind cripple. But trust me, if my opponent had been Anthony Joshua, I would STILL have won. And you know this must be true, because my friend says so.

        2. Randy112235

          Re: Google /Alphabet PR

          They didn't rig the games; half the stuff in the article is FUD. The two TPUs AlphaZero ran on while playing Stockfish are not that much more powerful than the CPU Stockfish ran on, except that AlphaZero needs TPUs (similar to GPUs) and Stockfish needs a CPU.

          1. iron Silver badge

            Re: Google /Alphabet PR @Randy112235

            > The two TPUs AlphaZero ran on while playing Stockfish are not that much more powerful than the CPU Stockfish ran on

            You need to visit an optician: the article clearly states AZ ran on 64 TPU2s and 5,000 TPU1s. If 5,064 specialised chips are "not that much more powerful" than a single general-purpose x86 CPU then clearly Google has invented a total lemon of a chip that should be consigned to the dustbin of history ASAP.

  2. Notas Badoff

    The lies come at no extra charge!

    So this sounds a lot like strategies used while benchmarking _our_ systems vs. _their_ systems. Whatever you could do to make your stuff look N times better, lies included.

    Yes, this is how good that mainframe model is. Look at our benchmark numbers! (Done on a 4-CPU installation, and we're selling you the 2-CPU installation...)

  3. Lysenko

    Let's be realistic...

    What is this stuff for? What is Google for? This isn't about playing chess or helping people search the internet, it's about advertising. Google is an ad delivery network that uses a search engine as bait and this AlphaZero thing is an ad delivery optimisation engine that just so happens to be able to play chess as a side effect.

    On that basis, it is perfectly understandable that Google refuses to publish all the test data. Sophistry, mendacity and psychological manipulation are the pillars on which the advertising industry stands. Criticising an ad firm for being economical with the truth is like criticising the sea because you can't drink it.

    1. DocJD

      Re: Let's be realistic...

      As I understand it, the company is now called Alphabet at the top level because they are, in fact, more than one company. Google is still the search engine/advertising part, but there are other companies split off from Google under the Alphabet umbrella. The fact that these other companies may not be making a profit right now doesn't mean they don't plan to at some time in the future.

      1. Jellied Eel Silver badge

        Re: Let's be tax efficient

        The fact that these other companies may not be making a profit right now doesn't mean they don't plan to at some time in the future.

        Now that's something an AI could help with, assuming the CFO/Treasurer lets it. Alpha-AI is an R&D shop, so tax credits. If it generates a loss, no tax and possibly some relief. Then maybe once it's working, the AI servers are installed in Alphabet's mysterious barges and off-shored. Then it can provide AI as a service to other Alphabet companies to make sure they're not profitable. Shifting revenues and costs around subsidiaries is far more profitable than shifting virtual game pieces...

  4. Rebel Science

    DeepMind is clueless about how to achieve AGI

    DeepMind has never made a breakthrough in AI and never will. They essentially apply well-known techniques invented by others (Monte Carlo search, deep learning and reinforcement learning) to games chosen for their limited number of behavioral options. I would be infinitely more impressed if they made a robot that could walk into any generic kitchen and fix a meal of scrambled eggs with bacon, toast and coffee.

    As an aside, Demis Hassabis and his team at DeepMind are on record as suggesting that the human brain uses backpropagation for learning. They published a peer-reviewed paper on it. I cringe when I think about it.

    1. Dave 126 Silver badge

      Re: DeepMind is clueless about how to achieve AGI

      Google X, and later Alphabet, did have walking robots, but they sold off Boston Dynamics to Softbank Group - presumably because the most promising market sector was the military.

      Many of the other skills involved in the cooking task you outline, such as image recognition and environment awareness, are still being researched by Alphabet.

      1. Tomato Krill

        Re: DeepMind is clueless about how to achieve AGI

        Well, they bought Boston Dynamics first, so they don't get a bunch of credit for that...

  5. John Savard

    Flawed, Perhaps, but Valuable Still

    Comments on the initial AlphaZero announcement fairly quickly took note of the large floating-point power used by AlphaZero, and the fact that Stockfish's hash tables were restricted to 1 GB.

    But it is also a fact that chess experts noted AlphaZero's play included consideration of very subtle positional factors - something Stockfish does not excel at, though it is known to be a strength of the commercial chess engine Komodo.

    It may well be that if one tried using equal hardware power to play chess by techniques similar to those used by AlphaZero, the result wouldn't be much better than had been achieved by the Giraffe chess engine. That took 72 hours, rather than 4, to teach itself to play chess - and it only got to International Master level, significantly inferior to that of Stockfish.

    The thing is, though, it is still very significant to prove that something can be done at all, even if not necessarily in an efficient manner. Something can be a significant scientific advance in AI without being the most cost-effective way to make a strong chess engine.

    It may well be that AlphaZero's feat, by demonstrating the validity of the neural network and Monte Carlo search approaches, will allow technology from Giraffe to be incorporated into programs like Stockfish to make them better.

    1. sabroni Silver badge

      Re: it is still very significant to prove that something can be done at all

      Indeed. They've proved that the more computing power you throw at a problem the faster you can fix it. TBF we did know that already. The way this "experiment" was designed, it's clear it's mainly an advert for Google.

      1. Charlie Clark Silver badge

        Re: it is still very significant to prove that something can be done at all

        They've proved that the more computing power you throw at a problem the faster you can fix it. TBF we did know that already.

        What we know is that this is rarely the case without doing work to improve the algorithms and parallelisation. Google has demonstrated that it has done this and also worked on improving the hardware by making it more suitable for the task at hand.

    2. MonkeyCee

      Re: Flawed, Perhaps, but Valuable Still

      "That took 72 hours, rather than 4, to teach itself to play chess "

      That 4 hours is meaningless.

      It took 4 hours on 64 TPU2s and 5000 TPU1s. The quoted researcher reckons that's about 2 years per TPU (didn't specify gen 1 or 2), so being conservative AlphaZero took the equivalent of 128 YEARS (over a million hours) to get to the level it's at. Or if the TPU1s count, over ten thousand years.

      1. Anonymous Coward
        Anonymous Coward

        Re: Flawed, Perhaps, but Valuable Still

        Two years total if they had used one TPU: 4 hours on 5,064 TPUs works out to roughly 2 years on a single TPU. I have no idea where you are getting 128 years or more from.
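
        Back-of-the-envelope with the numbers quoted above (4 hours on 64 TPU2s plus 5,000 TPU1s, ignoring any difference between the two generations), in a few lines of Python:

          # Rough device-hours arithmetic using the figures quoted in this thread.
          tpus = 64 + 5000              # TPU2s + TPU1s reportedly used for training
          hours = 4                     # wall-clock training time
          tpu_hours = tpus * hours      # 20,256 device-hours in total
          years_on_one_tpu = tpu_hours / (24 * 365)
          print(tpu_hours, "TPU-hours, i.e. about", round(years_on_one_tpu, 1), "years on a single TPU")

        That's where the "roughly 2 years" comes from: it's the total compute expressed as single-TPU time, not 2 years multiplied by the number of chips.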

  6. Milton

    AI = Marketing = Lying

    I've bored the assembled commentardsphere more than once by pointing out that there is presently no such thing as "AI" and probably won't be for at least another decade—not the "artificial intelligence" that people meant when using the term for the last 50 years, before the marketurds got their slimy hands on it, anyway.

    But if even the Reg simply won't be bothered to call out this brazen misuse of the term, slapped onto anything that uses "machine learning" techniques, I guess there's little chance for the rest of the media, scientifically and technically illiterate as 97% of it is.

    Let's be clear, though, that the fibs, bias and propaganda associated with the various Alpha achievements are absolutely to be expected in this context. "AI" is being relentlessly hyped and exaggerated, the label is being misused, sometimes hilariously, machine learning tech is frequently being misapplied and wasted, and everyone who might have a dollar to spend is being told they've got to have it (usually via eyewateringly awful web ads). We saw this with "cloud", when the 1970s architecture of connecting remotely to powerful computing resources was resurrected as if it was a Wonderful New Thing; we've been seeing it with the Internet of Shyte, as every greedy idiot on the planet comes up with increasingly ludicrous reasons for connecting your toaster, dog, toothbrush, greenhouse and left lower molar to the internet, thereafter to be infected by malware and used for mining {Enter This Week's New Bit Currency Here} before it steals your identity, money, wife and aforementioned dog.

    If politicians and marketurds are the scourge of our age, it's because they have one thing above all else in common: lies, lies, constant lies.

    And poor dog.

    1. diodesign Silver badge

      Re: AI = Marketing = Lying

      "But if even the Reg simply won't be bothered to call out this brazen misuse of the term"

      Holy balls, we just published hundreds of words calling out DM's approach - and we're still the bad guys. We use "AI" as shorthand for various related technologies just as "the cloud" covers IaaS, PaaS, SaaS, etc. The exact tech is defined; AI is used to avoid repeating the same phrase over and over. We're not a dry technical manual.

      You may have noticed we bounce between terms – IBM, Big Blue, Intel, Chipzilla, Microsoft, Redmond, crypto-currency, digi-dosh, etc – because it's more interesting to read, easier on the eye and mind, and still conveys overall the same message.

      Trust me, trust us, after decades of writing and publishing, combined as a team, an article with the same terms repeated over and over and over and over stops being engaging – and becomes bland documentation.

      C.

    2. Androgynous Cupboard Silver badge

      Re: AI = Marketing = Lying

      Techies are well aware that artificial intelligence is not intelligence, but it's still the blanket term that's used for this range of technology. Like the use of "hacker" for "cracker", your argument was lost a long time ago. See also "decimate", "tea" instead of infusion, the list goes on.

    3. Charlie Clark Silver badge

      Re: AI = Marketing = Lying

      Any form of technology sufficiently advanced can be considered magic

      Add to this: any form of inference engine sufficiently advanced can be considered intelligence. Games may be an extremely limited domain, but even so the computer has effectively taught itself how to play and beat the best. This may make it more of an idiot savant than an Einstein, but I'm reasonably happy to class this as a kind of intelligence, similar to any rules-plus stuff like claims handling that we currently employ people to do.

  7. AMBxx Silver badge
    Coat

    Did it come up with anything new?

    I'm crap at Chess, but do understand the notion that there are certain patterns of opening moves, mid-game, end-game, etc. Did the AI come up with any new approaches?

    Just like to know.

    1. Anonymous Coward
      Anonymous Coward

      Re: Did it come up with anything new?

      Yes. It now sells more adverts.

    2. astrax

      Re: Did it come up with anything new?

      AlphaZero is causing a stir in the chess community. I'm a big fan of agadmator's YouTube chess channel and I watched an analysis of one of the games between AlphaZero and Stockfish. The greatest surprise was the way AlphaZero willingly gave up a whole piece (a Knight) to keep up its own momentum and deny Stockfish the chance to develop its pieces. Note this behaviour is very human; usually a chess engine will sac(rifice) a piece for some tangible, strategic gain or to implement a tactic.

      This is the key difference. The point to take away from this is that Google have not merely developed a more powerful chess engine that runs on more powerful hardware; rather, they have created something that behaves much like an *extremely* strong human Grandmaster, not simply a super-powerful logic-monster. This will probably change the way elite chess players train for tournaments.

      Keep in mind that even with the hardware handicap, Stockfish could analyse up to 70 million positions a second and play with an Elo rating of 3300+. That AlphaZero took just 4 hours to learn the game from scratch and beat a well-honed engine like Stockfish is pretty impressive.

      Linky: https://www.youtube.com/watch?v=NaMs2dBouoQ&t=379s

      1. Sil

        Re: Did it come up with anything new?

        I am no expert, but the imposed time management (1 min per move) does not seem appropriate if the standard is no limit per move / nn minutes total max.

        It would be unfair to humans / computers following this standard.

        AlphaZero's victory is impressive, but if it really is just throwing a lot of processing power at well-known techniques such as Monte Carlo search and reinforcement learning, it becomes much less impressive. Any company with access to supercomputers could do the same.

    3. Dave 126 Silver badge

      Re: Did it come up with anything new?

      > I'm crap at Chess, but do understand the notion that there are certain patterns of opening moves, mid-game, end-game, etc. Did the AI come up with any new approaches?

      I haven't looked deeply into the chess games, but AlphaGo certainly came up with moves and strategies that human grandmasters said they had never seen before.

    4. Anonymous Coward
      Anonymous Coward

      Re: Did it come up with anything new?

      For a different viewpoint, see this article: https://en.chessbase.com/post/the-future-is-here-alphazero-learns-chess

      These things are relative, and compared to grandmasters I too am a crap chess player. But I have spent a lot of time playing and studying the game, and I think The Register's article is far too dismissive. Chess players see AlphaZero as playing at a completely different level from any previous chess engines. Even though programs like Stockfish can easily make mincemeat of any human player - including the world champion - they still play in a distinctive, highly tactical style. There are still positions that completely baffle them, because all they really do is apply minimax to the deepest level they can.

      From the ChessBase article linked to above, it would seem that AlphaZero combines the strengths of previous chess engines with those of very strong human players. The games it played against Stockfish are very impressive, as it completely outthinks Stockfish in a very human - or, rather, superhuman - way.

      Time will tell. If it's a genuine breakthrough, this could be one sign that AI is real. Remember, there was a lot of difference between the Wright Brothers' collection of bicycle parts and, say, a 747 - but the principles are the same and the time to go from one to the other was not all that long.

  8. Anonymous Coward
    Anonymous Coward

    Bitter people are bitter...

    Some guy who didn’t build an AI algorithm to play chess doesn’t understand what Google were testing.

    For example, Stockfish won’t benefit from additional hardware. Stockfish being configured without an opening book is to compare engine-to-engine strength (which is probably academically more interesting).

    I am going to guarantee that in a few weeks DeepMind will redo the tests, jumping through the hoops that the waste-of-space detractors invent.

    1. caesium

      Re: Bitter people are bitter...

      "Stockfish won’t benefit from additional hardware."

      Of course it would, especially given the small time limit. Deeper search translates to stronger play with alpha-beta, all else being equal. It might not have mattered significantly if the games had used a standard time limit.
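
      To make the depth point concrete, here's a toy sketch of depth-limited alpha-beta (negamax form) on Nim - take 1-3 stones, last stone wins - with a crude "unclear" score of 0 at the horizon. The game, the move generator and the horizon value are all stand-ins of my own, nothing like Stockfish's actual evaluation or move ordering; it's only meant to show why depth matters:

        # Depth-limited alpha-beta (negamax form) on toy Nim: take 1-3 stones,
        # whoever takes the last stone wins. Scores are from the mover's view:
        # +1 win, -1 loss, 0 "unclear" when the search runs out of depth.
        def search(stones, depth, alpha=-1.0, beta=1.0):
            if stones == 0:
                return -1.0                  # the last stone is gone: side to move has lost
            if depth == 0:
                return 0.0                   # horizon reached: call it unclear
            best = -1.0
            for take in (1, 2, 3):
                if take > stones:
                    break
                score = -search(stones - take, depth - 1, -beta, -alpha)
                best = max(best, score)
                alpha = max(alpha, score)
                if alpha >= beta:            # opponent has a refutation elsewhere: prune
                    break
            return best

        print(search(13, depth=2))   # 0.0 - too shallow to see the forced win
        print(search(13, depth=13))  # 1.0 - deep enough: 13 stones is a won position

      The same algorithm, given more time or faster hardware to reach greater depth, simply plays better - which is why a fixed minute per move and a 1 GB hash are not neutral choices.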

      "Stockfish being configured not using an opening book is to compare engine to engine strength"

      Which is incompatible with how this was advertised. If one of the ideas is to compare machine learning with human tuning, then we need to use all of the relevant strategy's resources, not ban a certain type of human tuning (which Stockfish was designed to play with) and then declare victory...

      "jumping through the hoops that the waste-of-space detracters invent."

      No hoops here, just use standard tournament rules, vanilla SF configuration and publish _all_ of the games. Or else we'll conclude Google spiced up their (already interesting) work.

  9. Red Ted

    Painting oneself in the best possible light

    As with any research activity, it is to the benefit of the researchers to paint themselves in the best possible light. This then gets headlines for them and their funding body.

    Always the devil is in the detail.

    The same applies to the research that leads to "Cure for cancer found" headlines.

  10. Simon Rockman

    Chess and Go? It should be playing online poker.

    Maybe it already is...

  11. Missing Semicolon Silver badge
    Boffin

    Peer review test

    Given the current issues around peer review, it will be interesting to see whether the valid criticisms in the article are reiterated by the reviewers and result in the paper being updated.

  12. Ugotta B. Kiddingme

    ... just until I need glasses?

    "playing against itself, a technique in reinforcement learning known as self-play."

    For some reason, I read that second word as "with"

  13. hellwig

    This is NOT AI

    The neural network took the positions on the board as input, and spat out a range of moves and chose the one with the highest chance of winning at every move. It learned this by self-play and using a Monte Carlo tree search algorithm to sort through the potential strategies.

    I f*cking KNEW it! I said this months ago when it beat Go. At some point, it could catalog enough moves to basically know where to go from any point. So it basically kept playing itself at each move, and had enough processing power to compute probably billions of moves in the one minute it allocated to each real move. The fewer possible moves remaining, the more likely it was to find a successful path through to nearly guarantee against a loss (it tied 72% of the time).

    In high school I tried this approach with a Mancala game. I gave the game a few seconds to basically try each possible move and build up a tree of most-likely-to-succeed moves. It sucked because I didn't really understand proper Mancala strategy and my high-school coding skills sucked, but apparently I was just lacking the hardware. If only I'd been programming in a 32-bit instruction set at the time.
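
    For what it's worth, the "Monte Carlo" part on its own isn't exotic either. The crudest version - flat Monte Carlo, a distant relative of the tree search AlphaZero uses - just plays a pile of random games from each candidate move and keeps whichever move wins most often. Here's a toy sketch of that idea on Nim (take 1-3 stones, last stone wins) rather than Mancala; it's my own illustration, and AlphaZero's actual MCTS instead grows a guided tree with the neural network suggesting moves and estimating win chances rather than relying on random playouts:

      import random

      # Flat Monte Carlo move choice on toy Nim: from each candidate move, finish
      # the game many times with uniformly random moves and pick the move with
      # the best observed win rate. A sketch of the flavour, not AlphaZero's MCTS.
      def random_playout(stones, player):
          """Play random moves to the end; return the index (0 or 1) of the winner."""
          while True:
              stones -= random.randint(1, min(3, stones))
              if stones == 0:
                  return player          # this player took the last stone and wins
              player = 1 - player

      def choose_move(stones, playouts=2000):
          best_move, best_rate = None, -1.0
          for take in range(1, min(3, stones) + 1):
              if take == stones:
                  return take            # taking the last stone wins outright
              # After we (player 0) take `take`, it is player 1's turn to move.
              wins = sum(random_playout(stones - take, player=1) == 0
                         for _ in range(playouts))
              rate = wins / playouts
              if rate > best_rate:
                  best_move, best_rate = take, rate
          return best_move

      print(choose_move(6))   # almost always 2: leaving 4 stones is the winning reply

    Swap the random playouts for a trained network's move suggestions and value estimates and add a proper search tree on top, and you get something like AlphaZero's setup.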

    1. Randy112235

      Re: This is NOT AI

      You can't brute-force chess, let alone Go, with the computing power we have now. The approach you are describing is actually similar to what normal engines do now: brute force plus pruning.

      1. Destroy All Monsters Silver badge

        Re: This is NOT AI

        The way to think about NN processing is that it's compression (throwing away "irrelevant" details).

        Totally readable with a large picture of the bottled dog of John Bull:

        https://www.quantamagazine.org/new-theory-cracks-open-the-black-box-of-deep-learning-20170921/

        Also check out the Youtube video.

        1. slaughterteddy

          Re: This is NOT AI

          Thanks for the link. Most interesting, given that in late adolescence rats (and, it is believed, humans) shed maybe 10% of their neurons. This might indicate a reason.

  14. Destroy All Monsters Silver badge
    Mushroom

    GOGGLE RAGE == GOOGAGE!!

    A spokesperson from DeepMind told The Register that it could not comment on any of the claims made since “the work is being submitted for peer review and unfortunately we cannot say any more at this time.”

    This not being legal proceedings or an IPO manoeuvre, I call utter bullshit by the usual suspects.

    Of course Google is at liberty to discuss AlphaWhatsits. The peer-reviewing researchers certainly won't mind, and one hopes that they evaluate the paper on its merits, and reject it if they hear the show was rigged.

  15. Anonymous Coward
    Anonymous Coward

    Science or marketing?

    A key element of science is reproducible results, and expanding the wealth of knowledge for the benefit of mankind. If DeepMind aren't releasing any of their code, how is what they're doing 'science'?

    Sounds more like marketing to me.

    1. Bob Dole (tm)

      Re: Science or marketing?

      Ask the IPCC to release the underlying unmodified data they've used. Oh wait...

    2. Anonymous Coward
      Anonymous Coward

      Re: Science or marketing?

      Now that's a great suggestion... in principle. Have you any idea how much of the "science" that is currently being done relies heavily on secret code?

      And that's research that deals with really important matters - not just playing a game as a first approach to developing AI techniques.

  16. geremore

    Check the games won by Alpha

    What the "standard"chess programs fail to see, are crippled pieces. Almost all games won, there was a material balance or even a plus for Stockfish, but that progam fails to notice a long inactivity of its pieces.
