AI shoves all in: DeepStack, Libratus poker bots battle Texas Hold 'em pros heads up

DeepStack is the first AI computer programme to beat professional poker players in a game of heads-up no-limit Texas hold’em, a team of researchers claim in a research paper out this week. The use of games to train and test AI is prolific. Surpassing human-level performance in a game is considered an impressive feat, and a …

  1. Anonymous Coward
    Anonymous Coward

    The use of games to train and test AI is prolific

    It sure is, but is that specific AI able to do anything else?

    1. Destroy All Monsters Silver badge

      Re: The use of games to train and test AI is prolific

      Of course not.

      It solves a particular toy problem in the "domain of AI".

      Not generally, but only by the fact that engineers looked at the problem, decided on a way to approach it using heavy computer gear, and dropped a neural network in there for feature extraction.

      So the intelligence is in the design team.

      1. Chris Miller

        Re: The use of games to train and test AI is prolific

        The rules of Poker really aren't that difficult - working out the odds of drawing to an inside straight does not require advanced mathematics. As I understand it, the skill of pro players lies in their ability to 'read' the others at the table, but this would obviously be difficult facing a computer screen.

        So, if the AI is equipped with a video camera and makes deductions such as "Tex rubs his ear when he's bluffing", I'm impressed. Otherwise, not so much.
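        For what it's worth, the inside-straight arithmetic is indeed basic: four outs among the unseen cards. A quick sketch (plain probability, no poker library needed):

        ```python
        # Odds of filling an inside straight (gutshot): 4 outs.
        outs = 4
        unseen = 47  # 52 cards minus 2 hole cards and 3 flop cards

        # Chance of hitting on the turn, and on turn-or-river:
        p_turn = outs / unseen
        p_turn_or_river = 1 - (1 - outs / 47) * (1 - outs / 46)

        print(f"turn only: {p_turn:.1%}")               # ~8.5%
        print(f"turn or river: {p_turn_or_river:.1%}")  # ~16.5%
        ```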

        1. MonkeyCee

          Re: The use of games to train and test AI is prolific

          Good poker players can still read you through a screen. Most of the actual play is interpreting what bids the other player(s) are making, and thus what they are claiming to have.

          So you can make pretty good play by making a bot "tight and aggressive" with fairly simple mathematics. You can teach someone who knows a bit about probabilities, and paying attention to how quickly a player makes a decision can also give quite a lot away. So less "Tex rubs his ear" and more "Tex took less than a second to decide, so Tex is probably not bluffing".

          Teaching a decent Magic player how to beat fish takes a few hours. Anyone who can handle high-school stats should be able to do the same after maybe 6-8 hours' practice. It's much less about what odds hand x has, and much more about how much of the pot is your money, and thus what your expected profit is if you call/raise.
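          To put numbers on the "how much of the pot is your money" point: the call/fold decision is just an expected-value comparison. A minimal sketch (the bet sizes and win probability are illustrative):

          ```python
          def call_ev(pot, to_call, p_win):
              """Expected profit of calling: win the pot with probability
              p_win, lose the call amount otherwise."""
              return p_win * pot - (1 - p_win) * to_call

          # Facing a 50 bet into a 100 pot (so 150 to win) with ~20% equity:
          ev = call_ev(pot=150, to_call=50, p_win=0.20)
          print(ev)  # -10.0: negative expectation, so fold
          ```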

          You can memorise the 20-25 hold'em hands that you should play, and fold the rest. That alone will allow you to beat most face-to-face players.
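          A "play these, fold the rest" pre-flop chart really is just a lookup table. A toy sketch (the range below is illustrative, not the canonical 20-25 hands):

          ```python
          # Toy tight pre-flop filter: raise a short list of strong starting
          # hands, fold everything else. "s" = suited, "o" = offsuit.
          PLAYABLE = {
              "AA", "KK", "QQ", "JJ", "TT", "99",
              "AKs", "AKo", "AQs", "AQo", "AJs", "KQs",
          }

          def preflop_action(hand):
              return "raise" if hand in PLAYABLE else "fold"

          print(preflop_action("AKs"))  # raise
          print(preflop_action("72o"))  # fold
          ```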

          As always with AI research, it's only really useful if it's a general rather than a specific AI. That's why DeepMind is actual AI, whereas Deep Blue was a specific problem solver.

        2. Anonymous Coward
          Anonymous Coward

          Re: The use of games to train and test AI is prolific

          Whilst I'd agree that "the rules of Poker really aren't that difficult", Texas hold'em is one of the few card games where you don't need to hold the best hand to win; you just need to convince your opponents that you hold a better hand than they do. That's largely done by the way you bet, but it's not the only factor when you're playing against other human players, who, simply by virtue of being human, are influenced at a subconscious level by body language (even whilst being aware of that factor and employing it as part of their own game play).

          However, if the inputs to the AI don't include that factor, then the human player is at a disadvantage, because reading body language is one of the most important aspects of human game play.

          What makes me question the value of what they've achieved so far, though, is that they're only really handling the end-game, i.e. the one-vs-one heads-up phase, where skill and tactics become less relevant and the randomness of the cards you've actually been dealt becomes the greatest factor.

  2. frank ly

    A learning experience

    "... average win rate of more than 450 mbb/g (milli big blinds per game) ..."

    I've looked in some forums etc to learn what this means and I've downloaded a paper on the subject from the University of Alberta Department of Computing Science. I might figure it out one day.

    Maybe if I started playing poker for money.....?
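    In case it helps: mbb/g is just average winnings per hand, normalised by the big blind and expressed in thousandths. A rough sketch (the chip counts are made up):

    ```python
    def mbb_per_game(total_won, big_blind, hands_played):
        """Milli-big-blinds per game: average winnings per hand,
        in thousandths of a big blind."""
        return total_won / hands_played / big_blind * 1000

    # Winning 450 chips over 1,000 hands at a 1-chip big blind:
    print(mbb_per_game(450, 1, 1000))  # 450.0 mbb/g
    ```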

  3. Kaltern

    I think the developers of Deepstack kinda missed the point of Poker.

    Poker is as much about reading your opponents as it is about mathematically calculating the outs you have on a particular hand. Did DeepStack successfully pull off a check/raise, or force a player into a fold by misrepresenting its hand? I highly doubt it.

    Poker is not a game of 100% skill - it simply cannot be, as you don't have enough information about each hand to confidently know whether you're going to win or lose. Of course, people would argue that the point of exercises such as these is to prove an AI can take every possible bit of information into consideration, calculate the missing variables, and apply them in a real-world, split-second decision situation.

    But then they should be doing that in a scenario that matters. Say... oh, I dunno, driving a car?

    1. Craig 2

      "Poker is as much about reading your opponents as it is mathematically calculating the outs you have on a particular hand."

      It seems not, as it comprehensively beat its opponents.

      1. Anonymous Coward
        Anonymous Coward

        It's over. Better fold.

      2. Eddy Ito

        The AI also doesn't have any attachment when it has to put up the blind. Many human players feel that it's their money in the pot, and that they must protect it by playing a hand they really shouldn't, which often leads only to a greater loss.

  4. Nolveys
    Holmes

    Implementation Challenges

    I wonder how they went about simulating the virtual pork chop for the AI to comb its virtual hair with.

  5. anonymous boring coward Silver badge

    I didn't gather from the article whether the players tried to adapt their tactics to specifically beat the machine. There may be flaws (a.k.a. "leaks") in the way the machine plays that haven't been discovered yet, because they are very different from typical human player flaws.

    A human player who knows really well how the machine works may be able to use that against it.

    Anyway, when poker is played for money, machines will not be legally allowed to participate.

  6. Anonymous Coward
    Anonymous Coward

    It's not an AI till it can go, "You know, I don't like [insert whatever it's been made to do]. I'm going to go to art school and become a hippie."

  7. Martin Gregorie

    Isn't the real test of an AI poker player...

    How long will it take or how many games must it play to cover the cost of its hardware?

  8. Dr Stephen Jones

    Biting The Hand That Feeds AI?

    "Although DeepStack’s opponents aren’t the best poker players, the result is still impressive"

    As other commentards have pointed out, poker is a lot less complicated than Chess, and a computer can beat a human at Chess. If you make the goalposts wide enough, you can't fail to score.

    This is more PR masquerading as "news". Where do I need to go to find someone Biting The Hand That Feeds AI?
