Bring it on, Chipzilla! Nvidia swipes back at Intel in CPU-GPU AI performance brouhaha

The machine-learning-performance beef between Intel and Nvidia has stepped up a notch, with the GPU giant calling out Chipzilla for spreading misleading benchmark results. Intel is desperate to overtake Nvidia in the deep-learning stakes by claiming its 64-bit x86 chips are more capable than Nvidia's at neural-network number- …

  1. Blockchain commentard

    "their general-purpose Intel Xeon CPU" - er, for $50,000 I'd want more than a general purpose CPU. I'd expect at least some fricking lasers !!!!

    1. Anonymous Coward

      For that price, I'm guessing the case comes filled with a few kilos of white powder to stop the components working loose during transport.

      </sniff>

  2. iron Silver badge

    So how well does Intel's chip perform after you patch it for Spectre, Meltdown, Spectre-NG, LazyFP, BCBS, ret2spec, SpectreRSB, NetSpectre, Foreshadow, SPOILER, Zombieload, Fallout, RIDL, Microarchitectural Data Sampling, etc.? Not so well, I'm guessing.

    1. Anonymous Coward

      Even after the patching it'll still be way high in the efficiency stakes. I've heard talk of 486 DX speeds.

    2. Tomislav

      If the processor only ever runs your code, you do not need any of those patches...

    3. LeoP

      Fair and square

      Far be it from me to defend Intel. Far as in at least a few galaxies.

      But in the name of fairness one has to make clear that Nvidia's GPUs have never undergone such scrutiny - and they would have fared rather poorly if they had: just the NVENC part (which makes up a tiny proportion of the GPU) leaks the last image of every encoded stream to any Dick, Tom and Harry who creates a new context.

  3. LordHighFixer

    Target audience?

    Who buys this stuff? I am happy to be able to run AI stuff on a CPU; at my price point, anything over about US$600 for a video card (the only place you will find GPUs in my house) is not happening.

    The fact that the lag makes it look like it is thinking is a bonus.

    1. eldakka

      Re: Target audience?

      "Who buys this stuff" is usually corporations for whom $600 would be a bottle of wine at an executive luncheon.

      The Googles, Amazons, Microsofts and NSAs, defence departments, university supercomputer facilities, startups with large VC backers, and so on.

      If you aren't one of them, and you want to play around with this stuff, you usually rent time from the cloud offerings of those big players.

  4. Gavin Jamie
    Headmaster

    Cancel your units!

    "The two-socket Xeon Platinum 9282 pair crunched through 10 images per second per Watt, while the V100 came in at 22 images per second per Watt, and the T4 is even more efficient at 71 images per second per Watt."

    As watts are joules per second, it would be much simpler to say that the Intel pair does 10 images per joule, the V100 does 22 images per joule and the T4 71 images per joule.

    Or, even better: each image takes 100 mJ on Intel, with the Nvidia V100 using 45 mJ and the T4 just 14 mJ (the arithmetic is sketched below).
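
    A quick sanity check of that arithmetic, as a minimal Python sketch. The only inputs are the images-per-second-per-watt figures quoted from the article; the hardware labels are informal, and the conversion simply uses 1 W = 1 J/s.

      # Convert the quoted images-per-second-per-watt figures into
      # images per joule and millijoules per image.
      # Since 1 W = 1 J/s, images/s/W is numerically equal to images/J.
      figures = {
          "Xeon Platinum 9282 pair": 10,  # images per second per watt, as quoted
          "Nvidia V100": 22,
          "Nvidia T4": 71,
      }

      for chip, images_per_joule in figures.items():
          mj_per_image = 1000.0 / images_per_joule  # millijoules per image
          print(f"{chip}: {images_per_joule} images/J, {mj_per_image:.0f} mJ/image")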

  5. hammarbtyp

    Feels like the most appropriate test algorithm would be one which did deep analysis of the Intel and Nvidia claims and came out with a definitive answer. May take a while, though.

    1. Anonymous Coward

      I believe that is commonly referred to as a BS detector. Most BS detectors are currently occupied elsewhere - might have to wait a while.

      1. queynte

        "The two-socket Xeon Platinum 9282 pair crunched through 10 images per second per Watt, while the V100 came in at 22 images per second per Watt, and the T4 is even more efficient at 71 images per second per Watt."

        And then factor in unit costs. The situation seems pretty clear to me: Intel are further behind the deep-learning curve than they let on. I have to agree with Nvidia that Intel are shooting themselves in the foot with their publicity here. People investing in that kind of architecture will fact-check, so I guess Intel was trying to play a wider game by slyly 'influencing' non-deep-learning techs into putting faith in Intel systems. I respect Nvidia for taking advantage on the tech angle (bits / BERT / watts) in response to Intel's propaganda preying on a lack of due diligence: a seriously bad move that no doubt undermines their [Intel's] integrity with many involved, and smells of desperation.

    2. Ghostman
      Boffin

      And would they come up with the correct answer of 42?
