Intel wants to get into heavy petascaling

Intel may dominate the list of top supercomputers, but the most intriguing work in the high-performance computing field takes place outside the Xeon kingdom. Pat Gelsinger, Intel's server chip chief, plans to fix this problem. Look at November's Top 500 List, and you'll find 320 64-bit Xeon-based supercomputers. …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Boffin

    Lest we forget...

    Intel has dabbled successfully with HPC in the distant past. Aided by generous DARPA funding, Intel created the Paragon series of supercomputers, culminating in the first several teraflop systems ever constructed. The ASCI Option Red supercomputer, delivered to Sandia National Laboratories circa 1996, was (if memory serves) a 1.8 Tflop system. It was one of the first, if not the first, to achieve this level of peak performance.

    That being said... Intel/SSD (Supercomputer Systems Division) was decommissioned shortly after ASCI Red's delivery, because Intel's upper management saw no future/profit in the technology. This perception was not due to short-sightedness or disinterest on the part of upper management. Our SSD corporate culture was so unique, and so far removed from the culture prevalent throughout the rest of Intel, that we were simply unable to convey our vision for the future. There weren't enough common points of reference. We didn't speak the same dialect.

    My point? I will be very impressed if Intel is able to "run with the big dogs" before spending at least a decade re-acquiring the appropriate personnel and re-evolving an appropriate micro-culture. It is good that Intel has learned tenacity from Itanium. They'll be needing to apply some of these "learnings" to HPC.

    - The Garret

  2. Anonymous John

    I read this as petscaling at first,

    and wondered why El Reg was running a story about animal dentistry

  3. Brian Miller

    Liar, liar, pants on fire!

    "Gelsiinger said that the work needed to write software for new architectures is often measured 'in many years - sometimes decades.'"

    Gelsinger, just because you have an entrenched feudal system of morons doesn't mean that the rest of the world is going to hang around for you to figure out your "strategy." Sorry, but IBM is a little bit ahead of you. Take a look at #441 on the November list, the SR11000-K2 by Hitachi. This uses 50 PowerPC chips. Only 50. How many of your Xeon chips does it take to get up to speed on the current list? Over 1,000. Bit of a difference, eh?

    Then take a look at Nvidia. Gee whiz, they have put their GPUs into the Tesla system and put out the CUDA SDK. People are using it right now. Nobody is waiting for Intel to save the day. Years or decades for software engineers to catch up? No, I don't think so, because the hardware and the tools are already out there.
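
    For what it's worth, here is roughly the scale of code involved: a toy CUDA vector-add along the lines of the SDK's introductory samples. The names are made up for illustration (this is a sketch, not Nvidia's code), but it builds with plain nvcc against the standard CUDA runtime API:

        // Hypothetical minimal CUDA example: add two vectors on the GPU.
        // Build (assuming the CUDA toolkit is installed): nvcc vadd.cu -o vadd
        #include <cstdio>
        #include <cstdlib>
        #include <cuda_runtime.h>

        __global__ void vadd(const float *a, const float *b, float *c, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per element
            if (i < n) c[i] = a[i] + b[i];
        }

        int main() {
            const int n = 1 << 20;
            const size_t bytes = n * sizeof(float);

            // Host buffers
            float *ha = (float *)malloc(bytes);
            float *hb = (float *)malloc(bytes);
            float *hc = (float *)malloc(bytes);
            for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

            // Device buffers and host-to-device copies
            float *da, *db, *dc;
            cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
            cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
            cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

            // Launch: 256 threads per block, enough blocks to cover n elements
            vadd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
            cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

            printf("c[0] = %f\n", hc[0]);   // expect 3.0

            cudaFree(da); cudaFree(db); cudaFree(dc);
            free(ha); free(hb); free(hc);
            return 0;
        }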

    Intel's problem is that Intel isn't out there.

    Too bad, Gelsinger.

  4. Acme Fixer
    IT Angle

    Reality

    Leave it to the big dogs of supercomputing to start a pi$$ing contest. HPC is so far removed from reality it's like seeing a top fuel dragster on a city street.

  5. Les Matthew
    Thumb Up

    @Anonymous John

    I think Ashlee has a bad case of that quintessentially British affliction.

    "The bad pun"

    http://www.theregister.co.uk/2008/03/21/clearspeed_three/

    I rest my case. ;)

  6. Ashlee Vance (Written by Reg staff)

    Re: @Anonymous John

    Bad puns? I'm not familiar with these concepts.
