Intel Larrabee letdown leaves HPC to Nvidia's Fermi

Intel has never been particularly precise about what its "Larrabee" graphics chips were, so it is difficult to be sure how disappointed we should all be. And considering the company's track record outside of the x86 and x64 chip racket - its failed networking business and Itanium are but two examples of its woes - it's hard to …

COMMENTS

This topic is closed for new posts.
  1. toughluck
    WTF?

    Oh sir, you kill me!

    > The delayed entry of Intel's Larrabee and the dead-ending of IBM's Cell

    > (at least on blade servers) gives AMD's Firestream GPUs a better chance

    > against Nvidia's technically impressive Fermi family of Tesla 20 GPUs.

    Technically, they're not impressive: they don't exist (fake cards don't count, and 7 chips do not really make volume production).

    As they don't exist, you can't really bench them.

    http://sisoftware.co.uk/index.html?dir=qa&location=cpu_vs_gpu_proc&langx=en&a=

    The 5870 was already benched by SiSoft to be 8.8 times faster than the GTX 260 in double-precision FP. Even assuming Fermi is 8 times faster than the GTX 260, it is barely going to be on par with the 5870 (more realistically, we can assume it will be 4-5 times faster than the previous generation).
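
    (As a minimal sketch of that arithmetic, taking the 8.8x SiSoft figure and the 8x Fermi speed-up purely as the numbers assumed in this comment, not as measured Fermi data:)

```python
# Relative double-precision throughput, normalised to the GTX 260. Both
# ratios are this comment's assumptions, not measured Fermi numbers.
gtx260_dp = 1.0
hd5870_dp = 8.8 * gtx260_dp    # SiSoft figure quoted above
fermi_dp  = 8.0 * gtx260_dp    # assumed Fermi speed-up over the GTX 260

print(f"HD 5870 vs Fermi in DP: {hd5870_dp / fermi_dp:.2f}x")
# -> 1.10x: on these assumptions, Fermi only just catches the HD 5870.
```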

    Given that Fermi is going to be a huge part, it is going to have power issues as well, likely drawing more than the current Tesla, which already draws 10 times more power than ATI's, while the 5870 is rather frugal. Needless to say, this isn't going to earn them any top spots in the Green 500.

    You need error correction? Run two 5870 cards beside each other (or one 5970) and compare the results. It's still going to be cheaper than Fermi.

    > The Fermi chips will be available as graphics cards in the first quarter

    > of next year and will be ready as co-processors and complete server

    > appliances from Nvidia in the second quarter.

    Oh, really? With the slips they have suffered over the last year, they'll be glad if they are able to put *anything* on the market before they run out of assets. Nvidia has nothing to compete with ATI in the GPU market; Fermi is a huge die and is going to be too expensive to interest gamers when they can get two Radeons for the price of one GeForce (unless Nvidia decides to shoot itself in the foot and sell below its margins).

    > And they will likely get dominant market share, too, particularly among

    > supercomputer customers who want to have error correction on the GPUs

    > - a feature that AMD's Firestream GPUs currently lack.

    Assuming that they can actually put anything on the market. While adding error correction is not a simple matter, I think AMD can do it within a reasonable time frame, and with Nvidia lagging behind, it would be foolish to think AMD does not have anything on its roadmap.

    1. Ryan 7
      FAIL

      O_o

      "You need error correction? Run two 5870 cards beside each other (or one 5970) and compare the results."

      That's not error correction. That's redundancy & *unreliable* error checking. ECC is extra check bits built into the memory module, which has the effect of making it nine bits to the byte.
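
      (A toy sketch of that distinction, assuming nothing about Nvidia's or AMD's actual implementations: a plain parity bit, the classic ninth bit per byte, only detects a single flipped bit, whereas a Hamming-style ECC code also corrects it. Real ECC DIMMs use a wider SECDED code over 64-bit words; the Hamming(7,4) toy below is just a minimal stand-in.)

```python
def parity_bit(byte: int) -> int:
    """Even parity over 8 data bits: detects, but cannot locate, one flip."""
    return bin(byte & 0xFF).count("1") & 1

def hamming74_encode(nibble: int) -> int:
    """Pack 4 data bits into a 7-bit Hamming(7,4) codeword (positions 1..7)."""
    d = [(nibble >> i) & 1 for i in range(4)]        # d0..d3
    p1 = d[0] ^ d[1] ^ d[3]                          # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                          # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]                          # covers positions 4,5,6,7
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]      # codeword positions 1..7
    return sum(b << i for i, b in enumerate(bits))

def hamming74_decode(codeword: int) -> int:
    """Correct a single-bit error, then return the 4 data bits."""
    bits = [(codeword >> i) & 1 for i in range(7)]
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)            # 0 means no error
    if syndrome:
        bits[syndrome - 1] ^= 1                      # flip the faulty bit
    return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)

# Flip one bit of an encoded nibble and watch the code recover the data.
word = hamming74_encode(0b1011)
corrupted = word ^ (1 << 4)                          # single-bit fault
assert parity_bit(0b1011) == parity_bit(0b1011 ^ 0b0100) ^ 1  # parity only notices
assert hamming74_decode(corrupted) == 0b1011                   # ECC recovers
```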

      With the sheer amount of data being pushed through HPC solutions, the cost of the hardware pales in comparison to the value of world-class accuracy. Not a Radeon fanboy's homebrew "let's just have two then" hot air.

      1. Hungry Sean
        Boffin

        well said. . .

        It also needs saying that "running two side by side" requires all sorts of overhead to support checkpointing, comparison, and roll-back, and is generally non-trivial. Even detecting an error is exceedingly expensive, because you need to compare all of the memory in your program at each point. And what happens if memory corruption prevents roll-back? The point being that there are a lot of very hard problems in achieving reliable computation through redundancy.
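
        (A rough, hypothetical sketch of what that "run two side by side" scheme entails - run two copies in lockstep, checkpoint before each step, compare, and roll back on a mismatch. Every name here is made up for illustration, and the real cost, checkpointing and comparing gigabytes of device memory every step, is only hinted at:)

```python
import copy

def run_step(state: dict, step: int) -> dict:
    """Stand-in for one step of the real computation on one card."""
    new = dict(state)
    new["x"] = new["x"] * 2 + step
    return new

def redundant_run(initial: dict, steps: int, max_retries: int = 3) -> dict:
    state_a, state_b = copy.deepcopy(initial), copy.deepcopy(initial)
    for step in range(steps):
        checkpoint = copy.deepcopy(state_a)          # saved for roll-back
        for _attempt in range(max_retries):
            a, b = run_step(state_a, step), run_step(state_b, step)
            if a == b:                               # compare *all* live state
                state_a, state_b = a, b
                break
            # A mismatch says something is wrong but not which copy is right,
            # so roll both back to the checkpoint and retry the whole step.
            state_a = copy.deepcopy(checkpoint)
            state_b = copy.deepcopy(checkpoint)
        else:
            raise RuntimeError(f"step {step}: copies never agreed")
    return state_a

print(redundant_run({"x": 1}, steps=5))              # {'x': 58}
```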

        There was once a company called Tandem that was able to charge lots of cash precisely because they were able to do this right. The finance industry appreciated their efforts. I don't know that AMD has that same level of experience, or frankly, the motivation to roll it out for niche science applications that may not have the same level of financial backing.

  2. The Unexpected Bill

    Failed networking business?

    I'm curious to know what "failed networking business" that the author of this article is talking about. Have they noticed that Intel networking adapters--both integrated and otherwise--are very common and popular?

    Intel does do one thing right with their integrated graphics processors. While they are not the highest performance solutions out there, they also do not consume the power needed to run a small country. Nor do they have the tendency to expire from heat like both ATI and nVidia products do! I've never lost an Intel integrated graphics adapter and wish I could say the same thing for ATI and nVidia...

    1. Anonymous Coward
      FAIL

      Re: Failed networking business?

      Oh, I used to work for a (Danish) networking company that was acquired by Intel. They bought three of those back in the good days around year 2K - two made switches/routers/hubs/NICs ("devices"), one made ICs ("components") especially for telecom networks. Most were closed in 2001, the last bits a few years ago - total loss for Intel probably in the billion $ range (based on acquisition costs).

      To me, buying into a business and then closing everything down a few years later can justifiably be regarded as a fail.

      BTW, Cisco has done the same a few times here in DK too. :-)

  3. Anonymous Coward
    Flame

    Re: Failed networking business

    El Reg is referring to the IXP line. While technically excellent, it has basically been repeatedly mismanaged, along with XScale.

    To cut a long story short, at the beginning of this decade Intel had the "excellent" idea of moving development to one particular country which is supposed to be cheap and have brains aplenty. What they did not realise is that they also got a bundle of corrupt middle management that at the very least fiddles expenses, and more often runs various versions of the Satyam gambit to siphon money into their own pockets. In order to be able to do that, said management consistently reported positive results even when things were going totally apeshit. We all remember those days: the ever-increasing thermal envelopes, the P4 barely able to push 30% of P3 speeds on filesystem and networking performance, and the Athlon beating it into a pulp on all benchmarks.

    To Intel's credit, they actually noticed where things were going and put a stop to it. They were saved by a skunkworks project from Intel Israel whose descendants are now known as Core (not sure if Atom is from there too; I would not be surprised if it is). However, after closing their development in that supposedly cheap country and firing and putting the local execs on trial for fraud, they basically did not have the resources to develop a portion of their portfolio. As a result, IXP, XScale and quite a few other projects were either folded or spun off. By that time those had already seen a few years of Satyam-style development and were clearly beyond salvation. So is the Itanic: it also had its share of Satyam-style development.

  4. Anonymous Coward
    Anonymous Coward

    The problem with Intel

    Is that if they make something that doesn't quite live up to the market's expectations, instead of improving it they try to force the market to change and accept sub-optimal items. So even if the Intel graphics stuff isn't that good, they won't care - they'll just sell it anyway and assume/force the market to use it. Apparently the meetings where they decide to do this are really quite interesting, but not from a technical point of view!

  5. MinionZero
    Boffin

    @it is difficult to be sure how disappointed we should all be...

    I find it very interesting that (the last time I heard the figures) NVidia's share price went up by 13% on the news of the canning of Larrabee. (I think AMD was also up about 8%.) It makes me wonder how much Larrabee was just another experimental Intel chip that was being used as a PR vaporware marketing tool against Intel's competitors. (Intel has other experimental chips, like that 80-core chip a few years ago, but Larrabee was pitched as Intel's answer to NVidia and ATI/AMD products.) Larrabee was making some customers hold back and wait, as they were thinking about using Larrabee instead of its competitors' products, and that was hurting Intel's competitors.

    Also, Intel's timing is very interesting: they just happened to mention their new experimental 48-core chip about two days before they canned Larrabee. The problem is that the 48-core vaporware chip is at least a few years from market, and NVidia and AMD will most likely have released at least two more generations of their GPUs by then.

    Intel is falling way behind in both the graphics and high-performance computing markets, so the canning of Larrabee is big news.

    Intel's whole marketing plan is to keep pitching x86 compatibility as centrally important, and so they are trapped on this marketing course. They cannot admit that x86 chips have become very bloated, with decades of legacy design ideas all crammed into the same chip. That bloat takes up a lot of extra die space, preventing many cores from being placed on the same chip, and worse still, it also uses a lot of extra power just to drive so much legacy circuitry. So Intel's dependency on pushing x86 is giving competitors a way to fight back, and it's not just Intel's ability to compete in high-performance computing that is suffering: they are also losing out in the low-power part of the market, like the ever-growing mobile computing segment. For example, ARM CPUs use far less power than Intel chips, because ARM CPUs have always been clean and efficient designs. Graphics chips also need very efficient core designs so that many cores can be crammed into the same die. So all of Intel's competitors are pushing very efficient CPU designs, while Intel is forced to keep dragging along a horse that is not yet dead, but is certainly old, arthritic, three-legged, asthmatic, sad and short-sighted. It's time Intel retired their CPU; it has done its job, so let it rest while newer horses race ahead.

    We need Intel and Microsoft to move to a new CPU. Sure, at the first step of the move it won't be the most efficient software design in history, but then Microsoft has never been even close to efficient in its software design. Sooner or later Intel and Microsoft will have to move; otherwise their competitors will keep looking ever better.

    Both Intel and Microsoft have the PC market mostly to themselves, but they are trapped in that market and trapped with x86. Graphics, high-performance computing and mobile computing are all growing markets that Intel and Microsoft are struggling to get into. Their old, bloated designs are a ball and chain holding them back.

    So the canning (and therefore failure) of Larrabee is very big news.

    Plus, this situation is only going to get worse over time. That's because back when the race was for one high-powered CPU, Intel mostly won. But now the game is moving towards ever more cores, where each core has to be very efficient to work within the power budget needed to get many cores onto the same chip. That efficient CPU design also benefits the very low-powered mobile market. Intel is going to continue to suffer from its bloated design until it moves away from that design.

    So Intel is heading into trouble, and this time the growing CPU markets are not going to wait a few more years for Intel's next vaporware 48-core chip. Intel are in trouble.

  6. Anonymous Coward
    Anonymous Coward

    Only CPUs?

    I don't quite agree that Intel can't do anything but x86 (and x64) CPUs.

    They have also pushed some interesting buses. You may have heard of PCI (with PCI-X and PCI Express) and USB? (They were also involved in Bluetooth, but not on quite as large a scale.) And as mentioned by The Unexpected Bill, they have quite a lot of Ethernet controllers to their name.

    1. frankg778

      Re: Only CPUs?

      It's not that Intel does not know how to design well. It's that their competitive advantage over other design houses like Nvidia comes from being the titular leader of the x86 architecture. How many years have we heard about the WinTel monopoly? What does this term mean? It's collective market inertia: everyone's afraid to try some new architecture because they could make a big investment and choose wrong. But this marketing commitment on Intel's part comes at a cost - they really need to abandon x86 complexity to make massively multi-core chips feasible in a world where power consumption matters!

  7. Matt Bryant Silver badge
    Happy

    RE: well said. . . & TPM's reflex actions

    As regards Larrabee, I'm betting it was power draw that killed it. Intel seem to have started many projects in the last ten years with power as the lowest consideration, the idea seemingly being to get the chip working and then ramp down the power draw (usually by die-shrinks). Even low-power designs like Atom had the problem of a relatively power-hungry mobo at launch. I'm guessing they got the job done technically but then found they had left themselves an insurmountable problem when it came to ramping down the power draw, to the point where a complete redesign was needed. Redesigns are expensive, and just adding in error-checking would probably cost ATi/AMD a bunch they probably don't have, so a software solution and doubling-up on the hardware sounds a likely and workable option which would still get them to market ahead of nVidia.

    "....There was once a company called Tandem that was able to charge lots of cash precisely because they were able to do this right...." Tandem was set up by some refugee hp engineers that were (as the story was told to me) not happy with the lack of priority given to their ideas inside hp. Tandem was bought by Compaq, which was later bought by hp, so in the end hp got the tech anyway. They now call it hp NonStop and it is still being sold to financial houses, with the latest version being on that Itanium stuff that TPM seems to have some bad-mouthing reflex over.

    Just for TPM, as it looks like his Big Blue FUD is out of date - Itanium has had full compatibility with x86 since version one, by either hardware (an embedded Pentium chip in early versions!) or software emulation, and since hp-ux 11i v3 there is one binary for both PA-RISC and Itanium. Intel's design didn't get messed up by hp coming along and inserting PA-RISC code; hp were the original designers of Itanium, intended to replace PA-RISC (hence the PA-RISC code - duh!), and then went to Intel as a fabrication and development partner. Come on, TPM, even Wikipedia has more up-to-date info on Itanium than your article.

    And you did miss one interesting wild-card idea - Oracle. Larry now has the Niagara technology, which - whilst pants for real enterprise UNIX work - is still a viable, massively parallel, multi-core design to develop for HPC. If Larry could get close to nVidia (and they have no overlaps as far as I can see), we might see a Niagara hybrid with some space on the die given over to a programmable GPU. Stranger things have happened!

  8. Joe User
    Flame

    Never again

    After the i740 fiasco in the late 1990s, I swore that I would NEVER again buy a discrete graphics card from Intel. The only reason I have any Intel video products at all is that they're built into the chipset. Given a choice, I'd much rather have an ATI or nVidia GPU on board, even if it's a low-end part. They still perform much better than Intel's GMA parts.

This topic is closed for new posts.
