Intel says 48 core graphics is just over the horizon

Intel is releasing the Larrabee graphics chip for high-end PC gaming in late 2009 or 2010, but the company is already talking up the chip’s capabilities in a new paper. Depending on the model, Larrabee will feature between eight and 48 cores, each of which will have super-fast inter-communication and increase the chip's …

COMMENTS

This topic is closed for new posts.
  1. Adam Foxton
    Stop

    Vast numbers of cores

    Isn't that what the oh-so-successful Voodoo 5 did a few years back?

    Wasn't it slated for it?

    Well, let's hope for a 48-way GeForce-SLI system some time soon...

  2. Anonymous Coward
    Stop

    In other news..

    Nvidia and AMD state that they will have a 500-core chip in 2020, so nah nah nah naaaa nah....

    So in one or two years time Intel will have a new graphics card out...riigggghhhttt..

    Straw + Clutching.

    Is this because they currently have no real response to the AMD 780G?

  3. Anonymous Coward
    Anonymous Coward

    Don't mean to put you down...

    But Larrabee and the 780G have about as much in common as cheese and carrots....

    The 780G goes up against Intel's G3x and G4x chipsets, both of which do lag behind a bit - but only in the sense that one basic lemon is a bit more lemony than another basic lemon... no integrated graphics provide the power to game properly... which is what discrete cards do... Larrabee is Intel getting into a new market - that of discrete graphics - and would thus probably compete with the successors to the 4850/4870 and Nvidia's 2x0 series.

    But then again, who am I to deprive you of meaningless Intel bashing, I mean... shame on Intel for producing good chips (Core 2) which are better than the competition, and double shame on them for attempting to do it in a new sector of the market...

  4. James O'Brien
    Coat

    Intel makes a gaming GPU??

    HAHAHAHAHAHAHAHAHA

    /mine's the one with the GTX280 in the pocket

  5. Disco-Legend-Zeke
    Paris Hilton

    Can't make a Movie without it.

    All the expensive optical tricks of days past are now being done as bits and bytes, with rendering times not long ago of several seconds, even minutes, per frame.

    Now we are approaching real-time HD and SHD rendering.

    Add technologies like www.RED.com camera bodies and low-price editing software, and the tools once available only to Hollywood are in the hands of everyman.

    Paris, cause she knows what just one hit video can do for your career.

  6. Webster Phreaky
    Jobs Horns

    Let's REMEMBER Intel's penchant for LYING about clock speeds and its "Quad Core"

    Hey, let's face it, Intel is the second biggest specs liar in the tech industry after Apple. Two decades of exaggerating and bald-faced lying about the actual clock speeds of Intel processors; Intel's claims about the actual speeds of the Core Duo and Core 2 Duo have been proved bullshit by independent testing labs; and then there's the grandiose lie of Intel core maths for the current "Quad Core", which is NOT a true quad but two stacked dual cores - Intel maths as in 2 + 2 = quad, huh?

    Anyone should take anything Intel claims, or its prophetic visions, as fiction, like the kooks on UFO radio.

    And like Apple, Intel loves to claim tech breakthroughs that others pioneered before. Both Intel and Apple steal other companies' ideas and are actually iNOvators.

  7. Anonymous Coward
    Thumb Up

    Larrabbee iss coooll

    Don't care what you haters think. This is coolcore. It's like the Cell on steroids, with a much nicer basic design. 16-way SIMD is going to be weird but having a texture unit is genius. Might be some rasterization HW in there too - not sure yet, have to wait for the paper.

  8. Anonymous Coward
    Thumb Up

    Some competition is good

    Yet the big players, Nvidia and ATI/AMD, already offer a bewildering array of cards for the gamer. These two companies are constantly trying to outdo each other, with varying results.

    The computer industry is a fast moving place, graphics cards especially. It's not uncommon for a top of the range card to have a shelf life of a year to 18 months before it gets superseded.

    For Intel to enter this war and make a profit, they should be aiming for compatibility with existing games (quite a tall order, as neither Nvidia nor AMD cards are compatible with EVERY game). And they should be aiming for affordability. For them to make a dent, they should be offering a price/performance ratio considerably better than the competition.

    After all, one of the single most expensive parts of a gaming rig is the graphics card(s). Nobody is going to take a chance on Larrabee unless it offers much more bang for the buck and works with existing games. In addition, no developer will support it unless Intel offers excellent support for them.

    I think Intel could make a go of it, but they would have to be willing to make a loss in order to get their cards in PCs and a further one in support. Who knows how Nvidia and AMD will react to such a threat?

    Interesting times indeed.

  9. cropchops
    Boffin

    Spam

    Can we have a 'report spam' option for comments like the one made @ 16:54... what's with these crappy website spam comments?

  10. Robert Heffernan
    Flame

    No Need!

    While a nice new ultra-programmable multi-core GPU would be welcome, has anyone else noticed that the number of PC games coming out is dropping rapidly and everything is now coming out for consoles only?

    Flames because that's what I wanna do to consoles!

  11. Tom

    Forget that it's a graphics card for a moment

    This is a set of 48 mini (in terms of core design, not instruction set) x86 cores running at very high speed. When Intel first came up with this idea, it was supposed to be a desktop chip, but then they decided that nothing would really use that level of parallelism in a regular application - except maybe graphics rendering.

    So what you have left is a 48-core general purpose chip that can run OpenGL and DirectX shaders and pipelines as fast as (or at least in the same ballpark as) the single-purpose silicon from ATI/Nvidia. Intel already have a massively dominant, almost 50%, share of the graphics market solely on integrated graphics (which I think are great for non-gamers, from the GMA 950 up), so they wouldn't enter discrete graphics unless they had something good.

    The idea I've heard mooted is that this will become Intel's integrated solution, with 8/16 cores integrated into the motherboard, and then 40/32 additional cores plugged in via PCI-E.

    The potential for this is great, because it is reasonably simple to program for your own purposes - much more so than CUDA could ever be, as each core is in effect just another x86 core - so any compute-intensive task that can be parallelised could be optimised for Larrabee. Could be a very interesting thing for enterprise servers to have - a couple of extra cores for your database?
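
    To make that last point concrete in the most generic way possible - this is not Larrabee code, just a hypothetical sketch in which ordinary standard C++ threads stand in for "lots of small x86 cores" - a compute-intensive loop can be carved up across however many cores the machine reports:

        // Hypothetical sketch only: an embarrassingly parallel task split
        // across whatever number of x86-style cores the machine reports.
        #include <algorithm>
        #include <cstddef>
        #include <cstdio>
        #include <thread>
        #include <vector>

        int main() {
            const std::size_t n = 1u << 24;      // 16M elements of stand-in work
            std::vector<float> data(n, 1.0f);
            const unsigned cores = std::max(1u, std::thread::hardware_concurrency());

            std::vector<double> partial(cores, 0.0);
            std::vector<std::thread> workers;
            for (unsigned c = 0; c < cores; ++c) {
                workers.emplace_back([&, c] {
                    // Each thread owns an independent slice: no shared writes,
                    // so the job scales (in principle) with the core count.
                    const std::size_t begin = n * c / cores;
                    const std::size_t end   = n * (c + 1) / cores;
                    double sum = 0.0;
                    for (std::size_t i = begin; i < end; ++i)
                        sum += double(data[i]) * double(data[i]);
                    partial[c] = sum;
                });
            }
            for (std::thread& w : workers) w.join();

            double total = 0.0;
            for (double p : partial) total += p;
            std::printf("%u cores, result %.1f\n", cores, total);
        }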

  12. Anonymous Coward
    Paris Hilton

    Cor?

    Who gives a hoot how many cores? What is the user need? Does it meet it or exceed it? Simple.

    Boring geeks or what?

    Paris, because she doesn't care how many cores are in her, as long as she has a great experience. Cor blimey guv!

  13. E

    Naysayers

    Judge the product on its merits, not Intel's past mistakes.

    Even if it does not crack the high end GPU market I expect it will be successful in the HPC market. It looks pretty neat.

  14. Emo
    Boffin

    @ Some competition is good

    >I think Intel could make a go of it, but they would have to be willing to make a loss in order to get their cards in PCs

    Here's me thinking it was an x86 CPU with graphics cores built in...

    >It's not uncommon for a top of the range card to have a shelf life of a year to 18 months before it gets superseded.

    No mention of nVidia's 6-month graphics chip cycle then? 18 months before being superseded? 6 months, more like.

    If AMD can slip the 800 cores from the 4870 onto an Athlon/Phenom CPU then things could get interesting :)

  15. Henry Cobb
    Thumb Down

    x86 is a horrible model for a graphics chip

    They should have decided from the start to break from the x86 instruction set for this kind of chip.

    What's needed here is a chip that is designed to flip between different types of parallel processing with minimal delay. Just a bunch of CPUs, VLIW, SIMD, pipeline processing and the ability to reserve subsets of the chip for different models at the same time.

    I predict that it will flop on the desktop and wind up in virtualized hosted linux rackmounts.

  16. Louis Savain

    48 Cores vs. 500 cores in AMD's FireStream 9250 and 240 cores in Nvidia's Tesla 10P

    Way to go, Intel. Intel needs to go to computer science rehab.

    Larrabee: Intel’s Hideous Heterogeneous Beast:

    http://rebelscience.blogspot.com/2008/08/larrabee-intels-hideous-heterogeneous.html

  17. Danny
    Paris Hilton

    hmm.

    Like most high-end graphics parts and large-scale multi-processor hardware, the problem will be bandwidth at the bottleneck - RAM. How much RAM will these INDEPENDENT cores need? They aren't going to try and mate 48 cores through one data bus, are they? And ask anyone who builds supercomputers how much it costs to get scaled data paths.
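
    As a purely illustrative back-of-envelope (the per-core figure is a made-up round number, not anything Intel has published): if each of 48 cores wanted to stream even 2GB/s of vertex and texture data, that's 48 x 2 = 96GB/s of aggregate demand - far more than a single shared desktop memory bus of the era delivers, which is exactly why on-die caches, wide or partitioned memory interfaces and smart data sharing between cores matter at least as much as the core count itself.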

    Paris, she can scale my data path (with up to date anti virus and firewall of course)

  18. Matt Bryant Silver badge
    Boffin

    RE: Danny

    Agree completely - why do we need so many speedy little cores when the bottlenecks are RAM and disk? I'd be much more interested in developments that included more on-die cache and wider bandwidth out to the motherboard. Dual cores are quite fine for gaming and quads and eights already seem excessive, so why a many-core part which will need a total change in the way games are coded?

    We've been here before, when Intel and AMD had the old processor wars and ramped up CPU clockspeed way beyond what the rest of the PC could supply, until AMD twigged that the real problem was the buses, did a better design, and stole a large chunk of Intel's market. This looks like more of the same - both big CPU vendors trying to out-bake each other with monster chips with embedded graphics which the average Joe just isn't going to need, while the one that sorts out peripherals like RAM bandwidth (embedded eight-controller memory bus, please, Mr AMD?) will generate the real performance advantages.

  19. Dennis
    Coat

    Corrs

    And here was I thinking there were only ever 4 Corrs

    Mine's the one with the violin

  20. Tom

    @Danny, @Matt

    Why do you need so many speedy little cores?

    Well, because that is how 3D graphics are done. Take, for example, the recent ATI Radeon 4870: it has 800 stream processors and 40 texture units. Each of those stream processors is a custom bit of silicon acting as an independent processing unit - aka a cut-down core.

    This is different, as it uses general-purpose cores to do the job of the vertex/pixel shaders. This is NOT a CPU; it sits in addition to your CPU, which will remain dual/quad core for the time being, I'd imagine (there's very little benefit in 8 cores for most users).
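
    For a sense of what "general purpose cores doing the shaders' job" could look like, here is a deliberately naive, hypothetical sketch - nothing Larrabee-specific, just a per-pixel function evaluated over a framebuffer by ordinary C++ threads standing in for the many small cores:

        // Hypothetical sketch: a "pixel shader" written as a plain function
        // and run across the framebuffer on general-purpose cores, rather
        // than on fixed-function shader hardware.
        #include <algorithm>
        #include <cstddef>
        #include <thread>
        #include <vector>

        struct Pixel { float r, g, b; };

        // The "shader": any per-pixel function you care to write.
        static Pixel shade(int x, int y, int w, int h) {
            return Pixel{ float(x) / float(w), float(y) / float(h), 0.5f };  // simple gradient
        }

        int main() {
            const int w = 1920, h = 1080;
            std::vector<Pixel> framebuffer(std::size_t(w) * std::size_t(h));
            const unsigned cores = std::max(1u, std::thread::hardware_concurrency());

            std::vector<std::thread> workers;
            for (unsigned c = 0; c < cores; ++c) {
                workers.emplace_back([&, c] {
                    // Interleave scanlines across cores: each row is independent work.
                    for (int y = int(c); y < h; y += int(cores))
                        for (int x = 0; x < w; ++x)
                            framebuffer[std::size_t(y) * w + x] = shade(x, y, w, h);
                });
            }
            for (std::thread& t : workers) t.join();
        }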

  21. Joerg

    @Tom

    You are correct, except that the number of CPU cores is going to increase pretty quickly as well. By Q2 2009 Intel should have released the 8-core Nehalem version, at least the Xeon server part and maybe a desktop Extreme as well. By Q4 2010 the first CPU derived from the Terascale project (a twin project of the Larrabee one) should be released, featuring up to 32 cores. And then that number of cores is going to double, and then double once again, up to 128+ cores in the 2012-2014 timeframe or sooner.

  22. P. Lee
    Paris Hilton

    Is it just me?

    or is massively parallel computing a niche arena, not to mention difficult and expensive to do?

    Isn't the point of specialised silicon that a chip with smaller ambitions is cheaper to build, faster (at the high-end) and more efficient than a general purpose chip?

    Isn't that why we don't put quad-core Intels in mobile phones? Why would you take a general purpose chip, complicate it with 48 cores and then use it for a single purpose?

    I'm gonna need a new PSU (or 20) to get asteroids working on my new 48-core machine...

  23. Matt Bryant Silver badge
    Boffin

    RE: @Danny, @Matt

    All well and dandy, but one of the reasons we have so many high-clock, multi-pipeline graphics cards nowadays is the way that graphics is still done. Imagine a room with lots of boxes in it that we are asking our graphics engine to render a frame of. In the room, some of the objects are hiding the others from the player's viewpoint. With the current technology, EVERYTHING in the room gets computed and processed, and THEN it is calculated what can actually be seen by the viewer and the rest is discarded. I think it's called z-buffer rendering or something similar.

    This amounts to calculating everything in the scene as 3D objects, mapping in the triangle vertices and storing the scene in fast memory, then - AFTER all that very intensive processing in all those parallel pipelines - deciding which objects can actually be seen from the player's viewpoint, outputting that and throwing away all the rest of the computed data. Repeat for each frame. Of course, as we ask for more and more colours (32-bit now, over 16 million of them!) and higher resolutions, all at high frame-rates, all that processing becomes hideously more intense, yet we may still end up displaying as little as 10% of what has been computed! It's like taking a maths problem of 1+1=? and, instead of doing the smart thing and one calculation, calculating all the possible answers first and deciding which one is right afterwards. But it is simpler for the graphics vendors to implement than inventing cleverer solutions, and it allows nVidia and ATi to sell us very powerful graphics engines at crazy prices.

    If I remember correctly, the only real implementation that went against this process was PowerVR's Tile-Based Deferred Rendering, which worked out what the viewer could actually see in each frame, drew just that, and did it with much lower processing and memory requirements than what 3dfx, nVidia and ATi were using. It was only because PowerVR didn't include the Transform and Lighting processing that became the tech-du-jour of the majority of games that nVidia and ATi were able to out-perform the PowerVR cards.

    Instead of careering down the path of more and more unnecessary "cores"/pipelines and associated memory, can't we have a smarter approach along the lines of TBDR that will consume less power, cost less, and won't need a massive card taking up room in my PC case, or a CPU with embedded graphics needing a space-hogging socket on my mobo? It's interesting to note that the more power-efficient PowerVR technology has become almost a standard in handsets and smartphones, even in (insert gasps of shock and horror) the fashion victim's fave, the iBone!
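
    To make the contrast concrete, here is a deliberately stripped-down, hypothetical sketch of the "shade everything, let the depth test throw most of it away" approach described above - real pipelines are far more involved (and use early-z tricks), but the wasted work is the point:

        // Hypothetical sketch of naive z-buffered rendering: every submitted
        // fragment gets shaded, but only the nearest one per pixel survives.
        #include <cstddef>
        #include <cstdint>
        #include <cstdio>
        #include <limits>
        #include <vector>

        struct Fragment { int x, y; float depth; std::uint32_t colour; };

        // Stand-in for expensive per-fragment shading work.
        static std::uint32_t shade(const Fragment& f) { return f.colour; }

        int main() {
            const int w = 4, h = 4;
            std::vector<float>         zbuf(std::size_t(w) * h, std::numeric_limits<float>::max());
            std::vector<std::uint32_t> fb(std::size_t(w) * h, 0u);

            // Two "objects" covering the same pixel at different depths.
            const std::vector<Fragment> frags = {
                {1, 1, 0.2f, 0x00ff00u},   // near box, drawn first
                {1, 1, 0.9f, 0xff0000u},   // far box: shaded anyway, then discarded
            };

            std::size_t shaded = 0, kept = 0;
            for (const Fragment& f : frags) {
                const std::size_t i = std::size_t(f.y) * w + f.x;
                const std::uint32_t c = shade(f);  // work is done for EVERY fragment...
                ++shaded;
                if (f.depth < zbuf[i]) {           // ...but only the nearest survives
                    zbuf[i] = f.depth;
                    fb[i]   = c;
                    ++kept;
                }
            }
            // A tile-based deferred renderer would resolve visibility first and
            // only call shade() for fragments that actually reach the screen.
            std::printf("shaded %zu fragments, kept %zu\n", shaded, kept);
        }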

  24. Anonymous Coward
    Black Helicopters

    end of encryption!

    @AC "Who gives a hoot how many cores? What is the user need? Does it meet it or exceed it? Simple."

    This family of chips will mean that all such desktop/laptop PCs contain a processing array suitable for attacking most forms of encryption.

    See coWPAtty and Pico FPGA arrays - the expensive current state-of-the-art systems for 'lanman' rainbow tables. DES, triple DES, GSM A5, WPA/WPA2: gone!!!!

    Larrabee parts are GPGPUs, not just GPUs, and that almost means the end of cryptographic secrecy - especially for resource-constrained devices.

    Survey: do you use one of the 1000 most common WiFi SSIDs? Do you use a pre-shared WPA passphrase of around 20 characters or less?

    Survey answer: yes! You're in the 33GB reverse lookup table!

    (available here http://umbra.shmoo.com:6969/, 24 seeds)

    Now just extrapolate this to nearly all current RCD crypto. Soon stuffed!?
