Intel shows glimpse of 32-core Larrabee beast

Intel has opened up a corner of its kimono and shown a picture of the upcoming Larrabee chip, indicating it will be a 32-core graphics processing engine. As reported here and elsewhere, Larrabee is Intel's response to Nvidia and AMD graphics processing chips. Larrabee will be, in its first iteration, a 32-core processor. Each …

COMMENTS

This topic is closed for new posts.
  1. Matt
    Joke

    larrabee

    intel are obviously Reg stalkers!

    is that Sarah Bee's prettier/uglier* (delete as applicable) sister??

  2. Eddie Edwards
    Dead Vulture

    Ignorant report

    This report shows almost no prior knowledge about Larrabee, despite a SIGGRAPH paper and two GDC talks.

    "Each core is expected to be an x86 core"

    No, each core *is* an x86 core, plus a *wide* vector unit (16-way SIMD) with predication and scatter/gather load support. This is all in the SIGGRAPH paper you linked to.

    "Larrabee will have a shared pool of cache memory"

    Not quite. It has 256KB of dedicated L2 cache per core and cores can read each other's L2 caches. Each core has 32KB of dedicated L1. This is in ... the SIGGRAPH paper you linked to.

    "We wonder if Intel is using its Atom processor design as the Larrabee core to meet the chip real estate limitations"

    No, they are not; they are using the P54C processor design, as they have previously stated in their GDC talks.

    And after all that the only actual *news* is that Intel have released a die image ... so where is the link for that?
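    [For readers wondering what "predication and scatter/gather" buy you, here is a minimal Python sketch. It is purely illustrative: the lane count matches the SIGGRAPH paper's 16-way SIMD, but the function names and the flat-list "memory" are inventions, not Larrabee's actual instruction set.]

```python
# Illustrative sketch of a 16-wide predicated SIMD gather-add.
# Lane width (16) follows the SIGGRAPH paper; everything else is invented.

LANES = 16

def gather(memory, indices):
    """Gather: load one element per lane from arbitrary addresses."""
    return [memory[i] for i in indices]

def predicated_add(dest, a, b, mask):
    """Add a + b lane-wise, but only write lanes whose mask bit is set;
    masked-off lanes keep their old destination value."""
    return [x + y if m else d for d, x, y, m in zip(dest, a, b, mask)]

memory = list(range(100))                   # pretend flat memory
indices = [3 * i for i in range(LANES)]     # strided addresses
a = gather(memory, indices)                 # one load, 16 lanes
b = [10] * LANES
mask = [i % 2 == 0 for i in range(LANES)]   # predicate: even lanes only
dest = [0] * LANES
result = predicated_add(dest, a, b, mask)
```

    [The point is that branches and irregular memory access become per-lane mask bits and index vectors, so the whole loop body stays vectorised.]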

  3. edward wright
    Thumb Up

    No Atoms here

    Nope, Larrabee is based on the good ol' Pentium P54C with some nice mods:

    http://www.tomshardware.com/reviews/intel-larrabee-graphics,2253-4.html

    Still, take an old Pentium chip, add some nice 512 bit vector stuff, clock it at... who knows, but more than 233MHz... then 32+ of them in parallel... it'll be nice.
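    [Taking the numbers above at face value, a back-of-envelope peak-throughput estimate is easy to sketch. The 16 lanes follow from the 512-bit vector width; the clock speed and FMA support are pure guesswork, as the comment says.]

```python
# Back-of-envelope peak single-precision FLOPS for a hypothetical
# 32-core Larrabee. Only the lane count is derived from the stated
# 512-bit vector width; clock and FMA are assumptions.

cores = 32
vector_bits = 512
lanes = vector_bits // 32        # 16 single-precision lanes per core
flops_per_lane = 2               # assuming fused multiply-add = 2 FLOPs
clock_ghz = 2.0                  # speculative; "more than 233MHz" at least

peak_gflops = cores * lanes * flops_per_lane * clock_ghz
print(peak_gflops)               # 2048.0 GFLOPS under these assumptions
```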

  4. Anonymous Coward
    Thumb Down

    @ Eddie Edwards

    That pedant course you did was really money well spent eh?

    All that bitterness, and you've still never kissed a girl.

  5. Eddie Edwards
    Boffin

    That die image in full

    http://www.pcper.com/image.php?aid=news&img=larrabee-big.jpg

    BTW: It looks shopped. I can tell from some of the pixels and from seeing quite a few shops in my time.

  6. Stu
    Alert

    Lets hope then that...

    ...Intel have, this time, actually LOOKED at Nvidia and AMD (ATI?) boards and noted the performance; then we only need to hope they give a damn and make it just as fast, if not faster.

    A quote from the Intel GMA950 website -

    Responsive Graphics Performance With a powerful 400MHz core and DirectX* 9 3D hardware acceleration, Intel® GMA 950 graphics provides performance on par with mainstream graphics card solutions that would typically cost significantly more.

    LOL

  7. Matt Bryant Silver badge
    Unhappy

    RE: No Atoms here

    ".....then 32+ of them in parallel...." And there's my problem - in graphics that parallel bit can be done with a bit of coding, but it means I'll have 32 weenie Pentium cores (well, not as weenie as Sun Niagara cores) which won't be much good for running many of the current unparallelised and heavy-threaded Windows or Linux apps we run on Xeon. I'm still looking for a multi-socket server with at least quad-core Atom CPUs for a real low-power solution, pref in a blade, so I can carry on using my current apps. Graphics doesn't really play a large part in our needs (well, not officially, anyway....). So whilst Larrabee is a good demonstration of Intel getting really good at multi-core, it's not of much immediate interest.

  8. Chris Mellor

    Comment on comments

    Good comments. Here's a comment on the comments:

    1. The story says: "Each core is expected to be an x86 core, and each will be paired with a vector processing unit." A comment says: ""Each core is expected to be an x86 core" No, each core *is* an x86 core, plus a *wide* vector unit (16-way SIMD) with predication and scatter/gather load support. This is all in the SIGGRAPH paper you linked to."

    Here's a quote from an Intel paper (Larrabee: A Many-Core x86 Architecture for Visual Computing): "Larrabee uses multiple in-order x86 CPU cores that are augmented by a wide vector processor unit"

    Can't see the difference here: "plus a", "paired with", "augmented by" - they all mean each x86 core gets a vector unit alongside it.

    2. A comment says: ""Larrabee will have a shared pool of cache memory" Not quite. It has 256KB of dedicated L2 cache per core and cores can read each other's L2 caches. Each core has 32KB of dedicated L1. This is in ... the SIGGRAPH paper you linked to."

    The Intel paper again: "A coherent on-die 2nd level cache allows efficient inter-processor communication and high-bandwidth local data access by CPU cores." We're in the same ball park again here, surely? The cores share a cache memory resource.

    3. It's not an Atom CPU, as a comment points out: ""We wonder if Intel is using its Atom processor design as the Larrabee core to meet the chip real estate limitations" No, they are not, they are using the P54C processor design, as they have previously stated at their GDC talks."

    Yes, granted, but I wanted to play with the Atom and molecule idea and you spoiled my little game :-( ... That'll learn me. I changed the text.

    4. A comment says "And after all that the only actual *news* is that Intel have released a die image ... so where is the link for that?"

    The only news is that Intel has released a die image. Er, not quite. As the intro says: "Intel has opened up a corner of its kimono and shown a picture of the upcoming Larrabee chip, indicating it will be a 32-core graphics processing engine." The 32-core confirmation is newish. We also get to hear that the ship date is now the first half of 2010, and hear a bit about Intel's software development efforts to support Larrabee. Since the Reg hadn't covered Larrabee since December last year, adding the background info seemed reasonable.

    Yes, the die image reference should have been there; it got lost somehow, and is there now.

    I dunno if Larrabee is the right way to go, combining a standard x86 app engine and a graphics execution engine on one chip rather than using a separate multi-core x86 and GPU combo. It sounds sexy enough, but will it be fast enough, and will software development technology keep up with all its attributes?

    Chris.

  9. Anonymous Coward
    Joke

    Pentium P54C core? 32 of them?

    What is the name for 32-uple shit:

    http://bofh.ntk.net/Star-Trek-Lost.html

  10. Art

    Intel shows glimpse of 32-core Larrabee beast

    This is for this: http://www.khronos.org/developers/library/overview/opencl_overview.pdf

    Art

  11. Ken Hagan Gold badge

    @Matt

    "So whilst Larrabee is a good demonstration of Intel getting really good at multi-core, it's not of much immediate interest."

    You'd be an end-user then. This is of *immense* immediate interest to software developers. In the x86 world, it is the most interesting development since 32-bit.

    You note that it will run existing apps rather slowly. That's the point. This is the first x86 chip that runs existing software like a dog. (Yeah I remember the Pentium Pro's performance on 16-bit code. This is much worse.) To make Larrabee usable, people need to go back and rewrite that software. How likely is that?

    Well, first let me make the obvious point that boxes using this chip will be the first test systems that actually make non-parallel software look bad and parallel software look good. In terms of convincing non-programmers of the need to revisit old code, *both* of those points are important. You'll be able to show your manager, "Look, here's a cheap box with Intel's next-gen chip. The old code runs like molasses in winter but the rewritten code goes faster than the old code did on that super expensive Xeon box over there. So we've got a problem, but the solution could be a huge win."

    The last serious temptation to go back and fiddle with existing code was the 64-bit transition. Many folks didn't bother because there was only a marginal performance improvement and there were porting difficulties because the actual semantics of a given line of code might change. This time, Intel are pushing a toolset that (they claim) offers a significant performance boost (on Larrabee, but probably much less so on existing multi-core chips) with *no* semantic changes. That's a free lunch, so we'll treat the claim with some caution, but we do know that at *some* point in the next decade, software is going to experience a revolution. Smart programmers are therefore listening out for the first shot. Maybe this is it.

  12. Ken Hagan Gold badge

    @Matt

    "So whilst Larrabee is a good demonstration of Intel getting really good at multi-core, it's not of much immediate interest."

    You can bet it is of interest to anyone who ties their software pricing to the number of cores.

    I think Intel are talking about several hundred watts for the 32-core offering, but the 8-core Larrabee would be a perfectly reasonable chip to put in an SCC. This could all go mainstream *awfully* fast. Canonical have an OS for such a chip. Microsoft don't, until they change their pricing structure. The same goes for Oracle.

    Make no mistake, Larrabee is awfully disruptive technology.

  13. Anonymous Coward
    Thumb Down

    even more nothingness then

    Goodie, I'll be able to watch 31 cores doing nothing now, rather than just one.

    Ah progress...

