Intel to tell all about roaring 96GB/s QuickPath interconnect

You horrible cynics out there looked at Intel's mushy Montvale chip and scoffed. "That's the end of the Itanic." Ah, but there's a fresh monster on the horizon known as Tukwila, and systems based on that puppy should fly if its brand new QuickPath interconnect arrives as expected. Next week Intel will disclose details on …

COMMENTS

This topic is closed for new posts.

30MB of cache...

means around 8MB for every core and 4MB for every thread. This is the minimum an Itanic core needs to work at all. The architecture is fixed, so they can't upgrade it without forcing users to recompile everything; increasing the clock speed and the cache size are the only options. With an x86, you can always play with the number and depth of pipelines. (The x86 has some reserve power in it: with a hardware-based optimizer you could theoretically get 256 instructions per clock where current CPUs do 4, while the Itanium is limited to 4 instructions by its own specification and can't evolve.)
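A quick check of the arithmetic above, taking the quad-core, two-threads-per-core figures implied by the comment (a back-of-the-envelope sketch, not a statement about how the cache is actually partitioned):

```python
# Rough cache-per-core arithmetic for a 30MB part with 4 cores
# and 2 threads per core (figures assumed from the comment above).
cache_mb = 30
cores = 4
threads_per_core = 2

per_core = cache_mb / cores                           # 7.5 MB, "around 8MB"
per_thread = cache_mb / (cores * threads_per_core)    # 3.75 MB, "around 4MB"

print(f"{per_core} MB per core, {per_thread} MB per thread")
```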


Tukzilla surely?

Intel have missed a trick here. A new "monster on the horizon" should be called Tukzilla. I can already see thousands of hapless citizens running away from the amazing stop-motion composite film effect!


32nm 100gbit mass products optical interface chip

I'd really like to see bus interconnects, especially to external cards and clustering systems, go optical. I heard a rumour that IBM's group of 32nm partners is working up a very interesting chipset for this. Imagine no PCIe 3.0, just a multi-strand mini optical cable interconnecting everything, clustering off-the-shelf workstations together at native bus speeds..

Would work nicely for large arrays of GPGPUs too.

It would be cool if processors could interconnect to build a scalable VM monster (I just want it for games.. is that wrong?)

Anonymous Coward

96 gigabyte a second, is it really a big deal today?

as the title says, 96 gigaBYTES a second. Is it really a big deal today?

after all, we can buy off-the-shelf 100-gigaBIT-over-optical Ethernet systems now.

and that's for far longer-distance networking than any on-chip pathway, so an 8-times increase (8 bits to a byte, remember) should be a walk in the park for chip makers.

shouldn't we be seeing terabit or even terabyte speeds from consumer on-chip interconnects today?
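For what it's worth, the byte/bit conversion being argued about works out like this (a quick sketch; the 96GB/s figure is taken straight from the headline):

```python
# Convert the headline 96 GB/s into Gbit/s to compare with Ethernet rates.
BITS_PER_BYTE = 8

quickpath_gbyte_s = 96
quickpath_gbit_s = quickpath_gbyte_s * BITS_PER_BYTE   # 768 Gbit/s

ethernet_gbit_s = 100
ratio = quickpath_gbit_s / ethernet_gbit_s

print(f"QuickPath: {quickpath_gbit_s} Gbit/s, about {ratio:.2f}x a 100Gbit link")
```

So the on-chip figure is already several times a 100Gbit Ethernet link, not behind it.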


@ AC

96GByte is, to all intents and purposes, 1Tb..

I mean, I know it's 8 bits to a byte and all that, but, well, overheads and all that gumpf

innit?

oh say it is Dorothy, please say it is


damn @ac

must have missed these ethernet specs *ggg*.. i thought single-mode fiber was maxed out at 2.4GBit..

i am getting old. anyone remember 10base2? *sigh*

*looking for the nostalgia-avatar...*

Anonymous Coward

@wolff

100 GbE was working in 2005 and commercially available in 2006. Of course most users are using the cheaper 40 GbE, due in part to the per-port cost of switches in the rip-off markets of high-performance FPGA-equipped routing and switching kit, not so much any inability to push 100 GbE through the fibre.....

the biggest letdown for worldwide uptake of better-than-1-gig Ethernet LANs etc. is in fact the failure of end-user/SOHO third-party Ethernet providers to actually make good-value routers, switches and better-than-1-gig Ethernet cards available at reasonable end-user price points....

a set of 10 GbE SOHO kit would be a very good thing to have today, but it costs quite a lot, so it's not going to show any time soon in that market, unless someone gets a clue and brings the prices down.

even though it's sat there on the shelf alongside the existing 100 GbE kit TODAY.


Intel's missing the point again

Opteron beat Itanium as the 64-bit mainstream processor of choice because it offered a smooth upgrade path from 32-bit processors and performed well. Intel finally caught up to it using the Xeon with EM64T. The Itanium is a workstation and high-end server chip, like the Power6, Niagara, and UltraSparc. It's not going to compete with Opteron. They themselves know Opteron and Xeon are competitors.

Intel is also touting its own quad-core performance over AMD's, when the highest-end AMD quad-cores cost about the same as Intel's higher-end dual cores. Go ahead and brag about being faster and more expensive. I'll just buy a chip that mops the floor with your dual core without spending any extra. I have a feeling that's what AMD has in mind, and plenty of customers will be happy to do it.

Now, Intel is back to bragging about the technical superiority of a CPU compared to something with which it doesn't directly compete. They can't attack AMD in the segment the Itanium actually occupies, because AMD doesn't field any processors in that segment. IBM and Sun do, but where are the comparisons to Power, UltraSparc, and Niagara? Are they omitted because the Itanium is not as impressive against those?

(Written by Reg staff)

Re: Intel's missing the point again

Not quite, mate. Quickpath is heading to Xeon right after Tukwila.


Itanium not limited to 4 instructions/cycle

@auser:

"itanium is limited to 4 instructions by its own specification and can't evolve"

All Itanium processors so far have executed two bundles of three instructions each per cycle ... so the current limit is 6/cycle. But the architecture doesn't limit the rate to two bundles ... in fact the whole point of the "stop" bits embedded in the instruction bundles is to allow some future implementation to execute code at more than 2 bundles per cycle without a re-compile.
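The stop-bit mechanism described above can be sketched: instructions between stops form one independence group, and a wider implementation simply consumes more consecutive stop-free bundles per cycle, with no recompile. This is a toy model of the idea, not real IA-64 encoding:

```python
# Toy model of IA-64 issue groups. stop_bits[i] is True if a "stop"
# follows bundle i. A machine may issue up to bundles_per_cycle
# consecutive stop-free bundles each cycle, so wider hardware runs the
# same binary in fewer cycles.
def cycles_needed(stop_bits, bundles_per_cycle):
    cycles = 0
    in_group = 0
    for stop in stop_bits:
        in_group += 1
        if stop or in_group == bundles_per_cycle:
            cycles += 1
            in_group = 0
    if in_group:            # flush a trailing partial group
        cycles += 1
    return cycles

# 8 bundles with stops after bundles 3 and 7: two independent groups of 4.
stops = [False, False, False, True, False, False, False, True]
print(cycles_needed(stops, 2))  # a 2-bundle machine needs 4 cycles
print(cycles_needed(stops, 4))  # a 4-bundle machine needs only 2
```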


Intel's Terascale technology will allow both IA64 and x86 cores to be mixed...

..into the same package, along with GPGPU cores as well as cores for vector acceleration and for FPU processing.

I expect to see Intel releasing various versions of Terascale-based 32-core CPUs. For the desktop segment they will probably still be mainly x86-64 based, but for the server market I expect them to push IA64 into Terascale, thanks to architecture features that allow various kinds of cores to be mixed.


RE: Christopher E. Stith

"...They can't attack AMD in the segment the Itanium actually occupies, because AMD doesn't field any processors in that segment. IBM and Sun do, but where are the comparisons to Power, UltraSparc, and Niagara? Are they omitted because the Itanium is not as impressive against those?...."

Well, actually the whole UNIX segment is under attack from cheaper x86 kit eating upwards into their space. Applications that traditionally needed a hulking server can now be run on 4-way Xeons or Opterons. So the UNIX vendors need to defend against this by showing their enterprise servers are going to be cost-competitive against x86 kit, so Intel is showing how Itanium will be.

The article does mention Tukzilla (thanks, Mr Morley, I like that one!) as comparable to Power6. Please explain why you would need to compare it to UltraSPANKed; the SPARC chips are so far behind performance-wise that even the old PA-RISC chips caned them. Likewise Niagara, which is only good at multi-threaded apps with small threads, such as webserving; it barfs doing serious work like Oracle, and the licensing costs would kill the idea before the rubbish performance even came into it. In the true enterprise space, Tukzilla's only competition is going to be Power6.


The 32nm 100GbE chip will be scalable like PCIe channels

An RJ45-like 4-pair optical plug will have 4 paths, each path doing 100gbit over optical but only 40gbit over copper.

Picture a completely changed computer architecture where you modularise memory, CPU(s), GPGPUs, etc. and separate them from the motherboard. The motherboard has 1x, 4x, 8x, etc. 100GbE optical ports (on a new high-density 8-fibre connector). Computer components start coming in 5.25" modules (quarter, half, full and double height; half or full length).

Want to add 6 GPGPU cards? No need for multiple PCIe 2.0 16x slots, you just add them on an optical bus. Run out of optical ports and you get an off-the-shelf 100GbE switch module (made with the same inexpensive 32nm chips). Want to scale your system up? You buy another computer or host system and interconnect them with an 8x 100GbE interconnect.
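For scale, here is how the proposed links compare with a PCIe 2.0 x16 slot on nominal per-direction rates (the 100Gbit optical links are this comment's proposal, not shipping kit; PCIe 2.0 is 500 MB/s per lane per direction):

```python
# Aggregate bandwidth of N proposed 100Gbit optical links, in GB/s,
# versus a PCIe 2.0 x16 slot (16 lanes x 0.5 GB/s per direction).
def optical_gbyte_s(links, gbit_per_link=100):
    return links * gbit_per_link / 8   # 8 bits per byte

pcie2_x16_gbyte_s = 16 * 0.5           # 8.0 GB/s per direction

for links in (1, 4, 8):
    bw = optical_gbyte_s(links)
    print(f"{links}x 100Gbit optical: {bw} GB/s "
          f"vs PCIe 2.0 x16: {pcie2_x16_gbyte_s} GB/s")
```

Even a single such link would out-carry the slot it replaces, which is the whole basis of the modular-components argument.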

If a group of vendors does come together and develop a low-cost, mass-produced 32nm 100GbE multipath chip that someday comes close to approximately 5x the cost of a 1GbE chipset, all this will be possible.

The problem is, Intel and others don't want this to happen. It would mean that when you invest in your 2010 computer, it doesn't lose its functionality in 2015: an enthusiast just keeps buying optically interconnected CPU/memory/GPU modules and keeps plugging them in, creating a vast resource pool managed by virtual machine host software.

With the advent of mainstream 100GbE, and graphics subsystems interconnected into a single switchable data backbone, you no longer need VGA/DVI/HDMI/DisplayPort. Instead you'll have tiny display modules with optical ports that can show any AV source in your network on ANY type of display.

People say optical will never make it into the home. All it takes is a low-cost chip and mainstream adoption into mid-range parts. USB 3.0 will have an optical channel in the cable; there is little reason complete product lines couldn't interconnect this way too in a few years.

No one is going to bring us an optical PC evolution other than IBM and partners, as far as I can see. Few others see the potential of scalable 100GbE-based technology brought down to the home market; everyone else is thinking only of high-paying enterprise customers.


CPUs need low latency, not high bandwidth

Fast CPUs need their interconnect to provide the data they need, fast (they're probably stalled waiting for that data). While too-low bandwidth gets in the way, the critical factor is more often the time until the first datum you asked for arrives ("latency"). Prof Roger Needham used to say that "bandwidth can be made by man, but God makes latency".

Nobody ever quotes the minimum latency which is practically achievable with their interconnect, because as the interconnects get more amazing, it generally gets worse... So this bandwidth claim, like almost all other such, is irrelevant.
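The latency-versus-bandwidth point is easy to make concrete for a single cache line: the transfer time is dwarfed by the round-trip latency. (The 50ns latency figure below is an illustrative assumption, not a QuickPath spec.)

```python
# Time to fetch one 64-byte cache line: latency + size / bandwidth.
line_bytes = 64
bandwidth_gbyte_s = 96     # headline QuickPath figure
latency_ns = 50            # assumed remote-access latency, for illustration

# bytes divided by (GB/s) conveniently yields nanoseconds
transfer_ns = line_bytes / bandwidth_gbyte_s
total_ns = latency_ns + transfer_ns

print(f"transfer: {transfer_ns:.2f} ns, total: {total_ns:.2f} ns")
# Doubling the bandwidth saves a fraction of a nanosecond per line;
# halving the latency saves 25 ns. Latency dominates.
```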

BTW, optical enthusiasts should probably try to convince us that four electrical/optical transitions in their favourite path don't add delay.
