
Memory muddle muddies Intel's Exascale ambitions

Intel's lofty attempt to make a supercomputer capable of an exaflop by 2020 while consuming a mere 20 megawatts of power is running into major problems due to the pesky laws of physics. When Intel announced its exaflop goal back in 2011, the chipmaking giant talked up a variety of technologies it was bringing to bear on the …

COMMENTS

This topic is closed for new posts.
Anonymous Coward

You're holding it wrong.

>To this hack's mind, Intel's big problem is that as it runs to meet its goal, it is perpetually being thwacked in the face by the fundamental laws of physics which rear up in the materials it is using or the ways it wants to shuttle information.

Nope. The problems and limits are all design problems and limits. Despite what the salesmen would have us believe, there's immense room for improvement and we're aeons from approaching the limits of physics.

1
1
Silver badge
Terminator

Re: You're holding it wrong.

The time to the singularity is not aeons, but a few Friedman units.

Barring a nucular war or two, of course.

0
0
Bronze badge

One alternative would be a constrained programming model that could allow for simpler cores with higher frequencies

Hasn't Intel heard about RISC?

2
0
Anonymous Coward

Exactly what I thought reading that line: Sounds like they're on the verge of inventing some whole new sort of computing using a reduced instruction set! I wonder what they'll call it.

;-)

2
0
Silver badge

I suppose he thinks more along the lines of MISC

Time to break my INMOS T404 out of its lucite cube?

0
0
Bronze badge

bit fiddlers

A very funny way of referring to these massive, state of the art machines!

0
0
Anonymous Coward

Where are the bottlenecks?

Allegedly Mr Intel says "a constrained programming model that could allow for simpler cores with higher frequencies" just after he says "If I suddenly gave you a terahertz processor and the same memory system you wouldn't get dramatic speedups."

I can't see how those two connect.

If your bottleneck is memory system performance (i.e. if the CPU has to wait for the memory system), it doesn't matter what's in the CPU core (RISC or CISC), because the memory system is the bottleneck. Nor does it matter whether the programming model is massively multithreaded or some other variant. The memory system is the bottleneck. Intel aren't in the memory business any more, are they? So they're blaming their suppliers?
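To make that concrete, here's a toy sketch of my own (nothing Intel has published, and the numbers are just illustrative): a STREAM-style triad over arrays far bigger than any cache. The arithmetic is trivial; the runtime is set by how fast DRAM can feed the core, so a terahertz CPU would print much the same number.

```c
/* Toy sketch (mine, not Intel's): a memory-bandwidth-bound triad.
 * With ~400MB of working set, every iteration misses cache and the
 * loop runs at DRAM speed, no matter how fast the core is. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1u << 24)   /* 16M doubles per array, ~128MB each */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    clock_t t0 = clock();
    for (size_t i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];          /* two flops, 24+ bytes of DRAM traffic */
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("effective bandwidth: %.1f GB/s\n", 24.0 * N / secs / 1e9);
    printf("check: %f\n", a[N / 2]);       /* stop the compiler eliding the loop */
    return 0;
}
```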

In fact in certain cases, where the bottleneck is the bandwidth of the instruction fetches (it does happen), you could argue that reverting to CISC might be worth a look, because with CISC you get more work done per instruction fetch, although predicated instructions with RISC already help this to a limited extent.

Not expecting CISC (e.g. VAX) to re-emerge in 64bit guise just yet though.

Maybe Intel could just put lots more cache on the CPU, like they already have to do with Itanic to get anything approaching reasonable performance out of it?

But then if it's an x86 instruction set in a multicore chip you've got all the fun of managing cache coherency across a multisocket system (it's hard enough with multiple cores in a single socket).

Still, it must be doable in some way; there are 64-way and higher x86 SMP systems around. Whether anyone actually uses them, I don't know, but HPQ allegedly have them.

Oh Intel, what a mess you've got yourself into with this x86 addiction, and your unsuccessful attempts to kick the habit.

3
1

I don't know why this is a problem..

Furber and I announced an ARM-based vector processor for neuroscience applications with an equivalent power-performance ratio in Lisbon in June, with no need to invent new technologies: just 28nm, mobile DRAM, and 3D packaging.

... Of course, if you insist on something as power-hungry as x86, you'll need to be a bit more inventive, ...

1
1
Silver badge

Re: I don't know why this is a problem..

Exascale does not just mean "ARM based vector processor".... for specialized applications.

You will need to defend the SpiNNaker somewhat more. Or why not tell Intel that their XScale already puts them where they want to be?

0
0
Anonymous Coward

Re: I don't know why this is a problem..

DAM, they probably realise "their XScale [would have put] them where they want to be" ...but they don't have an XScale. They dumped it in 2006. oops! Hence the need to rediscover it. Still got that ARM licence though...

1
0

Re: I don't know why this is a problem..

"Exascale does not just mean "ARM based vector processor".... for specialized applications."

Talking with Nvidia executives in the back of a taxi in Lausanne: "Exascale is just a marketing term; it means whatever we want it to mean!"

Still, to be serious for a moment, the major energy consumption is going to lie in the interconnect. Making it fully general purpose and scalable will be extremely expensive. As most supercomputers are made of stock Intel components, it might be useful to consider custom interconnect in order to drive energy costs down. The different supercomputer customers have wildly different requirements. Google, for example, the largest user of supercomputers on the planet, has no (or very little) need of floating point.

What I expect to happen with next generation supercomputers is:

(*) Vectorization (drives down fetch-execute costs; rough sketch after this list)

(*) 3D-stacked DRAM (reduces memory access energy costs by a factor of 5 at 28nm)

(*) As many cores as you can put in. (Steve's on record saying that energy efficiency dictates that these cores need to be as small and simple as possible; my only comment as the programmer is I'd like one heavy-duty core to handle IO)
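On the vectorization point, here's the rough sketch I promised. It's my own toy code, nothing to do with SpiNNaker or any announced part: the same loop twice, once scalar, once written so the compiler will happily turn it into SIMD. One fetched-and-decoded vector instruction then covers 4-8 elements, which is exactly where the fetch-execute saving comes from.

```c
#include <stdio.h>
#include <stddef.h>

/* Scalar version: one instruction fetched and decoded per multiply-add. */
static void saxpy_scalar(size_t n, float a, const float *x, float *y)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* restrict promises x and y don't overlap, so gcc/clang at -O3 -march=native
 * will auto-vectorise this: one fetched instruction now produces 4-8 results,
 * amortising the fixed fetch/decode energy across them. */
static void saxpy_vec(size_t n, float a, const float *restrict x, float *restrict y)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    float x[1024], y1[1024], y2[1024];
    for (size_t i = 0; i < 1024; i++) { x[i] = (float)i; y1[i] = y2[i] = 1.0f; }

    saxpy_scalar(1024, 2.0f, x, y1);
    saxpy_vec(1024, 2.0f, x, y2);
    printf("%f %f\n", y1[512], y2[512]);   /* identical results, fewer instructions */
    return 0;
}
```

Build it with and without -O3 -march=native and count the instructions in the inner loop if you don't believe me.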

1
0
Silver badge

Re: I don't know why this is a problem..

Yes!

Plus -> cores as near the memory as possible. Back to the active memories of the early 90s, please!

On a tangent, the IEEE Computer issue of August 2013 has a focus on "Next Generation Memory". As IEEE still cannot be arsed to provide open access to the hallowed Intellectual Property (steady revenue stream FTW), one has to go down to the Uni Library:

The Nonvolatile Memory Transformation of Client Storage

Refactor, Reduce, Recycle: Restructuring the I/O Stack for the Future of Storage

How Persistent Memory Will Change Software Systems. There is also a YouTube video on this, but I can't watch it because "An error occurred; please try again later"

And Intel sold XScale? Oh well. More x86 then. Based on the marketing section's idea that one can "leverage" existing x86 software for completely new infrastructure, I suppose.

1
0
Anonymous Coward

"At the moment the company is using four distinct wavelengths of light to generate 50Gbps in interconnect capacity, and is looking at moving to eight to get to 100Gbps."

Wow, so we already have 100Gbps Ethernet, with 400Gbps coming soon. Nothing says they need to use Ethernet, but the wavelengths they're using could carry it.
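Presumably that works out at 12.5Gbps per wavelength: 4 × 12.5 = 50Gbps today, and 8 × 12.5 = 100Gbps once they double the lane count.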

0
0

Exaflop may be the current goal, but Intel's already produced two wottaflop processors: the iAPX32 and Itanium.

6
0
Anonymous Coward

"two wottaflop processors"

I believe at this point the modern trend requires me to say:

ROTFPMSL

and

You win the Internets.

Do you own the rights to that? Is it sublicenceable?

Either way, thank you for much merriment. Please come back frequently.

ps

isn't it usually called iAPX432? But that's OK, my tryping's carp too.

0
0
Thumb Up

Classic

0
0

You forgot

the 186, 286, 860, 960

1
0
This topic is closed for new posts.