Thanks to $7.4m in government funding, a pair of national labs hope to throw their big brains at the most pressing problems facing supercomputer designers. Sandia and Oak Ridge national laboratories this week touted their new Institute for Advanced Architectures (IAA), which will explore what it takes to create "exascale" …
Torvalds goes for PC, Patterson goes for mainframe
Both Patterson and Torvalds have some good points. What it all really comes down to is what you want to accomplish.
Torvalds seems to be coming from the perspective of "what people want to do."
This does not seem to be the same thing as what a business entity or scientific organization wants to do, but what individual users want to do with their computers. Certainly, two cores are enough. (Just look at Intel's Skulltrail debacle!)
Patterson has the perspective that large organizations are going to be using CPUs with hundreds of cores, and is trying to look forward to how to efficiently use 1,000+ cores on a chip. Sun is putting eight cores on a chip with 64-thread capacity (the T2), and Intel has produced a research CPU with 80 cores and 160 FPUs. That's exactly where things are headed.
What this boils down to is that Torvalds' Linux isn't going to be running on a system with 100+ cores, and that's what cheeses Torvalds off.
...exaflop computers could crank through a million trillion calculations per second.
I wonder if it'll be able to handle file copying on M$ Vista....
Any idea when they'll be hitting the shelves?
Gimpy little f'tard.
Proof, if more were needed, that Linus isn't actually some kind of demigod, but is, in fact, simply another mediocre geek who /thinks/ he is.
The really sad thing is that the Children Of Linus will now accept his gospel as the One True Way and continue to believe that the best way to do big iron computing is just to get a couple of thousand linux boxes and stack them in a big pile. A clue: no.
Of course once the exascale computing platform is perfected, they'll all say it should be running some version of gimpux. And if anyone so much as dares to file a patent on any of technology they spent squillions of dollars developing, they'll be round with the burning torches. Well, no, actually they'll just write impotent whining blog entries about it and quote Stallman and Levy just like they always do, tedious little f'tards.
Mine's the one with the asbestos lining, ta.
Resistance is futile.
IO problems are belong to dead shunt.
I always thought digital was the dead end of computing. You can fit so much more onto analogue.
Linux is not the total of OSS
Linux is one of many OSS operating systems. I would think NetBSD is at least as portable; OpenBSD is certainly more free in the FSF sense, and likely more secure. Some form of open source UNIX (looks like the BSDs or OpenSolaris, at the moment) will find its way onto the exascale platforms, simply because it will be so tweakable that it makes a good testing and development platform. If it isn't Linux, I'll be just fine with that; I like the BSDs even better for some purposes (FreeBSD, OpenBSD, NetBSD, DragonFly, etc.).
Ballmer will likely be much more annoyed when that happens, and there will once again be some chair-throwing, probably at a former Yahoo! facility this time. If Linus is peeved, that's fine. Neither one can supposedly hold a candle to Theo de Raadt (lead of the OpenBSD project), who reputedly eats normal people and excretes secure, peer-reviewed code.
I don't care about personalities; I care about RESULTS.
@The Other Steve
The problem with Linus is that he really lacks perspective... there is an assumption that, just because he is such a well-known name, he has deep insight into the way people/groups/governments actually use computers. Reading his side of the debate, I honestly feel he just doesn't have a handle on large scientific computing, or even very large database issues. (I've participated in commercial work on CM-5s, SP1s and SP2s, nCubes, etc., so I will claim some experience in this stuff.)
The issues of data locality, high speed switching networks, and parallel software patterns are very real, and there is still a great deal of research to be done. The heartbreaking fact of course, is that so much of what gets done resembles the same stuff we were dealing with in the early '90s - i.e., no real breakthroughs yet. Evolutionary changes, yes, but no real breakthroughs.
But certainly hoping that a big pile of commodity parts is going to solve it is rather unrealistic. The only people who have a handle on that approach are Google, and only for very specific problem sets, and only using masses of brainpower to solve each programming pattern. Hardly a generalisable model, at least yet.
Where is the icon of Linus with horns? I'd best just batten down the hatches, 'cause I'm sure the Linux faithful will be beating down my post forthwith...
The Eternal Return
With all due respect to Unices and the Linuces, their lineage is dominated by mainframe thinking: a single, highly valuable CPU being divided up amongst multiple tasks to obtain maximum utilization of a scarce resource.
The price of a CPU plummeted, but few people have shaken off the old assumption that CPUs are still best exploited as shrunken, multiuser, multitasking mainframes with all the baggage that this implies. Symmetric MultiProcessing hardware has been accommodated to some degree by improvisation on the monolithic OS designs, but multicore architecture will look a lot more like shrunken PCs on a network, suitably simplified for on-die networking and so forth, but with increasing amounts of private, per-core RAM that SMP doesn't really address.
Trying to take advantage of massively multicored hardware while dragging single-processor and SMP baggage along will necessarily produce its share of backward-looking monsters and things indigest. The recent claim by Stonebraker et al. that 10X to 20X improvements in database performance may be had is based on the assumption of a (gasp!) single-threaded database application running in a dedicated core with gobs of dedicated RAM. This bolshevik approach to the application-versus-kernel threading debate assumes that we will soon be living in a one-thread-per-core world, at least in terms of application design. Although some "housekeeping chore" cores may multithread at the application or the kernel level, new designs like the H-Store will throw out almost a half-century of mainframe-think and seize an entire processor for themselves without inconveniencing other applications on the die. (Just think of all the context-switching overhead that will suddenly disappear.)
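To make the one-thread-per-core idea concrete, here is a toy sketch (all names are mine, nothing here is taken from H-Store's actual code): each partition of the key space is owned by exactly one single-threaded worker, so its data needs no locks, no latches, and no context switches; transactions are simply routed to the owner.

```python
class Partition:
    """Owns one shard of the key space; only ever touched by its one thread."""
    def __init__(self):
        self.store = {}

    def execute(self, op, key, value=None):
        # The owner runs each operation to completion: no locking required,
        # because no other thread can ever see this dictionary.
        if op == "put":
            self.store[key] = value
            return value
        return self.store.get(key)

class PartitionedDB:
    """Hash-routes each transaction to the partition (core) that owns the key."""
    def __init__(self, n_partitions):
        self.partitions = [Partition() for _ in range(n_partitions)]

    def route(self, key):
        return self.partitions[hash(key) % len(self.partitions)]

    def put(self, key, value):
        return self.route(key).execute("put", key, value)

    def get(self, key):
        return self.route(key).execute("get", key)

db = PartitionedDB(4)
db.put("acct:42", 100)
print(db.get("acct:42"))  # → 100
```

The speedup claim rests on exactly this structure: when a transaction touches only one partition, the entire critical path is straight-line single-threaded code.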
What will the software infrastructure for these dedicated cores look like? My guess is that the winning software architecture will be microkernels churning away in their individual cores, loosely coupled with each other via a message-passing system. Although this approach exacts a price in terms of message-passing overhead, it more than repays it in parallelism and scalability unobtainable with the monolithic and SMP approaches.
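The loose coupling described above can be sketched in a few lines (a minimal illustration, not any real kernel's API): each "core" is a worker that owns its state privately and is reachable only through a mailbox, so the only coupling between workers is the messages themselves.

```python
import threading
import queue

def worker(inbox, outbox):
    # Each "core" keeps its state private; nothing is shared with other
    # workers. The only coupling is message passing via the two mailboxes.
    local_sum = 0
    while True:
        msg = inbox.get()
        if msg is None:          # a None message means "shut down"
            outbox.put(local_sum)  # ...so report the result and exit
            return
        local_sum += msg         # do the work entirely on private state

inbox, outbox = queue.Queue(), queue.Queue()
core = threading.Thread(target=worker, args=(inbox, outbox))
core.start()
for n in range(1, 5):
    inbox.put(n)                 # messages in...
inbox.put(None)
core.join()
result = outbox.get()            # ...result message out
print(result)                    # → 10
```

The message-passing overhead is visible (every interaction goes through a queue), but so is the payoff: adding more workers means adding more mailboxes, not more locks.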
I've seen the future, and it works: the Tandem K-Series, designed in the '70s, pioneered the massive commercial application of loosely coupled, message-passing kernels. For about two decades, the K-Series (and later the S-Series) processed most of the world's credit and debit card transactions, and powered major stock exchanges as well. (For all I know, they still may.)
Just as CPU architecture evolved from simple to complex instruction sets and fell back to reduced instruction sets in response to hardware evolution, the time approaches when OS and application architectures may experience a similar return to their roots as well.
It will be running Linux!
Linus is there because, whatever the final solution is, it will be Linux-based.
Nearly all super/parallel solutions currently in place use some variation of Linux -- it's there, it's cheap, and researchers can tweak the source code -- and to a large extent the OS doesn't matter that much.
It's much more important to get the architecture right, and to get the various Fortran libraries that support the architecture right. The OS that schedules the threads is a necessary technicality.
@The Other Steve
Funnily enough, if you look at the development of the top 500 list of supercomputers, it is clear that clusters have been very busy displacing mainframes over the last five or six years.
Optimizing the way a mainframe handles tasks might do something about that, obviously. Nevertheless, the mainframe vs. cluster debate has little to do with ideology in real life; the real question is how much oomph the buyer gets for his money. In some scenarios, that calculation can even lead to solutions like a room full of Macs in racks. Go figure.
Meanwhile, very few high performance machines run on anything except some flavour of GNU/Linux or one of the Unices...
Building for Redundancy and/or Sub Contracted Use.
They do say that size doesn't matter, and it is not what you've got but what you do with it. And with computing it is very much exactly the same....... Garbage In, Garbage Out ..... and the bigger the machine the bigger the problem.
And in computing, just a bit of Dodgy/Perfect Code can infect the whole System.....rendering IT a Proxy Bot under Virtual Off-Site/Outer Space Control.
...exaflop computers could crank through a million trillion calculations per second.
Yeah and the damn thing will still take several minutes to boot and load in the user profile and registry hive.
No the future is 5 million threads on a single die.
The future is everything on one chip.
We will all log into one chip.
The chip will be called "The One"
Everything is better if it's all one chip. It's the one true way.
All the electronics in my car will be one chip.
My house will all be one chip.
I will have one chip in my pocket.
www.theregister.co.uk will be one chip.
"The One" will be designed by a team of 10 guys in Thailand.
Memory will always be 100 chips though. More chips is always better. Unless it's CPU chips.
So: one chip + 100 memory chips.
It is right. It is the way. So it has been said. So it shall be done.