The “Fastest Supercomputer” title may move 6,940 miles (11,167km) eastward in 2012 from Kobe, Japan to a small Tennessee town. That’s if the folks at the Oak Ridge National Labs (ORNL), along with Cray and AMD, can pull off a massive upgrade of the existing Jaguar system. They’ll replace existing nodes with the new Cray XK6 …
Notes on Titan
First, it is "Oak Ridge National Laboratory".
And, "Titan" could be one of the Tennessee Titans NFL team ;)
6,940 miles (11,167km)
Oh, I can't be bothered to convert these weird units - how many brontosauruses is that?
100 processors at 1 MFLOPS (a million floating-point operations per second, in case you didn't know) never equals 100 MFLOPS, and only approaches it if the system is, in fact, executing 100 different programs at the same time.
For a single program sharing the 100 processors, some speed advantage is possible if the task being performed can be divided into sub-tasks which can run simultaneously. Graphics processing is one important example of this - indeed, the ideal GPU would contain one processor per pixel - which is why we see companies that specialised in designing GPUs now moving into high-performance parallel computing.
But the inconvenient truth remains that most computing problems are linear - an operation cannot be performed on data that isn't available until the previous operation has finished. Moreover, the operating system required to support multiple processors is necessarily more complex, and therefore slower, than one which simply schedules tasks for a single processor, so the task which can't be parallelised actually runs slower on multi-processor systems.
Am I right?
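Roughly right about the limit, and it has a name: Amdahl's law. A sketch of the arithmetic (the fractions chosen here are illustrative, not measurements of any real system):

```python
# Amdahl's law: if a fraction p of a program's work can be parallelised
# across n processors, the rest stays serial and caps the speedup.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# A program that is only 50% parallelisable tops out below 2x,
# even on 100 processors:
print(round(amdahl_speedup(0.5, 100), 2))   # 1.98
# Even 95% parallelisable work gains far less than 100x:
print(round(amdahl_speedup(0.95, 100), 2))  # 16.81
```

Which is why the workloads these machines actually run are chosen to have very little serial work per job.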
How much *****in useful work gets done
More petaflops and Linpack bollocks, so the head of the academic/gov't facility can brag to his politician.
How much useful stuff, i.e. papers, gets produced?
The world is shagged and you report this future possibility!
"Moreover, the operating system required to support multiple processors is necessarily more complex, "
"and therefore slower, than one which simply schedules tasks for a single processor"
Not necessarily - not in any real-world sense.
"so the task which can't be parallelised actually runs slower on multi-processor systems.
Am I right?"
Finger in the air - it may run at the same speed, or a bit slower. Kind of a moot point however, as the sort of computations run on these things are often embarrassingly parallel or, at the very least, amenable to parallelization.
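"Embarrassingly parallel" just means each work item is independent, so the job splits cleanly with no coordination between workers. A minimal sketch (the `simulate` function is a made-up stand-in for a real unit of work, e.g. one Monte Carlo run):

```python
# Embarrassingly parallel work: independent tasks, no shared state,
# so they map straight onto a pool of worker processes.
from multiprocessing import Pool

def simulate(seed):
    # Hypothetical stand-in for an independent unit of work:
    # a deterministic pseudo-random walk from a given seed.
    x = seed
    for _ in range(1000):
        x = (1103515245 * x + 12345) % (2 ** 31)
    return x

if __name__ == "__main__":
    with Pool() as pool:
        # One task per seed; the pool farms them out across cores.
        results = pool.map(simulate, range(8))
    print(len(results))  # 8
```

The serial-bottleneck argument above barely applies here: there is no step where one task waits on another's output.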
"...Kind of a moot point however, as the sort of computations run on these things are often embarrassingly parallel or, at the very least, amenable to parallelization..."
Yes, and because Linux is very good at horizontal scaling (it scales well in a cluster), we see that Linux is often used. Workloads that suit clusters are what Linux is good at.
Will it play Crysis 2 in Hi Def 3D???
I don't visit slashdot any more, so:
"Just imagine a beowulf cluster of these things!"
AMD CPUs with nVidia GPUs?
Vote of "No Confidence" in what's left of ATI, then?