Notes on Titan
First, it is "Oak Ridge National Laboratory".
And "Titan" could also refer to the Tennessee Titans NFL team ;)
The “Fastest Supercomputer” title may move 6,940 miles (11,167km) eastward in 2012 from Kobe, Japan to a small Tennessee town. That’s if the folks at the Oak Ridge National Labs (ORNL), along with Cray and AMD, can pull off a massive upgrade of the existing Jaguar system. They’ll replace existing nodes with the new Cray XK6 …
100 processors at 1 MFLOPS (million floating-point operations per second, in case you didn't know that) never equal an effective 100 MFLOPS, and only approach it if the system is, in fact, executing 100 different programs at the same time.
For a single program sharing the 100 processors, some speed advantage is possible if the task being performed can be divided into sub-tasks which can run simultaneously. Graphics processing is one important example of this - indeed, the ideal GPU would contain one processor per pixel - which is why we see companies that specialised in designing GPUs now moving into high-performance parallel computing.
But the inconvenient truth remains that most computing problems are linear - an operation cannot be performed on data that isn't available until the previous operation has finished. Moreover, the operating system required to support multiple processors is necessarily more complex, and therefore slower, than one which simply schedules tasks for a single processor, so the task which can't be parallelised actually runs slower on multi-processor systems.
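The limit being described here is Amdahl's law: the serial fraction of a program caps the speedup no matter how many processors you throw at it. A quick sketch (hypothetical numbers, just to show the arithmetic):

```python
# Amdahl's law: overall speedup of a program where a fraction p of
# the work can run in parallel across n processors, and the
# remaining (1 - p) must run serially.

def amdahl_speedup(p, n):
    """Speedup = 1 / ((1 - p) + p / n)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 100 processors, a program that is 90% parallelisable
# speeds up by less than 10x, because the serial 10% dominates:
print(amdahl_speedup(0.90, 100))  # ~9.17
print(amdahl_speedup(0.99, 100))  # ~50.25
```

So the poster's point holds for the serial part; the open question is how big that part is for real workloads.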
Am I right?
"Moreover, the operating system required to support multiple processors is necessarily more complex, "
Yep.
"and therefore slower, than one which simply schedules tasks for a single processor"
Not necessarily - not in any real-world sense.
"so the task which can't be parallelised actually runs slower on multi-processor systems.
Am I right?"
Finger in the air - it may run at the same speed, or a bit slower. Kind of a moot point however, as the sort of computations run on these things are often embarrassingly parallel or, at the very least, amenable to parallelization.
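"Embarrassingly parallel" just means each unit of work is independent of the others, so it splits cleanly across workers with no coordination. A minimal sketch (the `simulate` function is a made-up stand-in; real HPC codes would use MPI or GPU kernels rather than a thread pool):

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(seed):
    # Stand-in for one independent simulation run: no run depends
    # on the output of any other, which is what makes the workload
    # embarrassingly parallel.
    return seed * seed % 97

inputs = range(8)

# Each input can be handed to any worker in any order; the results
# are identical to running them one after another.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(simulate, inputs))

print(results == [simulate(s) for s in inputs])  # True
```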
"...Kind of a moot point however, as the sort of computations run on these things are often embarrassingly parallel or, at the very least, amenable to parallelization..."
Yes, and because Linux is very good at horizontal scaling (it scales well in a cluster), we see that Linux is often used. Workloads that suit clusters are exactly what Linux is good at.