Daryl Brach, known as pfaffen online, has built a scale model of the Cray-1 supercomputer to house a PC. The Cray-1, the world's first supercomputer, was launched in 1976. It was rated at a peak performance of 250 MFLOPS and had "200,000 integrated circuits, 3,400 printed circuit boards, …
That's pretty damned cool!
Probably has a better graphics card.
Probably more powerful? Definitely more powerful, by at least a couple of orders of magnitude. Well, it will be if he's using *nix. Even if he's installed Windows then he should still be able to get, oh I don't know, at least the power of a ZX80 from it. Perhaps even a ZX81.
Stop besmirching the good name of ...
... the Sinclair ZX80/81!
You'd think so but...
...it was once explained to me that supercomputers are, due to the nature of the way they operate, very difficult to compare to 'standard' computers because it isn't just a case of counting the calculations per second.
In fact, as it was being explained to me I realised that I didn't understand a word and had to stop them, but maybe some other more sympathetic commentard could help?
I see what you did there! You implied that Windows has high overhead and thus Windows computers can't do anything at all! Oh my goodness, I think I'm going to tip over from the novelty of it all. But wait - don't you mean "Winbl0Wz by Micro$haft"? I'm not sure your point is clear enough.
they're very specialised machines built for very specific jobs
In the case of Crays, IIRC, matrix multiplications supported by dot products of vectors [*] up to 64 floats in length (and its float arithmetic was, IIRC, quirky). It bypassed main memory if it was programmed that way, saving a memory round trip, and probably did loads more I don't know about (and I may have got some of that wrong too. I don't do HPC).
If you wanted to do that, it was good (IIRC 80% of a Cray's work was typically matrix multiplications); if not, it was a poor fit. Think of it as an F1 car: no good for going to the shops in a village, but on straight roads with the right surface etc. it works great.
[*] the dot product of two vectors is taking two lists of numbers of the same length, multiplying the corresponding entries and adding them up, hence
<1, 2, 3> . <4, 5, 6> = 1*4 + 2*5 + 3*6 = 4 + 10 + 18 = 32
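In code, the footnote's example is a couple of lines of Python (a minimal sketch; a real Cray would chew through up to 64 of these multiply-adds per vector op in hardware):

```python
def dot(a, b):
    """Dot product: multiply corresponding entries of two equal-length
    vectors and sum the results."""
    assert len(a) == len(b)
    return sum(x * y for x, y in zip(a, b))

# The worked example above:
print(dot([1, 2, 3], [4, 5, 6]))  # → 32
```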
Pretty much, yes
I'm not a HPC guru either, but I do have a couple of SGI boxes, and you can't have those without learning about picking the right computer for the right job.
For instance, the R8000 MIPS processor was at the time scorchingly fast at floating point and featured a large level 2 cache, making it ideal for numeric analysis. At between 75 and 90MHz it could be up to ten times the speed of x86, but integer performance was poor.
Likewise, compare the Indigo with the O2: the Indigo features some fairly fancy (for the time) hardware OpenGL support, whilst the O2 does most things in software. As long as a fully specced Indigo keeps within its hardware constraints (geometry and lighting limits), it'll beat anything but an O2 with a top-end processor. Go beyond those limits, or play to the O2's strengths, such as the unified memory architecture that lets it use all of main memory (up to 1GB) for textures, or its ability to do real-time full-frame compression via the ICE chipset, and it gets trounced.
Can't wait to see it put to use in the Tardis
"is probably more powerful than Cray's original near 40-year-old design"
Given that a single modern Intel chip is going to rate at quite a number of GFLOPs and can address many GB of much faster memory, it's going to take a design of stunning awfulness not to be faster than a 40 year old Cray. Indeed you might even be able to code a Cray emulator on a modern processor that runs faster than the original, albeit that the CRAY-1 architecture is very different.
So there's a project for somebody - code up a CRAY-1 emulator and see if you can make that faster than the original.
A design of stunning awfulness
Nah, they scrapped Vista. Windows 7's not so bad.
IIRC the full main memory fit on a Cray-1 was 1M words of 64 bits, i.e. 8MB, and no address translation to slow things down (so if it doesn't fit in memory, possibly with overlays, it doesn't run).
Actually, seeing how far up the Cray family you can go would be quite an interesting exercise, depending on your starting processor. What does a rock-bottom Pentium (is single core still available?) motherboard manage?
Now, getting hold of a copy of the OS (I'm thinking it was GCOS, but not sure. I know there was a Unix or Unix-like port) to run on it is likely to be a real challenge.
it was UniCOS, not GCOS
The Cray ran a variant of Unix (probably based on sysVr3).
GCOS ran on the front-end machines, which were probably Honeywell DPS-6 machines. These ran the user I/O and job scheduler, IIRC.
i WANT one
this is too fucking cool for words. where are they sold?
imagine the fun if somewhere like pissy world had these on the shelves.
...but will it blend?
You've got to wonder - when these things were in use, was anybody ever actually *allowed* to sit on the multi-million dollar wonder machine?
I would expect my wife's vibrator is more powerful than a 40-year-old Cray!
Some 40 year olds are proper goers.
£10 for five minutes to Seymour.
Yeah but I bet the Cray doesn't smell as bad.
A couple of years ago,
I think we were told that a PC programmed to break the Enigma Second World War codes was doing well if it matched the performance of the original 1940s hardware. Give or take replacing the valves as they pop.
Modern-type generally programmable computers obviously are useful, but designing a machine with the same number of transistors - or valves - to do one particular computing job, a custom microprocessor, is vastly more efficient.
This, I suppose, is why graphics cards are so expensive, and why people buy them. They do one thing and they're very very good at it.
Early IBM PCs had a maths co-processor option too; latterly it was built in, from the Intel i486 on, I think. And they still sold cheaper chips minus the co-processor (the i486SX, or was that something else?), which were said to be just the normal chip with a faulty, or even just disabled, co-processor. Sometimes (I think) you could hotwire the chip and make the built-in co-processor work after all (mostly).
Nice story - but not true
If you Google around you can find plenty of bombe simulators on the internet. These days you can outperform the bombes by a couple of orders of magnitude.

Don't forget, all a bombe did was use a crib (an estimate or guess of a plaintext), then run through all the possible rotor positions and find out which were feasible given the cyphertext and the crib. The bombe finds feasible settings of the rotors by a brute-force search through them. In a 3-rotor Enigma that is only 26^3 = 17,576 positions, which a PC can do in less than a second. The 4-rotor Enigma only has 26^4 = 456,976 possible rotor settings, which again can be brute-forced by a PC very quickly.

Remember that the bombe did not help with finding plugboard positions, mainly because they are actually pretty easy to figure out by hand once you have a full message decrypted with the correct rotor settings. It was the plugboard that caused the explosion in the total number of keys in the Enigma.
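To illustrate the scale of that search, here's a toy crib search in Python. The rotor machine below is emphatically NOT a real Enigma (made-up wirings, no reflector, no plugboard, simplified stepping); it just shows that exhausting all 26^3 rotor offsets against a crib is trivial on a modern machine:

```python
from itertools import product
import string

ALPHA = string.ascii_uppercase

# Three invented rotor wirings (NOT historical Enigma rotors).
ROTORS = [
    "EKMFLGDQVZNTOWYHXUSPAIBRCJ",
    "AJDKSIRUXBLHWTMCQGZNPYFVOE",
    "BDFHJLCPRTXVZNYEIWGAKMUSQO",
]

def encipher(text, offsets):
    """Toy rotor cipher: each letter passes through three offset rotors;
    the fast rotor steps once per letter."""
    offs = list(offsets)
    out = []
    for ch in text:
        c = ALPHA.index(ch)
        for wiring, off in zip(ROTORS, offs):
            c = (ALPHA.index(wiring[(c + off) % 26]) - off) % 26
        out.append(ALPHA[c])
        offs[0] = (offs[0] + 1) % 26  # step the fast rotor
    return "".join(out)

def crib_search(ciphertext, crib):
    """Bombe-style known-plaintext attack: try all 26**3 = 17,576
    rotor offsets and keep those consistent with the crib."""
    return [offs for offs in product(range(26), repeat=3)
            if encipher(crib, offs) == ciphertext]

secret = (5, 17, 3)
intercept = encipher("WETTERBERICHT", secret)
print(secret in crib_search(intercept, "WETTERBERICHT"))  # → True
```

The whole 17,576-position sweep finishes in well under a second, which is rather the point.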
Graphics cards are a little different. They (amongst other things) are designed to scan-line convert triangles very rapidly, in parallel. You can do this acceptably on a general purpose CPU, but the simplicity of the task, combined with the ease of parallelising it makes it a great candidate for special purpose silicon.
Might have been the 486DX, I have this book so will reference!
The author has his own forum, worth the price of the book alone.
I suspect you mean the Colossus which was used to crack Lorenz, not Enigma.
Cray1 is not the first supercomputer.
Cray did design the first supercomputer, but it wasn't the Cray-1. The first supercomputer was the (RISC before the word existed) Control Data Cyber 6600. When he lost out on the design of their next product (he basically wanted to double the then-latest 7600 into a two-functional-unit processor, but lost to Thornton's vector-processing STAR-100 design), he left and built the vector Cray-1 (with financial backing from CDC). Ironically, the Cray-1 outperformed the STAR at vector processing for all but the largest datasets.
more powerful than ,,,
And uses a damn sight less power. Seymour's patents were for the heat pipes on the circuit boards to prevent the beast from melting. And perhaps the robot that did the 60 miles of wirewrap on the backplane.
Now, can we argue about the quality of the Fortran compilers available for modern PCs? Never mind the old beast's pipelined vector architecture, which is exactly what the new guys do not have; just superscalar multiple execution.
FORTRAN -- not just for PCs
I was looking for a Fortran compiler recently (had to compile Octave as a library; yes, I have a good reason, officer) and in my searches found traces of someone looking for a Fortran compiler for the iPhone.
Just a PC?
Now, really, what fun is a computer that size if it doesn't actually do something with all that space? Where are the tremendous, roaring power supplies with fanblades that'll whack your fingers off, or the power cabling that might well leave smoking and twitching any fool who didn't mind the terminals? Where's the neat architecture designed from the bottom up for heavy-duty vector-smashing? That sort of thing is half the fun of big iron. And what's a thing like that even doing speaking the vulgar tongue of an Intel architecture? If it looks like a Cray, shouldn't it /be/ a Cray or at least something pretty similar?
This guy's out for nerd cred from the generation that knows no more than fungible whitebox Windows systems, who think that the beauty of a computer is only what they can see; yet, the majesty of a computer like that ought to be more than skin-deep. When you open it, you ought to be awed and realize that you are looking upon something that once gave rise to the word "supercomputer". Here, if you open it up, you're more likely to frown and say "Oh, it's just another PC. Oh, look, I can fit all kinds of things in this empty space." What's the fun in that?
Running speed of the colossus
"Design of Colossus started in March 1943 and the first unit was operational at Bletchley Park in January 1944. Colossus was immediately successful, and the Colossus – Tunny combination allowed ‘high grade’ German codes to be decoded in hours. This proved immensely useful during the D-Day landings. The parallel design of Colossus made it incredibly fast even by today’s standards, a modern Pentium PC programmed to do the same decoding task taking twice as long to break the code."
So that seems like it was from the late nineties. Processor speeds should be at least 64 times faster by now, along with significant speed increases in other areas.
Having said that, I don't know if the poor old Pentium was given a crib, or if it just had to chew through the whole lot. All that said, I think it's feasible that the Cray-1 could still give a modern PC a run for its money in certain operations.
One word - CUDA.
I'd think properly written software using that would beat Cray-1 and many other vintage supercomputers by many orders of magnitude.
Or is it just funny?
That the scale model probably has more processing power than the original is funny, but is it ironic?
does anyone else remember
when told Steve Jobs bought a Cray to design the next Apple computer, Cray was quoted as having said something to the effect of (it was a long time ago): that's funny, I bought an Apple to design the next Cray
Funny or reasonable
A quick google found more of the story, including Jobs being the only one to ever walk in unannounced to purchase a Cray.
The 6600 ("Cyber" came later and was rarely used with the four-digit model numbers) was indeed the first to be called a Supercomputer. And Cray was an important contributor, but Thornton is usually credited as the designer. The 7600 was Cray's "answer" to it.
As for FORTRAN compilers, the CDC and Cray compilers were very good, but take a look at what Pathscale is (still) doing.
As for Star, the most bizarre descendant of that was the liquid-nitrogen-cooled ETA machine.
Not that bad...
Cray 1 would spank a ZX81... I recall reading it was similar in performance to something like a 386-25. (386SX or 386DX? I don't know hahah...)
"no address translation to slow things down"
That's for sure! This was something Cray was *adamant* about; the Crays did not have memory management until like the 1990s, because the address translation, the page tables, and so on are a performance hit. I was told in a computer architecture class in comp sci that it was somewhere in the vicinity of 10%; later processors have reduced this via on-chip page-table caches and such, but it's *still* a few percent. Cray knew people buying something like a Cray wanted speed, speed, and more speed, and consciously left out everything that'd slow it down.
@Jimmy Floyd & BlueGreen, yes, for sure. Cray was big on vector processors. The Cray CPU was advanced; Cray implemented what is considered the first pipelined, superscalar (2 instructions per clock) CPU in the world, the CDC 6600. Then, after CDC said Cray's suggested successors were too costly, he left to form Cray. So the CPU on the Cray was strong, but the vector processors were where it seriously got its speed. This is similar to MMX (or SSE or whatever) on Intel CPUs, though, where it's great for a few tasks and useless for the rest.
The 386-25 performance was for the main CPU. A 386 would not do 250 MFLOPs, but the vector processors would 8-).
Nominal Cray-1 performance was about 125 MFLOPS; in theory you could double that with parallelism if everything lined up (and the moon was in the correct phase).
By comparison, this i7 can supposedly do 110 GFLOPS, and if all the parallelism worked, the Nvidia graphics card in it (a GTX 480) can do over 1.3 TFLOPS.
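Back-of-envelope on those figures (taking the 250 MFLOPS doubled-parallelism Cray-1 peak from above; the i7 and GTX 480 numbers are the ones quoted, not measurements):

```python
cray1_peak  = 250e6    # Cray-1 peak FLOPS (125 MFLOPS nominal, doubled with full parallelism)
i7_peak     = 110e9    # the i7 figure quoted above
gtx480_peak = 1.3e12   # the GTX 480 figure quoted above

print(f"i7 vs Cray-1:      {i7_peak / cray1_peak:,.0f}x")      # → 440x
print(f"GTX 480 vs Cray-1: {gtx480_peak / cray1_peak:,.0f}x")  # → 5,200x
```

Two and a half to nearly four orders of magnitude, on peak numbers alone.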
Funniest computer comment I ever had, on a run report from a Cray YMP.
"The cray reported memory errors during this shift - please check your results"
What by hand? If I could calculate the results any other way - I wouldn't need the Cray.
I would like to remind people that vector systems were usually front-ended by scalar systems that worked with the vector machine. The combined supercomputing complex is far beyond the capabilities of the primitive system architecture of a "PC". They were supplanted by computing grids for cost reasons, not performance shortcomings.
And yes --- people did sit on the cushions. I was in a group photograph for a completed installation.
Can we ...
... build a Beowulf cluster of these?
Use an Apple to build a Cray
I saw a Cray 1A at the Science Museum when in London a couple of years ago. According to the blurb, Cray preferred working with simple tools - eg a pencil and paper. Supposedly, when he was told that Apple had bought a Cray to simulate their next desktop, he said, "Funny, I'm using an Apple to build the Cray 3"!
I also recall hearing somewhere that the "seats" around the Cray got quite warm and were occasionally used by technicians who smuggled their girlfriends in on lonely nights for some, ahem, companionship ...
GPU > Cray 1
OK, modern CPUs may or may not be faster for the dedicated tasks the originals were designed for, but I wonder how the modern parallel-cored GPUs (Nvidia et al) stack up against the Cray-1 or Colossus for vector processing? 127+ cores with hardware support for vector and matrix processing, and usually around 1GB of dedicated GPU memory...
Any Cuda coders out there want to knock up an app?
...look a little too padded to me; you might fall asleep.
Did anyone get the feeling with the original that someone had cut a piece out or that part had been hauled away for servicing. I always felt a bit was missing and that maybe if it were put back it would go faster.
The only irony I see is how many more cycles we waste. They've become too cheap to care about using them meaningfully. This has been the defining trend of computing for, well, ever since it went mainstream. You know, there are still people entering "demo compos" with nothing but a C64, and they do amazing things with that old, hopelessly underpowered kit. Now, you say, yes, but we have better gear now. Indeed, and we've gone lazy. What we achieve with the hardware doesn't keep up with what it _could_ do when pressed; because it can do so much more, we're too lazy to press it.
And yes, specialisation is why that really old gear can give much more recent general hardware a run for its money. Not too long ago a physics department figured they could buy a super for some seven figures, OR build an apparatus out of DSPs for five figures and run the application faster. This is nothing unusual. Having such hardware conveniently come with even low-end graphics cards is a bit of a novelty, as it means much wider availability. Though it's good to recall that just about all video card manufacturers at one time or another cheated for higher benchmark results, so I wouldn't be too quick to accept the crunched numbers as whole and correct.
Oh, time for a plug (disclaimer: not affiliated in any way): These guys ran cray and cdc gear and will again when they finish moving: http://www.cray-cyber.org/ Sign up and run your programs on a real cray for some of that olde hands-on experience.
been saying that for bloody years...
"when the hardware doubles, the brain loops needed halve". No one needs to get cunning with tighter and better-defined code anymore; just push for a bigger discount on faster kit.
What is fast?
The Cray started out doing 80 MFLOPS, but those were vector FLOPS, like modern graphics hardware, except the Cray didn't drop precision. The Cray had a unique (even to this day) ability to figure out that 0*N took fewer cycles than a multiply of any other pair of numbers.