We'll hand it to IBM's researchers. They think big - really big. Like holy-crap-what-have-you-done big. The Register has unearthed a research paper that shows IBM working on a computing system capable "of hosting the entire internet as an application." This mega system relies on a re-tooled version of IBM's Blue Gene …
This rocks. I've been wondering if this type of thing was worth doing. Apparently IBM decided that it is. This looks so ultimately cool that I think I'll spend my off hours in the pub reading up on how this all works, and seeing if I can tap folk with IBM contacts to find out more. Ah well, momentary geeky rapture over :P
Argh! Now I can't help wondering if these new beasties can be clustered! Obligatory Beowulf moment!
The one important thing the Internet was created with was multiple nodes, so that a single crash didn't wipe out the entire Internet. Hopefully, IBM isn't stupid enough to try to actually run "the entire Internet" on a single supercomputer. I can see it now... one rogue program, and the *entire* Internet crashes (including local stock markets). Truly brilliant.
Also, if government servers (including military ones) are attached to the Internet, they are a *part* of the Internet, and thus would fall under the term "the entire Internet."
Maybe the author of this article would like to quiz IBM on just what constitutes "the entire Internet", and where its bounds are. You know, just as a sanity check. Imagine the aforementioned crash scenario, except that several countries' military networks crashed with it. Can you say World War 3?
Waiter...sanity check, please...
That's all well and good, but will it run Crysis @ 1680x1050 with 4xAA and 16xAF?
Kevin, I'm going to stick my neck out here and say that IBM aren't actually offering to destroy every server centre in the world and incorporate the entire internet into a shiny box at IBM headquarters.
The point is that the machine they're talking about *could* handle the amount of data and processing involved, thus slightly smaller versions could handle some ruddy giant operations, like Google search.
Waiter, "Taking things far too literally check" please...
Writing for PPC
Since when is web-related open source software written for a specific architecture? I could easily run all my sites on PPC if I wanted to, I just don't because x86 servers are more readily available.
a couple of things
1. Note to editor/author: I noticed the Top500 list seems a little out of date; RANGER.
2. If my memory serves me correctly, several years ago there was already research being done that, I guess, could be summarized as "TCP/IP on a chip". As I recall, the idea was to incorporate the fault tolerance etc. of TCP/IP for on-chip communications between system-on-chip components. Perhaps that was a scaled-down version of this IBM proposal?
...linux microkernel... ?
Andy Tanenbaum must be spinning in his grave.
"But Andy Tanenbaum's still alive!"
"This'll kill him."
Could apply to Sun?
I just wonder if Sun could apply at least the design principles of IBM's solution to their own hardware/software stack? I mean, they are already brewing SAMP (Solaris, Apache, MySQL (which they own now) and PHP/Python). Just remove PHP and run Ruby on Rails and JAVA... ('nuff said about Sun and JAVA), utilizing the ZFS file system for management and DTrace for hunting down Agent Smiths that might begin to plague the systems, all on dozens of backplanes filled to the gills with Rock processors. Even if Sun couldn't quite match the brute strength of IBM's Power machine, they could make a hand-waving argument that you would get 85% of the computing power at a fraction of the electrical power requirements. (These numbers are mere speculation on my part... I just know how power-hungry a Power chip is compared to Sun's estimate for the Rock chip.)
All Sun's marketing would have to do is say to a customer: "Hey, we designed the operating system (Solaris), we created JAVA, we own MySQL, we have a vastly superior file system in ZFS and a bitchin' debug tool in DTrace, and we can save you tons in electrical costs to power the beast and to cool it. IBM just tweaks Linux, doesn't own MySQL, doesn't own JAVA, would have to shoe-horn ZFS and DTrace into Linux, and would cost you an arm and a leg to pay for the light bill. And oh... by the way... we sell the whole kit: hardware, software, and service plan, for less than IBM." I think it could be a compelling play for Sun.
It's not easy by any means to parallelize software. Linux applications have never been that good at utilizing parallel hardware. Industrial database systems like Oracle and MS SQL Server are among the very few that do it well.
Look at what people have been doing with Unisys servers for a long time now to see what really works on parallel hardware.
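To make the point above concrete, here's a minimal Python sketch (names are mine, not from the article) of the one-off restructuring that parallel hardware demands: the workload has to be split into independent chunks before throughput can scale with node count. Note that in CPython the GIL means a thread pool won't actually speed up CPU-bound work; this only illustrates the structure, and a real deployment would use processes or native code.

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(bounds):
    # Each worker handles an independent slice -- no shared state to contend on.
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    # The one-off cost: partition the workload into independent chunks...
    step = n // workers
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    # ...after which the chunks can run on as many nodes as you have,
    # and only the final combine step is serial.
    with ThreadPoolExecutor(workers) as pool:
        return sum(pool.map(chunk_sum, chunks))

assert parallel_sum(1_000_000) == sum(range(1_000_000))
```

The hard part in real software is that most workloads don't decompose this cleanly, which is why so few applications manage it.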
Calm down, dear, and read the article again…
The Register has unearthed a research paper that shows IBM working on a computing system capable "of hosting the entire internet as an application."
NOTE the word of importance here: “capable”.
Most of those web-server applications are absolutely unoptimized. I mean, Perl, Python and PHP aren't compiled languages, and Apache is a large and complex piece of software.
One extreme example of what you can do is Opentracker. It's a BitTorrent tracker written in highly efficient C. I know C is probably one of the worst languages, especially when it comes to string processing, but they seem to have made it work. I think in their talk they mentioned they could serve up to 90,000 tracker requests per second with no noticeable CPU utilisation. Other projects use trackers written in Python, or some with Apache, PHP and MySQL.
So I guess, unless you already wrote your web services in highly efficient C or Pascal, it would be way smarter and cheaper to just get your software rewritten. I mean, you can get a competent programmer for a month for less than two servers cost.
RE: ...linux microkernel... ?
Actually, I would think that Andy would be pleased; it's Linus who is getting his clock cleaned here. Maybe Tanenbaum is right about how to do it so it can scale to these proportions. IANACoder, but the idea of "self-healing" and keeping all drivers etc. in user space so it doesn't crash does sound more solid. Perhaps Linux as we know it, with the macrokernel, will eventually be left to just rule the desktop.
OT: Is it just my imagination, or does Minix 3 seem to have stalled? Apart from MPlayer 1.0rc1 coming online in December, most of the action seems to have stopped at the beginning of last year.
Perhaps someone over there ought to think about centralising a couple of these things to cater for their elections. One in each camp, instead of a Diebold or set of Diebolds in every voting station.
That way, when the inevitable fraud charges erupt, both parties would have access to the computers. Be nice if they were in flood/hurricane/tornado/earthquake-proof centres too.
Somewhere in England perhaps. OK, not England; at least US IT tends to be secretive. If it were in England, everyone with access to a bug would be in there. They have the Freedom of Information Act and we just have an act.
Ticket 42 Ride ........... Helter Skelter Revisited.
"It is interesting to note that once the cost has been paid to parallelize a workload, the performance of individual nodes becomes irrelevant compared to overall throughput and efficiency." ..... which would then only be really, a virtual cost, as it is really an asset/vital component.
And the parallelized workload manager would Realise the implications and opportunities in that irrelevance. For it allows Privileged Access to the IBMCore.
Stands a good chance...
...of being able to run Vista, I'd say.
Are you aManFromMars in disguise?
So, rather than trying to live down the famous Watson quote on the world only requiring five large computers, IBM are working to make it happen?
You've got to admire the lateral thinking and long term planning at work here....
Does this mean that pictures of Paris will download faster?? :-)
One system versus distributed
Until we have quantum connections between systems, the physical fact is that one interconnected system will be more efficient than any distributed one. Business-wise it may not always be true, but if you need raw power it is. About that parallel software, where Oracle and MS SQL were mentioned in the sense of distributed databases: Oracle is decent, MS not so much yet, and compared to HP NonStop (aka Tandem) SQL systems they both pale. Of course, HP NonStop costs a lot more, but it also gives more. Anyway, we are not talking databases here but pure, raw information processing, which is a big difference. IBM has always been good at that; the question is really not how fast one processor is or how fast you can make one query, but how much information you can process in a time frame.
"multiple nodes so that a single crash didn't wipe out the entire Internet. Hopefully, IBM isn't stupid enough to try to actually run "the entire Internet" on a single supercomputer."
Firstly, IBM don't plan on actually running the whole internet from one box.
If they did however there can't be many boxes better than BlueGene for running it.
The redundancy is massive; these things never ever crash, and that's running them as effectively one large processor. Running them with each processor doing its own thing would add huge levels of redundancy to the BlueGene, as a processor could crash but just be picked up by one of the other 67.1M cores (as Google does with its clusters at the mo).
I did a little work for the guys who were maintaining the New York Stock Exchange supercomputer (it had just retired at the end of the '90s, having spent 30 years running on average at >75% capacity without a single minute of downtime).
That was a '70s machine with a fairly ropey processor (Alpha). Nowhere near the sophistication or redundancy of a BlueGene.
The point is actually...
IBM have been working on "On Demand" computing for ages - take AS/400 or iSeries as an example. The ability to call on "more power" when you need it and only pay for it when you need it.
Having worked in the retail industry, I can tell you that for 6-8 weeks every year we NEEDED an iSeries i520; the rest of the time it was twiddling its thumbs somewhat... This project IS about saving power and money by buying a machine more than capable for everyday needs, but also one that has the ability to scale to whatever demand is placed upon it. Think data centers and Application Service Providers. Remember when ASPs were springing up every week in the late '90s? Well, now we actually have affordable bandwidth to support this, or Software as a Service. Microsoft has been wanting to go the SaaS route for a while now, and this sort of technology empowers it.
As far as the comments regarding "wasn't the internet meant to be a multi-node network..." go: it still will be! There would be a few of these monsters dotted around the world, thus creating a much more dynamic and scalable network of server capacity than we currently have.
On the Sun and parallelism aspects... Yes, Sun do make great boxes, but they do have a habit of being a bit Apple, that is, not really working well with anything else. I once had a problem where I couldn't get a Sun box to send mail using PHP. The cause was the stdio library. The solution: to compile and install sfio (an IBM library) instead. Unfortunately, that was against the AUP of the provider, so we moved to a Linux box instead. Parallelism is a very complex problem, and yes, parallelism of interpreted languages is a problem. But seriously, we are not looking at this happening tomorrow. PHP version 8 could be moved to a compiled language, which would not only make it faster but, dare I say it, less susceptible to security issues and bugs?!
I think that IBM are perfectly suited to be the ones undertaking this research. After all, this looks to me to be right up their street. iSeries has had logical partitions capable of running disparate OSes for a while; OS/400 or i/OS simply abstracted the layers and provided control of the resources. They have ample experience with Linux and have been advocates of it for a long time. They are also the mainframe and supercomputer kings. Roll on!
The Inmos transputer was probably the first microprocessor design for large scale multi-core systems: see http://en.wikipedia.org/wiki/INMOS_Transputer and this quote: "In fact, the most powerful supercomputers in the world, based on designs from Columbia University and marketed as IBM BlueGene, are nothing less or more than real-world incarnations of the transputer dream."
We are still waiting for a real world incarnation of the Inmos concurrent programming language Occam: http://en.wikipedia.org/wiki/Occam_programming_language
Just Do IT .....
Theoretically it is Perfectly Possible for one Virtual Machine/Global Operating Device to Run [with] the entire internet, whenever IT is Plugged into the Grid and Communicating. All IT needs to do is 42 Provide Attractive Content for Any and All Services to Migrate to ITs Stores and Tales of Story Networks.
And with the Simplest of Wireless Operating Devices, IT wouldn't even need to be Plugged In to Dispense the Wealth of such Wisdoms.
And Paris because her Story is not Gory or Hoary and more Giving than Whorey and those are a InterNetworking Network to Savour with Favour.
"We are still waiting for a real world incarnation of the Inmos concurrent programming language Occam: http://en.wikipedia.org/wiki/Occam_programming_language" ... By Richard Posted Wednesday 6th February 2008 11:20 GMT ..... What do you Imagine IT to look like, Richard,..... Sound and Grounded in Favours 42 Savour? :-)
Finally, the world will have enough power to find where the end of the Internet is! I can't wait to see what the end of the Internet looks like!
On the other hand, I bet Rick Salomon has seen something cunningly similar, so I'll pass.
The Price is the Problem
Despite all the hype, this stuff is so expensive it will never catch on, except with over-funded bureaucracies.
The Transputer and Occam used to be real-world, back in the mid-to-late '80s. Since then, with the dominance of x86, C and C++ (and, to a lesser extent, Unix/Linux/Solaris and the NT-parentage Windows and Mac OS, which are all similarly stone-age 1976 in academic terms), we have stagnated.
I played with Occam. Unlike C++, you can do most of what Occam does in Modula-2, Modula-3 and Oberon. They genuinely support threading and multi-CPU in the language, not just via *nix/Win32 APIs.
Windows 95 and Linux have held IT back by nearly 10 years. All the worst aspects of Vista derive from the Win95 concepts, not from the underlying NT architecture, which is increasingly savaged by MS people who obviously don't understand Cutler's work. OS X, Solaris and Linux are 1976 OSes with post-millennium glitter glued on. Windows Mobile and Vista are almost terminal.
67.1 million cores...
since 67,099,950 of them would be used to surf p0rn, this would have a very interesting browser cache...
Market share vs. market size
I can imagine the discussion with marketing...
Researcher (proudly) "We've come up with a computer that can run the entire Internet".
Marketing (orgasmically) "Ooh, ooh, that's incredible. How many of these do you think we can sell?"
Researcher (sheepishly) "Ummm, errr - one"
Marketing (rolls eyes, sneaks out of the room)
This is all well and good...
... but how secure will it be?
Will some clerk in the IBM admin offices be able to mail the Internet out on a couple of DVDs?
The one with the infinite-capacity pockets, please.
Huh, are we speaking about the PowerBook?
Apple abandoned PowerPC, a "toy" compared to real Power chips and those insanely multicore, low-power chips.
It doesn't make a bit of difference to the industry IBM targets. IBM happily sells Power-based blades, real workstations, mainframes and enterprise servers.
x86 was always popular compared to Power, but we are speaking about the "real deal" here, not some guy's laptop or personal PC. These machines run AIX or a massive Linux anyway; it's not as if anyone will sit down and install OS X.
I am not surprised that The Register sees the Power architecture as a fading thing after Apple gave up. The Ubuntu guys made the same mistake and dropped official support. Guess what? Nobody in the enterprise cares about Ubuntu.
Funny you should mention a sanity check, nutcase.
Do you mean that everybody in the enterprise cares about Yellow Dog?
The internet is not the web!
It seems like you're confused on that issue, or just exaggerating for poetic effect (and you're supposed to be a journalist, not a poet).
There's a lot more stuff going on on the internet than web servers. adnim's joke about Crysis is actually spot on - this thing surely won't be replacing the dedicated Crysis servers that are a part of the internet.
Re: The internet is not the web!
Please pay more attention to the bouncing ball.
You could have one machine per year of Internet.
Then you could surf the Internet and choose the year.
Could be fun and educational to surf the 2008 Internet in the year 2050
Processor architecture is a non-issue for web apps
From article: "but how many folks doing open source web work will write code for the Power architecture?"
Why should any of them care about the processor architecture? Web programming is done mostly with scripting languages and Java, which don't care at all what processor they run on, as long as their implementation is ported properly. And Power ports of all popular languages used on Linux have been in production use for ages.
Even most C code is portable with just a recompile, unless the programmer has done very stupid things, like union hacks with hardwired assumptions about the processor's byte order.
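To make that byte-order trap concrete, here's a minimal Python sketch (the variable names are illustrative, not from the article) of the difference between relying on native byte order, which is the same hardwired assumption a C union hack bakes in, and deserializing with an explicit byte order:

```python
import struct

# Four bytes off the wire, e.g. a length field in network (big-endian) order.
data = bytes([0x01, 0x02, 0x03, 0x04])

# Native order ("="): the result depends on the host CPU -- 0x04030201 on
# little-endian x86, 0x01020304 on big-endian Power. Code that relies on
# this value breaks on a recompile for the other architecture.
native = struct.unpack("=I", data)[0]

# Explicit big-endian (">"): identical result on every architecture.
portable = struct.unpack(">I", data)[0]
assert portable == 0x01020304
```

The same rule applies in C: assemble the value with shifts (`(b[0] << 24) | (b[1] << 16) | ...`) instead of type-punning through a union, and the Power port really is just a recompile.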
@Market share vs. market size
I think the idea (if you missed it and aren't simply jesting) would be that they wouldn't sell the machine but space on it. Connection, storage, bandwidth, applications, support, maintenance. That kind of thing.
I wouldn't be surprised if vast sections were virtualised to allow clients whole servers. RDP-based thin clients... lol, you could have whole offices running off the thing if the office had the bandwidth.
You'd probably want three though - America - Europe - Asia.
Ah well it's an interesting idea.
This is just simple application level distributed computing...
This is exactly the architecture Google has been using its entire life. IBM is just trying to make it available to everyone as a package, instead of the do-it-yourself design that Google uses. From a computational standpoint, there is no difference between using an N-core IBM computer and using N single-core PCs (apart from the fact that PCs are cheaper).
Speaking from experience...
I have direct experience of the clusters-versus-SMPs tussle in a large commercial organisation, and my unambiguous conclusion is that the TCO of clusters is generally greatly underestimated. There are applications where clusters are worth considering, but overall SMPs are by far the better bet. Yes, you pay more up front. You also save yourself a heck of a lot of subsequent headaches.
@Aremmes - The End
You needn't wait any longer.
BEHOLD! - http://www.endoftheinternet.com/
"That was a 70's machine with a fairly ropey proessor (Alpha)."
Is that the Dec Alpha processor launched in 1992?
Enterprise really cares about YDL
YDL is one of the "Rolls-Royce" choices for massively parallel high-performance computing running on Power processors. IBM, massively parallel computing, thousands of Power processors: those aren't things for your usual "awesome" Linux flavour which drops support for CPUs based on fashion.
Why the sub-heading "Ruby on Rails on Rails on Rails on Rails"? RoR was mentioned only once in passing.
Teh interweb on one server?
I hope you'll be taking nightly backups