Blade servers, virtualization software and fancy accelerators might be all the rage in the server business, but Google doesn't want any part of the hype. Google will continue crafting its own low-cost, relatively low-performing boxes to support its software-over-the-wire plans. The ad broker looks to focus on lowering energy …
No surprise here.
VMware and friends are really useful for those who want to run several isolated applications on a single box. This makes sense especially when none of those virtual systems would, on its own, consume 100% of the machine's resources.
On the other hand, companies such as Google have a greater need for grid/cluster computing, where a single application needs more than 100% of the power available per machine.
What is the point of subdividing a system only to join the pieces back up as a cluster? It would be possible, but generally not all that useful.
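The distinction above boils down to simple capacity arithmetic: if each app needs a fraction of one box, consolidate them; if one app needs more than one box, scale it out. A minimal sketch, with made-up numbers and function names (nothing here reflects Google's or VMware's actual tooling):

```python
import math

def placement(app_demands, box_capacity=1.0):
    """Decide virtualize vs cluster for each app.

    app_demands: CPU demand per app, measured in units of one box's capacity.
    An app needing 0.2 boxes is a consolidation candidate; one needing 40
    boxes can only be served by a cluster.
    """
    plans = {}
    for name, demand in app_demands.items():
        if demand < box_capacity:
            plans[name] = "virtualize: share a box with other small apps"
        else:
            boxes = math.ceil(demand / box_capacity)
            plans[name] = f"cluster: spread across {boxes} boxes"
    return plans

print(placement({"crm": 0.2, "wiki": 0.1, "search_index": 40.0}))
```

The same arithmetic explains why mixing the two rarely pays: slicing a box into VMs and then re-aggregating those VMs into a cluster just adds overhead on both sides.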
How soon we forget our history
There is nothing (historically) new here. Back in the dark days of the 1970s, AT&T built UNIX for essentially the same reason: they needed LOTS and LOTS of switches, and figured it was cheaper to develop a small, portable operating system that could be used to create a switch from any piece of hardware. AT&T went on to develop the "B" series of computers (1B, 3B, 5B, 10B and 15B) to allow them to deploy any size switch anywhere they wanted, and put UNIX on all of them. That UNIX also worked as a general-purpose OS was a side benefit (part of the "D" in "R&D"), but the "B" series was serious "iron" in its day.
When I worked in banking during the 1980s many large banks did the same thing. At Security Pacific we had purchased a mish-mash of hardware from General Automation, Control Data and Interdata and built our own custom branch automation system, including our own highly customized version of the GA CONTROL operating system. MUCH cheaper, faster, and, most important, more USEFUL than any solution from IBM, Siemens, NCR or other "turn-key" vendor.
While Google's work is impressive by any standard, we in the IT community need to keep in mind what they do and WHY they need to do it that way. Custom technology ALWAYS has a place at the extreme limits of any human endeavor. We don't think twice about spacecraft having custom computer and sensor systems, nor do we think twice about custom-built construction equipment for road building or skyscraper construction. We don't even consider all the "custom" computers that are embedded in cars, trucks, dishwashers, microwaves or cell phones - again, something that can't be done with "COTS" (Commercial Off-The-Shelf) solutions.
The only thing that makes Google catch our eye is the scale: whole DATA CENTERS that are composed of custom hardware for a single purpose. Yet, to my "old guy" mind, this is nothing I've not already worked with nearly a dozen times before.
The most important lesson we can learn from Google's endeavor is how Google thinks: if the box won't work, think up a solution outside of it.
(I should stop here...but...)
Our biggest problem today is trying to force solutions to our problems to fit into nice, neat "boxes" of products that we can conveniently buy off the shelf at Wal*Mart. (Or HP. Or IBM. etc.) Most companies won't even consider an idea that doesn't have more vendor logos on it than a NASCAR racer. Build it in-house, for one purpose? You've got to be CRAZY!
And in most cases it's NOT necessary. But, if you do your planning and design right, and you DON'T see a solution that fits, and the prize is big enough...anybody got a hammer to help me break this box?
And excellent comments from Brett Brennan above.
A redundant array of low cost computers
I'm not sure, but Yahoo could have been there first with their platform: huge clusters of BSD-powered machines, certainly eight years ago if not ten. A data-centric system with lots of parallel processing is the answer for any search engine, and getting the maximum amount of performance out of commonplace hardware seems to be the best way to get value for money. Google's approach has been pragmatic and revolutionary. Have a search for the Google File System, an entirely in-house solution for the distributed storage of large amounts of data in a massively redundant way. This and the Googleplex are far more interesting to the engineering geek than the front end's voracious hoovering up of IP, and will probably be as important in the company's future development.
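The core idea behind that kind of massively redundant storage is simple: every chunk of a file is copied to several independent machines, so losing any one box loses no data. A toy sketch of replica placement in that spirit (the function name, replica count, and hash-based spreading are illustrative only, not the Google File System's actual design):

```python
import hashlib

REPLICAS = 3  # illustrative replica count; any single failure loses nothing

def place_chunk(chunk_id, machines):
    """Pick REPLICAS distinct machines for a chunk.

    Hashing the chunk id spreads chunks evenly across the fleet; taking
    consecutive machines from that starting point keeps replicas distinct.
    """
    digest = int(hashlib.md5(chunk_id.encode()).hexdigest(), 16)
    start = digest % len(machines)
    return [machines[(start + i) % len(machines)] for i in range(REPLICAS)]

machines = [f"node{i:02d}" for i in range(8)]
print(place_chunk("file.dat:chunk0", machines))
```

With cheap commodity boxes failing all the time, this kind of scheme trades disk space for resilience, which is exactly the value-for-money calculation the comment describes.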
They don't need virtualization. Virtualization is when you try to run multiple disparate applications on one machine. They're one application on multiple machines.
It's interesting that HP is using blades instead of that space heater called a Superdome.
Horses for courses
Google's requirements are far different from the average company's. In a large utility, we run over a thousand servers, but only 75% are physical servers; the rest are test and prod VMware servers running on ESX hardware. We had power limitations and had to rent new datacentre space, and this kept our in-house size/heat/power constraints down, as well as reducing rent on growth in the DCs. It also helps with eliminating old hardware on servers that the business can't get rid of - physical to virtual, and you eliminate issues with the physical almost entirely (still the odd service issue).
If it's right for your organisation, it's a real boon. But if it's not a fit, then why follow hype? Vendors will tell everyone it's for them. That's what IT is for - not to put in what's new, but what's needed.