It is with some measure of awe that we introduce you to IBM's iDataPlex server. The system itself is quite remarkable: IBM has reworked its approach to rack servers, allowing it to place twice as many systems in a single cabinet. The strategy centers on delivering the most horsepower possible in a given area while also reducing …
Crushed by Bubble 1.0
I would argue that RLX was affected more by the demise of the DotComs in 2001 than by bigger competitors coming in and stealing their customers with better, faster, cheaper products. The initial products from the tier one server makers frankly sucked, and it took them several more iterations before they got it right. In that time the market had a chance to recover, which is when the blade revolution began in earnest (around 2006). But by then all the products were aimed at the Enterprise market, leaving a vacuum at the low end. The Web 2.0 properties have therefore, for the most part, reverted to buying the same white box servers that were the bane of the data center back in 1999.
As Ashlee knows, I have been saying for years that someone could/should build a modern day version of the original RLX System 324 product – using current technologies and new design – and be very successful. That time has finally arrived, I guess.
Ashlee - do your homework please
Yes, this is beyond the RLX / Verari, but there's nothing new here. IBM is second to market. My guess is that IBM has hired some south-east Asian ODM companies to build them a custom rack of servers, put the IBM badge on it and sell it.
Take a look at what Dell has been doing FOR ALMOST TWO YEARS with Data Center Solutions. You want high density servers for a web farm? You've got it. You want a HPCC? No problem. You want mega high capacity, low cost storage? Coming right up.
RE: Chris Hipp
I agree that Rackable got caught out by the downturn post-Y2K and the big vendors were more jackals feeding at the carcass than victorious predators, but I think the enterprise-class blades have started to look down-market too - with virtualisation, you can squeeze plenty of small servers onto one blade, which means many more server images in the one rack, and you don't have to buy all the RAID options, dual-disks, etc, if you don't need them. The big vendors tried low-cost and low-power blades like the HP BL10e and Dell 1655, they just didn't make enough money from them. Maybe IBM has got it right and now is a good time for a low-cost, small-rack system, but whilst iDataPlex does look like very interesting engineering, I'd still like to compare costs/prices for a pair of iDataPlex racks against blades with VMware.
Re: Ashlee - do your homework please
Do my homework? How dare you!
I drank my way out of college specifically to avoid that.
But even with my booze addled brain, I can see that this is not IBM ordering some SKU from Taiwan's finest. Living, breathing engineers were involved here. I've talked to them.
If you look at the case, you can see that they save on fans and power units at the very least. It's also half the size of anything Dell can offer, which, as mentioned, affords some unique data center rearrangement.
Dell's Cloud program does have the variety of motherboards and liquid cooling available, but I think you're kidding yourself if you think that equals this box.
Back to the Jack. Burp.
CTO Modular Systems Development
No we did not pay any ODM to design iDataPlex. I led the design team and we did this from the ground up. We have filed over 40 patents on iDataPlex...
Has it come down to measure the novelty of an idea or solution by the number of patents filed? This is pitiful...
That's a bit of a harsh comment! I applaud the bloke myself, for having some pride in his work.
Too many companies these days shift dubious, recycled, ten-a-penny* <Insert 'developing' nationality here> products; then we slam someone (admittedly from IBM) for the closest thing to craftsmanship I've seen on the market for some time.
Good job, Gregg! Go lick a HTC smartphone, Matt and 'trottel...
*five-a-cent, for our American friends.
RE: El G
Erm... I said it was interesting engineering, congrats to Mr McKnight on what must have been a challenging design brief, but what I said was I wasn't sure of the business model when compared to standard blades with virtualisation. And as to "Go lick a HTC smartphone", I've used those and prefer a Blackberry, thanks!
Besides, shouldn't we have the Sunshine crowd jumping up and down telling us T1/T2 are the green kings of webserving, not mini racks (actually, Sun do have a good point on the webserving bit)? ;)
And only forty patents? What, were the IBM patent trolls on holiday? Most new IBM kit seems to come out with at least 300 patents pending, etc! :)
RE: RE: El G
Sorry, may have been a little rash there. Just that there were a number of quite harsh comments being bandied around, and it was nice to see an engineer showing pride in their handiwork.
@Matt - Sorry, kinda skimmed the comments and mixed you up with the AC above. Still an inexcusable snipe by moi, though :(
I have to agree that this looks like some interesting craftsmanship and some elegant engineering.
However, history is riddled with elegant engineering that never paid its way (Concorde being the classic example).
I expect that IBM will sell quite a few of these. After all, loyal customers will buy their vendor's products, even if they aren't optimal. Never discount the persuasiveness of their sales force, either. And then there's always the outsourced, government and hosted business. I expect that IBM will find a way to plop a few in there.
HPCC doesn't count - most of those are good publicity but don't make any money for anyone.
But will it be more efficient than alternative designs? Will it offer class-leading power and cooling figures, versus buying a Supermicro double server rack, filling it up, and popping it inside a self-enclosed Liebert rack?
If it's more expensive over 3, 4, or 5 years for acquisition plus operating costs versus a competitor's dense rack servers or blades, why would anyone want it?
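To make that comparison concrete, here is a minimal sketch of the back-of-the-envelope arithmetic being suggested: acquisition cost plus electricity over the service life, with a PUE multiplier for cooling overhead. Every figure below (prices, power draws, electricity rate, PUE) is an illustrative assumption, not a real iDataPlex or blade number.

```python
# Back-of-the-envelope TCO: purchase price plus electricity over N years.
# All numbers here are illustrative assumptions, not vendor figures.

HOURS_PER_YEAR = 8760

def tco(acquisition_usd, power_kw, years, usd_per_kwh=0.10, pue=1.8):
    """Total cost of ownership: acquisition plus energy, where PUE
    scales IT power draw to account for cooling/distribution losses."""
    energy_kwh = power_kw * pue * HOURS_PER_YEAR * years
    return acquisition_usd + energy_kwh * usd_per_kwh

# Hypothetical rack-level comparison over 3 years
dense_rack = tco(acquisition_usd=250_000, power_kw=25, years=3)
blade_rack = tco(acquisition_usd=220_000, power_kw=30, years=3)

print(f"dense rack: ${dense_rack:,.0f}")
print(f"blade rack: ${blade_rack:,.0f}")
```

With these made-up inputs the cheaper-to-buy, hungrier rack still edges out the denser one over three years, which is exactly the point: the answer flips depending on power draw, tariff, and holding period, so the sums have to be run with real figures.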
[coat, because if this thing is as cool as it's claimed, I may need to wrap up warm in the data centre]