Wondering how valid the issues brought up here are....
blogs.sun.com/jsavit/entry/once_again_mainframe_linux_vs
IBM loves to put a new spin on the mainframe to keep the legacy platform looking fresh and the doomsayers at bay. And nowadays nothing makes a technology look more brand-new than a fresh coat of green. In the first move since announcing the $1bn Project Big Green initiative in May, IBM plans to make good on its eco-friendly …
Looks like this is one coffin that just can't be nailed down - I've heard the (client/server) vs. mainframe debate for the last 15 years and seen enough consolidate-distribute cycles to know that they're not going to disappear any time soon. Dunno why people can't just accept that they have their place.
IBM is not using its wonderful (and apparently green) ultra-high-density blade systems for this consolidation??? After all, they have lots of processors in a really small box and you can virtualise your machines on those.
Every day another vendor tells us that blades are the solution to all our problems, so long as those problems are how to get the air to plasma temperatures in our data centres. Of course this power density is our excuse to buy the next 'green' solution and deliver in-rack water cooling, because that is the only way to stop the blades frying. Best not forget the few million for that really expensive box for your disks to live in either, or the floor reinforcements to hold it all up.
A cynic might wonder if this all started when somebody in the IBM canteen said "What are we going to do with all those mainframes nobody wants? It is going to cost us a fortune to dump them now that the WEEE directive is in force"
Seems to me that if the old machines are just going to be put into service elsewhere, then IBM is really ADDING to global warming with this move...
Of course, if the machines are being dismantled and the components recycled, then good on them...:)
I live a couple of towns away from where IBM got its start (Endicott, NY), and those guys don't exactly have much of a green rep around here.
In 1979, 4,100 gallons of methyl chloroform were spilled by IBM. While investigating that, a large plume of trichloroethene, tetrachloroethene, dichloroethane, dichloroethene, methylene chloride, vinyl chloride, and Freon 113 was discovered in the groundwater. Later on, benzene, toluene, and xylene were also found.
In 2002 the NY Department of Environmental Conservation forced them to reinvestigate, at which time TCE, among other contaminants, was found in the indoor air of buildings around the IBM complex.
To hear them now tout their "green credentials" sounds just a bit ludicrous to me.
What is IBM on? “In addition to cutting the sheer amounts of units to power, IBM said moving to mainframes will also reduce the cost of software acquisition. Software is often priced on a per-processor basis.” Umm, IBM charges per core for most of their software and if you want to use the Power6 you will have to pay more for the software. “Power6 server software will require 120 Processor Value Units (PVUs) per core.” As can be read here:
Way to go IBM.
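For scale, the per-core PVU maths is easy to sketch. The 120 PVUs/core figure is from the quote above; the per-PVU price below is a made-up placeholder, since actual PVU pricing varies by product:

```python
# Sketch of per-core PVU licensing cost. The 120 PVUs/core for Power6
# comes from the quote above; PRICE_PER_PVU is purely illustrative.

def license_cost(cores: int, pvu_per_core: int, price_per_pvu: float) -> float:
    """Total software licence cost under per-core PVU pricing."""
    return cores * pvu_per_core * price_per_pvu

PRICE_PER_PVU = 50.0  # hypothetical $/PVU, NOT a real IBM price

# A hypothetical 16-core Power6 box at 120 PVUs/core:
print(license_cost(16, 120, PRICE_PER_PVU))  # 16 * 120 * 50 = 96000.0
```

Whatever the actual per-PVU price, the point stands: the bill scales with cores, so "fewer units to power" doesn't automatically mean a smaller software invoice.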
I have now some large SGI boxes eating up floor space in a small server room. They are up to 10 years old and they won't die. I want them to die. Die! Die! Die SGI boxes! Then I can replace them with inefficient heat generating 1U x86_64 machines and help destroy the planet. Destroy! Destroy! Destroy!
In a past job the IT system ran on an IBM mainframe that was perhaps 30 years old. The disk platters were 2 feet in diameter and the disk cases were cast steel tubes with glass sides! For some incomprehensible reason when that mainframe *was* replaced, IBM insisted on the return of the disks. I wanted one to use as the pedestal for a small coffee table.
Seems to me stability is less of an issue with mainframes and minicomputers than with PC systems.
There will be 30 mainframes spread over 6 datacenters, so 5 towers per datacenter on average. Probably some fail-over is built into that, and the z mainframe is very redundant, self-diagnosing, and just about the top of the IBM line for all the components.
IBM may even have some spares on hand, and maybe a technician or two, to replace anything that the z flags as a potential problem, well before the component breaks and causes a crisis. Please note that a zSeries doesn't just break like commodity servers do. If properly maintained, it is very unlikely to fail, except maybe if hit by a car, or submerged, or something equally drastic.
I doubt they would charge themselves for their own software, but even for a customer, replacing 3,900 servers with at least 1 core each (usually 2 to 8 cores per server, these days) with 30 servers having several cores apiece makes financial and administrative sense, not to mention volume discounts, leasing and financing options, 24-hour technical support (the real stuff, not script readers), training and documentation, management and monitoring software, fail-over and replication assistance, and lots of other goodies that IBM would throw into a deal this size (z mainframes are NOT cheap).
IBM possibly knows something about efficient ways to virtualize workloads on mainframes. They will likely share any such knowledge with customers who are willing to buy one or more zSeries boxes.
I would say they're thinking much more clearly than people throwing money at hot, proprietary blades, backed by hot, proprietary storage arrays, connected by hot, proprietary storage switching gear and a hot, proprietary gigabit switch/router fabric. Add licensing and administrative overhead, and I'd say the zSeries starts to look quite attractive. The z also scales rather well, is upgradeable, and has a potential useful life of up to 20 years (maybe more).
If you're going to go proprietary, at least get the stuff that makes life easier.
People have forgotten how mainframes work. The vast majority of today's PC users are most probably ignorant of what mainframes even are.
A PC is a motley collection of basic hardware components. Its advantage is that it is easy to make components for it, which is why the hardware industry is wallowing in innovation. PCs are used for single tasks most of the time, and the user generally cannot be bothered to know what is actually going on in the thing.
A mainframe is a delicate, precise assemblage of components tailored specifically to suit much more demanding hardware specifications. Mainframes can warn you in advance if just about any one of their components is going to fail, so you can replace it before it dies. Mainframes generally allow you to unload and switch off the area you do maintenance in, so you do not need to shut down the mainframe to replace a component (ever heard of hot-swappable?). With RAID, PCs have come to learn what hot-swappable means in the disc arena (though their users haven't). Mainframes can do that with just about anything but the backplane (a motherboard, in other words).
In other words, I'm sorry to destroy your illusions but when the 3900 PCs are turned off, the few mainframes will easily take up the position and you're going to wait a long, loooong time before getting a day off because one of the mainframes went down. If it is competently managed, you might never see it be down during your entire career.
The rest of you need to find out more about what a mainframe is these days. They never go down without lots of advance warning, and even then the MTBF is measured in decades. We've never had a hardware outage on ours that we didn't want (we only have one a year for microcode upgrades - all other upgrades have been "non-disruptive").
The economics speak for themselves. About the only reason they aren't taking over the world's datacentres is politics; no surprise there.
How do 3900 machines use the equivalent of a small town's worth of power?
I mean, seriously? A small town might have, what, 5000 houses? That's not even one server per house. And I'm pretty sure my house uses more than one server's worth of electricity, even taking air-con into account.
Unless they're defining a small town as one with about 10 houses, of course.
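A rough sanity check, with every figure assumed (the server draw, cooling overhead, and household usage below are my guesses, not anything from the article):

```python
# How many households' worth of electricity might 3,900 servers use?
# All numbers are assumptions for illustration only.
servers = 3900
server_kwh_per_day = 11.2   # e.g. 400 W for 20 h, plus ~40% cooling overhead
house_kwh_per_day = 30.0    # ballpark for an average household

houses_equivalent = servers * server_kwh_per_day / house_kwh_per_day
print(round(houses_equivalent))  # roughly 1,456 households
```

So under those assumptions 3,900 servers come out at well over a thousand households, which is at least in "small town" territory, if a generous one.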
"How fun to instantly have 3,900 servers go down when the mainframe breaks down! Day off everybody..."
Errr, you don't seem to get what a Mainframe is about, do you?
Your System Z Fridge automatically dials support when a component fails (and a spare has taken up its job).
Then 1 or 2 days later you get a call from reception saying "IBM bloke arrived" and you're like "duh, we were waiting for an IBM bloke?"
You never even *knew* one of the components failed because the Mainframe stayed up, as it should.
Then the technician changes the component, be it a processor, a card or whatever, without powering the machine down.
Ever tried that on, I don't know, any other system at all?
Hot swappable CPUs? Not anytime soon on other machines...
How large is a small town? 5 houses? 500? I don't know.
This is a ridiculous unit, just like the 'football pitch' (a variably sized thing even within FIFA football, and the NFL and other rulesets use different ones; besides, hating team sports, I have no real feel for its size).
Just gimme energy consumption in kilo- or megawatts, in dollars (at domestic rates), or in kilos of potatoes fried in a family fryer [I'm Belgian; it's an intuitive unit].
...what is the difference? If you can put your code running on both, both deliver the same computing power, both are pretty fail-safe (redundancy, hot-swap and such), both can be upgraded...
.... but one of them needs less juice to run...
...WHY IN HELL would anybody choose the blade architecture in the first place, if the fridges deliver more power for fewer kilowatts? Do the blades have any advantage that the Behemoths can't copy?
Isn't it a retail/wholesale advantage?
Isn't the advantage of the blades that you can sell each chunk to a different tight-wallet buyer? If you are expanding, you think you are saving money because you are buying a few U's at a time, instead of a CHUNKY CRATE all at once... sigh..
As the BOFH says, it is a win-win situation... for IBM of course...
It took IBM 40 years to figure that out? I think NOT. They are just telling you that you've been duped with a juice-guzzling architecture, because YOU wanted it, but the LARGE BOXES were better and greener all along.
3900 / 30 = 130
Can one box deliver as much as 130 servers? For less carbon?
Let's remember that while houses sleep, servers don't. Say one server uses 400 W for 20 hours a day (a 4-hour daily downtime buffer should be plenty): that's 8 kWh a day. Add the energy required to cool it, about 40% on top, and it's 11.2 kWh per day. Over a year that's roughly 4,100 kWh; at $0.10 per kWh (the rate a datacenter pays; you might not get such a discount at home) that's about $410 a year in electricity for a single server. I sincerely hope, for the good of our planet, that your house doesn't use that much...
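Redoing that arithmetic explicitly, with the same assumed figures (400 W, 20 h/day, 40% cooling overhead, $0.10/kWh):

```python
# Electricity cost of one server, using the assumed figures from the post above.
draw_kw = 0.4          # 400 W
hours_per_day = 20
cooling_factor = 1.4   # cooling adds ~40% on top of the IT load

kwh_per_day = draw_kw * hours_per_day * cooling_factor   # 11.2 kWh/day
kwh_per_year = kwh_per_day * 365                         # ~4,100 kWh/year
cost_per_year = kwh_per_year * 0.10                      # ~$410/year at $0.10/kWh

print(round(kwh_per_day, 1), round(kwh_per_year), round(cost_per_year))
```

Note the units: the server draws kilowatts, but what you pay for is kilowatt-hours, and the annual bill is the daily kWh times 365 times the rate.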
Good points, but I contest the "because YOU wanted it, but the LARGE BOXES" part.
Twenty+ years ago, if one wanted to use computers in a small or medium-size business, the mainframes (and minicomputers) were too expensive. God help you if you wanted a real operating system on your home computer.
In 1990 I learned a lot about UNIX (SunOS) at uni and saw it was much better than DOS/Win3.x. I called up Sun and asked what a workstation would cost me. The answer was from $12,000 Cdn up, depending on the features I wanted!
I plumped for Slackware on a 486 instead. It wasn't the most full-featured OS, but it did the job. The machine - a Dell - cost me about $2,000 Cdn.
There was a solid monetary reason why mainframes and workstations got overtaken by PCs.
You just proved my point.
No money to buy your spot under the Sun, so you settled for a Slackware/486 combo. I did that too back then; what choice did we have? We couldn't afford the efficient stuff, because it came in a very large package, so we bought the INEFFICIENT stuff, in a size that fit our wallets, and a second mortgage...
The large companies discovered that the inefficient stuff sold so well, it became dominant just because it was cheap. It didn't mean it was good.
AND WE HAD TO PUT UP WITH THAT; nothing else was that cheap. The reason was monetary and monetary ALONE, I give you that!
It became blatantly obvious it was inefficient with Intel's PresHott Pentium 4s, because it was beginning to get really hard to cool them with air; lots of people said "I bought a space-heater from Dell!"
When I said "you wanted it" I meant IBM saying: "you asked for cheap stuff; since you bought my whole stash of crap, I will give you cheap stuff"... sorry about that... it didn't come out clearly... In other words, IBM and others started selling truck-loads of cheap (hence inefficient) stuff because people were buying it.
2-stroke motorcycles are highly aggressive to the environment, yet they sell a lot because they are cheap.
We bought the crap, so we asked for it. Economics 101.
Excellent remarks, though. Up to this day a decent Workstation is worth more than many cars.
My point remains valid, if you have the cash up-front, go for the family-pack, it costs less cash per VUP in the long run.
It is a price/performance rating, that's it. Tom's Hardware shows a pretty long one for x86 processors, for instance, but that is only for purchase price. When power consumption comes into play (you can do your own math based on those charts) some interesting results could come up, I don't know...
You must admit, the whole PC architecture is built to be cheap, since it is a no-redundancy version of a server or mainframe. Nowadays PC users are going for hard-drive redundancy..., but that's it. What about PSU redundancy? Hot swapping? Your PC SATA drive might survive a hot swap, but most people don't recommend even trying. eSATA connectors show promise; I haven't seen one live. Even USB connectors are promising. What about ECC memory? Your refrigerator kicks in (at home), it jolts your power supply just enough to flip a 0 into a 1 in your RAM, and you won't even know. But I digress...
The point is, you have to choose CAN AFFORD vs. GOOD STUFF.
Good stuff costs dosh, LOTS of it. It can't be helped. We both know it.
Biting the hand that feeds IT © 1998–2019