Will cloud computing finally wear the mainframe market down like thousands of years of harsh weather? Anyone selling a server or a cloud against a mainframe will surely say "yes" to that question. Cloud computing is another style of utility computing, but it clusters cheapo servers together instead of using a large monolithic …
Sigh.... Dross survey with dross conclusions...
The stats in both surveys are skewed because they only talked to mainframe shops in the first place. It is like the old segfault.org joke about object-oriented programming where the COBOL programmer asks, "What is an object?"
Virtualised infrastructure (aka private cloud) will replace mainframes; however, it will need to grow up quite a bit to be up to that task. Just shipping a big box which can do virtualisation the way Cisco does in their California lineup does not a mainframe make. You have to have a database stack, a transaction middleware stack, a workload stack, etc., and all of that integrated with the virtualisation, load management, and so on. You also have to have a lineup of suited "bioware" to support the customer through using it.
The player closest to having this out there today, besides IBM and Unisys, is SnOracle, which funnily enough deplores the whole cloud idea. Everybody else is years away. No cloud offering comes even close, and none will come close anytime during this upgrade cycle. The next upgrade cycle, or the one after that (5+ years at least), maybe, but not this one.
"Cloud computing is another style of utility computing.... "
" ..Cloud computing is another style of utility computing, but it clusters cheapo servers together instead of using a large monolithic machine "
Well THERE'S your problem, because that is not the definition of cloud computing: you have confused how it's often provided with what it is.
What it is, is computing resources provided over the internet, usually under contractual terms that match small increments of cost to small additions of extra use, for short periods of time. Now, I'm sure you could refine or pick holes in that definition, but however it's defined, the type of computing resources used to provide it should never be part of the definition.
For example, a cloud computing service could equally well be provided by shared use of a stonking great mainframe, as by many cheapo computers. Indeed, going back 40+ years, a mainframe timesharing setup fits the definition of cloud computing.
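That pay-per-increment billing model can be sketched in a few lines (a minimal, hypothetical illustration; the function name and rate are invented, and nothing here depends on whether the backend is cheap servers or one big mainframe):

```python
# Hypothetical sketch of metered cloud billing: small additions of use
# incur matching small additions of cost, for short periods of time.
# The rate below is an invented example figure, not a real price.

def cloud_bill(cpu_hours: float, rate_per_cpu_hour: float = 0.05) -> float:
    """Metered cost: each extra CPU-hour adds a matching small cost."""
    return round(cpu_hours * rate_per_cpu_hour, 2)

print(cloud_bill(3))     # 0.15 - a short burst costs pennies
print(cloud_bill(1000))  # 50.0 - cost scales with use, in small steps
```

The point of the sketch is that the contract is defined purely in terms of use and cost; the definition says nothing about the hardware behind it.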
Even though the mainstream is only now seeing the use of virtual machines / partitioning, these tools have been on mainframes since the 1970s.
Trust me, Intel and the likes have a LONG way to go before they get their products to compete.
Power consumption will probably become an increasing factor
Running a traditional mainframe workload on a mainframe uses a lot less power (and space) than doing the same thing on loads of blades. For people who have started measuring their computing needs by the acre, the price tag isn't so off-putting (if it helps negate the risk of a big "Green Tax Axe" clobbering them somewhere down the line). That's probably an argument for assembling racks more like mainframes, however, rather than for preserving the 'mainframe approach' to computing. A lot of large-scale computing, these days, isn't actually done with mainframes.
Mainframe shops are often outsourcing or managed services companies, though, in my experience. So any company saying they "don't use mainframes" - when the entire payroll and HR runs through one that their service partner operates - is a bit like someone withdrawing cash from an ATM and claiming they "have no use for computers". The mainframe approach still has its place, because there's an awful lot of that kind of computing still to be done, and neither the nature nor the scale of that sort of work is about to change.
As for training the workforce; well - whether things go Cloud or stay Mainframe - anyone coming into the jobs market with a background only in desktop computing will find their skills increasingly irrelevant to the needs of business (the status quo isn't an option). Maybe home programmers will have cut their teeth by renting space from Amazon, or maybe companies will just have to rediscover the concept of training people?
As I see it, the mainframe's greatest threat is its relatively low turnover. You just don't buy a big Z and expect to be replacing it any time this decade. There comes a threshold where the renewal cycle is too slow to maintain the supply. To maintain relevance, mainframes will have to become increasingly like Trigger's Broom - where the money comes from a continuing stream of component upgrades, and the occasional sale of a great big box for the bits to go in. To some extent it's already like that, but then, what is the physical difference between the mainframe and a rack which has (as it must) adopted a mainframe-like approach to parts-assembly, power consumption and workload?
selling the cloud
Back in the early 90's I was trying to sell the benefits of Microsoft and Unix servers to Legal and General.
The IT manager listened to my pitch and agreed that it was the way to go, but said it would never replace the mainframe.
When I asked why not, he said: "Simple. The mainframe heats the swimming pool."
They should be more proactive if they are worried about an ageing workforce. Half of the team I work with in our mainframe shop are under 30. The company has made an effort to train new staff and retain skills. Any company with a problem that comes down to an ageing skillbase has only themselves to blame.
"And somehow, thanks to the vast fortunes that mainframe shops have spent on databases and applications, the mainframes have persisted at thousands of companies."
That is the reason people stay with Mainframes.
There are VERY few workloads where you MUST have a mainframe. I don't know of any, actually. Performance-wise, the mainframe CPUs are 5x slower than a modern x86 (which I have shown earlier with links), so they suck badly. Reliability-wise, large Unix/OpenVMS machines and clusters rival that. Price: Unix/OpenVMS are far cheaper.
It's not that simple.
"There are VERY few work loads where you MUST have a Mainframe."
I don't think there is such a thing as "must". But again, having been involved in many 'porting off legacy platform X' projects, I would say that things aren't as simple as you try to make them look.
Now, a mainframe running CICS with PL/I/COBOL exits, which might even have a little assembler thrown in for fun, and which has been running for 20 years, is not just something you replace like this *SNAP*.
I've seen projects where people have tried to port off "mainframes" (of various types), and where they ended up with 5% of the expected throughput at 5-10 times the response time. Why?
1) Because there is a big difference between running interpreted Java code and natively compiled code.
2) 'Mainframes' might be slowish at certain tasks, but it's not always that part that counts.
3) Don't underestimate 10+ years of tuning and customisation.
4) Hardware vendors trying to replace a competitor often quote best-case benchmark values for their own solution stack and worst-case for the opposition.
5) Software stacks tend to bloat up and become slower as software companies cut costs.
6) Usually the solution that you have to migrate to is some scale-out solution that isn't nearly as efficient as a centralised solution.
So be very careful when migrating off 'mainframes', and by mainframes I am not only talking about IBM zSeries; there are also a lot of other platforms in that category.
To some Wintel guys UNIX machines fall in the same category :)=
I do not claim
I am saying that Mainframe CPUs are 5x slower than a modern x86 cpu. You pay for Mainframe RAS, not for CPU performance.
...you claim too much, with too little data and too few definitions to back it up
By CPUs, do you mean chips? And what kind of chips? Or are you talking about processors, or cores? You happily refer to Magny-Cours as a processor; by that term the z10 processor holds a whopping 20 cores.
And if 20 cores are 5 times slower than, for example, a 6-core Westmere-EP (which must be the most modern x86 CPU there is on the market right now), then a mainframe core is 15+ times slower than an x86 core.
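The per-core arithmetic in that last sentence checks out; here is a back-of-envelope sketch (assuming, as the comment does, that "5x slower" refers to the aggregate throughput of the whole 20-core z10 package versus a 6-core Westmere-EP package):

```python
# Back-of-envelope check of the per-core claim above. Assumption:
# "5x slower" means the aggregate throughput of the 20-core z10
# package versus the 6-core Westmere-EP package.

z10_cores = 20
x86_cores = 6
aggregate_slowdown = 5.0  # claimed: the z10 package is 5x slower overall

# If 20 z10 cores together deliver 1/5 the throughput of 6 x86 cores,
# the per-core gap is the aggregate gap scaled by the core-count ratio.
per_core_slowdown = aggregate_slowdown * z10_cores / x86_cores
print(round(per_core_slowdown, 1))  # 16.7, i.e. "15+ times slower" per core
```

So under that reading of the claim, the "15+ times slower per core" figure follows directly from 5 x (20/6) ≈ 16.7.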