The roll-out of the Power7-based rack, blade, and tower servers finishes up today with the debut of five Power Systems machines. Big Blue is launching four itty bitty boxes and one behemoth. Now we get to find out just how much pent-up demand there is - or isn't, as the case may be - for IBM's entry and high-end Power-based …
Very Cool and I like the green stripe
I have always been a fan of scale-up systems, which, because of their higher utilization levels and I/O capability, are actually the greenest of the bunch.
Blades are just a form-factor reduction, not the best answer.
Reminds me of the IBM commercial about someone stealing all the computers and the young dude saying we put them all into the box in the back of the room.
RE: Very Cool and I like the green stripe
"....Blades are just a form factor reduction vs. the best answer....." So, you can have blades, that hold CPUs and memory, and slide into a chassis that gives access to shared power and IO, or you can have Powerbooks, that hold CPUs and memory, and that slide into a chassis that has shared power and IO.... Yeah, you really convinced me there, Ms Park - not! Don't let such advantages as Virtual Connect, embedded switches and optimised PSU management get in the way of your non-argument. Want to try arguing that with the modular P5x0s instead? It would be even funnier.
Nice to see the article confirms what I suspected so long ago (http://forums.theregister.co.uk/forum/1/2010/02/09/intel_tukwila_feeds_speeds/), that IBM will be unable to virtualise the whole of the P795 from launch. But it is nice to see Ms Park has actually managed a post not laden with the usual cut'n'paste FUD.
unable to virtualise?
>that IBM will be unable to virtualise the whole of the P795 from launch.
Matt, what are you talking about? The whole 795 can be split into 254 systems. Isn't that virtualization? It's 254 systems built on one whole 795. Show me a customer that needs more systems on a single 795 first.
Can you please do what you do best and go peddle some printers elsewhere. The grownups are talking.
RE: unable to virtualise?
The original argument as sprouted by Ms Park was that IBM's virtualisation offering was more flexible and more granular than what would be on the new Superdome2, but the reality is that the level of granularity on the p795 will be poor in comparison. Whilst hp-ux will be able to offer sub-CPU level of granularity, AIX on the p795 will only be able to offer the same on a chunk of the p795; the rest will have to be larger instances whether you want them or not, or you will have to accept virtualisation with, at best, the CPU being the smallest unit of virtualisation spread equally across your p795. Not a great consolidation argument, but if you want to argue it I suggest you first ask Ms Park why she raised the point in the first place.
".....Show me a customer that needs more systems on single 795 first." Well, you can do your own prospecting, but you may want to go talk to those customers with older P4 and P5 rack units that don't want to consolidate onto the lower-spec 3GHz Power7s in the latest AIX blades. Not much of a consolidation option when the smallest you can split your whole p795 down equally into is an eight-way CPU (assuming you only lose two complete CPUs to the PowerVM Hypervisor partition). It doesn't matter how many CPU pools you create, the top limit is still 254 instances. With hp-ux and IVM on the new Superdome2s I will be able to split it into npars and then IVMs right down to sub-CPU level across the whole system, which is far more granular. So I won't have to stack multiple app instances on each OS instance as I would have to on the p795, which means I won't spend half my working life organising co-ordinated downtime so I can patch instances or upgrade apps. With IVM on Integrity I'll be able to have each app stack on a separate OS instance, which means I can boot, stop, patch or even remove an app stack without affecting the other VMs.
And then we get to the "future AIX upgrade" which will allow AIX to scale to 1000 VMs, still worse than one per-core spread equally. Not much chance of IBM's virtualisation capability matching hp's soon, then.
RE: Little Matty...
"Can you please do what you do best...." Sorry, you're not my type.
".....go peddle some printers elsewhere...." Well, seeing as how hp are dominating the printer space, I almost wish I was selling hp printers, but then I suspect the margin is probably rather thin. I think I'll stay exactly where I am, thanks, which is pointing and laughing at people like you.
"....The grownups are talking." So I assume they let you in to watch and learn? I note that your "grownup talking" has zero technical or business content, want to try again? You can start by trying to explain how you think the IBM virtualisation offering for the p795 will compare when capped at 254 instances, a less granular offering than either Oracle's or hp's UNIX virtualisation? Or am I asking a bit much of your limited technical abilities? Then maybe you'd like to start with something a little easier and post an original opinion on the new P7 servers or AIX 7.1? But I'm guessing that really would be waaaaay outside your technical limits.
Welcome to the POWER world.
Eh.. you still have no clue whatsoever on how the Hypervisor works.. No clue whatsoever.
"Not much of a consolidation option when the smallest you can split your whole p795 down equally into is an eight-way CPU (assuming you only lose two complete CPUs to the PowerVM Hypervisor partition). "
There is no PowerVM Hypervisor partition... it's not IVM, you know.
Well, you are using two levels of virtualization, npars and IVM. No problem with me. But let's do the same on POWER. Let's make a 0.1 CPU virtual machine. And inside that we can then make up to 8192 Workload Partitions, with a CPU granularity of 1/65535 of the virtual machine's processor allocation.
So actually we can have 8192 x 254 = 2M+ virtual partitions, with a minimum CPU granularity of less than half a millionth of a CPU.
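If anyone wants to check that multiplication, here it is spelled out (the 254 and 8192 limits are the figures quoted in this thread, not something I have re-verified against current docs):

```python
# Sanity check on the nested-partition arithmetic above.
# The limits are the figures quoted in this thread, not re-verified specs.
max_lpars = 254        # LPAR limit quoted for the p795
wpars_per_aix = 8192   # WPAR limit quoted per AIX instance

total = max_lpars * wpars_per_aix
print(f"{total:,} nested partitions")  # 2,080,768, i.e. "2M+"
```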
And we can move the Virtual machines from one physical machine to another. Or we can just move the Wpar if we want. All done while the application and virtual machines are running.
That's the problem with being behind in the race, you cannot see what is going on in the front of the race.
RE: Welcome to the POWER world
"......it's not IVM you know...." There's no separate management partition in IVM, you just share out all the resources in the hardware partition. And when we slice up Power servers we do not get 100% of the CPU power available to the instances, so please tell me where it's going if not in the virtualisation?
".....And inside that we can then make up to 8192 Work Load Partitions....." Oh, you mean 8192 shared in one OS instance - nice isolation of any software fault there! So, one memory error and you lose 8192 applications at once - great design! No wonder EDS is beating you. They probably clap and cheer whenever their customers mention your name!
"....with a minimum CPU granularity of less that half a millionth of a CPU...." Nope, 'cos your WLP shares the OS with all the other WLPs, i.e. one OS error can kill all your WLPs on that instance, so your granularity is the size of the owning OS instance. Try again!
".......And we can move the Virtual machines from one physical machine to another....." <Yawn> Yeah, live mobility - whoopeedo! So can just about everyone else - hp-ux IVM, VMware, Xen. Try updating your feature-sell, it's waaaaaaaay out-of-date.
".....That's the problem with being behind in the race....." Well, seeing as Pseries partitioning has just about caught up with where the Integrity range was eight years ago (and still hasn't matched Integrity on true hardware partitioning), I'd say you're the one with the problem. And then hp's new Integrity designs fit into those hp blade chassis that have been caning the IBM blades for years - when will IBM catch up with hp and offer the advantages of embedded switches and tools like Virtual Connect for anything above the bottom end of the pSeries range? Oh, and if they ever get to the point where they can, do you think they'll be able to do it with a better CPU than the crippled one they had to put in the P7 blades because they can't make the current blade chassis handle the cooling and power required for the real P7 chips?
There is a number 2, but it is a long way back there; it is my wake that is cooling them down.
"And when we slice up Power servers we do not get 100% of the CPU power available to the instances, so please tell me where it's going if not in the virtualisation?"
You still don't get it do you. Let me try with a little picture/example.
We have an older machine. It could be an E25K, SD or a p690; it really doesn't matter. On all these machines we can statically carve the machines (or parts of them) up into 8 chunks of, let's say, 4 processor cores.
We then have 8 applications that happily crunch along inside each little partition with their normal lousy, let's say, 20% average utilization.
Now we replace the machines with an SD-2 with one 32-core IVM npar with 8 guests, each with 4 processors, and a p780 with 8 virtual machines, each with 4 virtual processors. But hey, why not exploit the fact that we can overcommit the machine. On IVM we quickly follow HP best practice (or so my HPUX guys call it, and they might be wrong) and do a 50% overcommitment, adding 4 more 4-processor-core guests, raising the average utilization of the machine to 30%.
On the Power 780 server I quickly add 10 more virtual machines, each with 4 virtual cores (that is my standard overcommitment factor for that machine), raising the utilization to 44%.
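For anyone following along, the utilization figures in my story work out like this (a sketch using only the assumptions above: 4-core guests, 20% average utilization, a 32-core pool; I rounded the Power figure down slightly above):

```python
POOL_CORES = 32    # physical cores in the npar / shared pool
GUEST_CORES = 4    # virtual processors per guest
AVG_UTIL = 0.20    # average utilization inside each guest

def pool_utilization(n_guests: int) -> float:
    """Fraction of the physical pool kept busy by n_guests overcommitted guests."""
    busy_cores = n_guests * GUEST_CORES * AVG_UTIL
    return busy_cores / POOL_CORES

print(f"IVM npar, 8 + 4 guests:   {pool_utilization(12):.0%}")   # 30%
print(f"Power 780, 8 + 10 guests: {pool_utilization(18):.0%}")   # 45%
```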
And there was much rejoicing in both the HPUX group and the AIX group, and they all went to drink the Wintel guys under the table, because that is what they did on a Friday afternoon.
And you ask what the overhead is? Ehh.. it's huge in both cases.. negative overhead, that is, as I get much more work done.
Is there a penalty? Sure there is, just as there is a kernel penalty related to running more than one process on a multiprocessing kernel. But hey, you do sound like that punch-card mainframe guy from the '60s who insists on running a single task on a single machine. Wake up, dude.
"Oh, you mean 8192 shared in one OS instance - nice isolation of any software fault there! So, one memory error and you lose 8192 applications at once - great design! "
"Memory errors have always been a problem on HP Unix machines", said one of my friends who used to work in HP's support org. I don't agree. But since you keep talking about it, then perhaps there is something to it?
WPARs give pretty good isolation; sure, it's not OS software stack isolation, but it is pretty good isolation. The isolation stack we normally work with is like this:
Same OS, rsets isolation, WPARs, virtual machine, physical machine. The further down the isolation road you go, the better the isolation, but the price also goes up.
And still, HPVM is an HPUX instance with guests running inside it. Kind of like VMware in the old days, right?
And overhead, let's see... on a 2TB Power 780 the memory overhead will be... 41-77 GB for a fully loaded machine. The latter number is with VIO and all, and with all partitions able to grow to a factor of 2x in memory capacity (max_mem = 2 x des_mem).
For an SD-2 with 2TB inside one IVM with max memory used, the memory overhead is... 321 GB. Wooohh.. man, I understand why you want to talk about overhead. First 8% overhead, then 8.3% again.. man.. sure is a good solution. So a factor of 4-8 in overhead... sure.. IVM rulez. *CACKLE*
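The "factor of 4-8" comes straight out of those overhead numbers (mine, from the 2TB configs described above, so treat them as illustrative rather than vendor-blessed figures):

```python
# Memory overhead ratio on the two 2TB configs described above (illustrative figures).
p780_overhead_gb_low, p780_overhead_gb_high = 41, 77  # plain vs fully loaded with VIO
sd2_overhead_gb = 321                                 # one IVM npar, max memory set

print(f"{sd2_overhead_gb / p780_overhead_gb_high:.1f}x")  # ~4.2x
print(f"{sd2_overhead_gb / p780_overhead_gb_low:.1f}x")   # ~7.8x
```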
"Well, seeing as Pseries partitioning has just about caught up with where the Integrity range were eight years ago (and still hasn't matched Integrity on true hardware partitioning), "
No, they haven't caught up with the overhead thing. And you don't get it.. we don't want hardware partitioning. We have no need for it.. it's a waste of resources. Why do we want to carve your server up into what could just as well be cheaper machines?
"And then hp's new Integrity designs fit into those hp blade chassis that have been caning the IBM blades for years"
Yeeesss.. let's order a high-end server that uses the same components as the cheapest blade system around.. Yeah right.. *cough* *cough* hopefully customers aren't that stupid.
"when will IBM catch up with hp and offer the advantages of embedded switches and tools like Virtual Connect for anything above the bottom end of the pSeries range?"
Eh, an embedded switch? What for? I use virtual networks (not to be read as VLANs) inside the machines, you know, LAN-in-a-can style. If I want to go outside I'll use a SEA adapter (a software virtual switch) or an HEA (a hardware virtual switch).
"No I don't want to use a punch card reader.. I have a removable hard drive".. "What are hard drives not secure cause you cannot read the bits manually"......
"do you think they'll be able to do it with a better CPU than the crippled one they had to put in the P7 blades because they can't make the current blade chassis handle the cooling and power required for the real P7 chips?"
It's like shooting a fish in a barrel.. at point blank, with a shotgun.
IBM PS702: 2 sockets and 16 cores, takes up 2 slots and does 520 specINTrate2006
HP i860c i2: 2 sockets and 8 cores, takes up 1 slot and does 134 specINTrate2006
HP i870c i2: 4 sockets and 16 cores, takes up 2 slots and does 269 specINTrate2006
HP i890c i2: 8 sockets and 32 cores, takes up 4 slots and does 531 specINTrate2006
Yeah.. the i890c i2 wins! But wait... you can only have 2 of those in a 10U c7000 chassis, and you can have 7 PS702s in a 9U BladeCenter H. That is a compute density of 106 specintrate2006 per U for the i890 i2 versus 404 per U for the PS702... ohh.. ohh... ...
Power usage then.. HP is good at that. Let's see, the i890 i2 uses... 3184 watts max power.. the PS702 only 700 watts.. ARGH.. what about the i870 i2 then, 1592 watts? And the i860 i2, 796 watts? How can this be?
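Putting those quoted numbers together (SPECint_rate2006 scores, chassis capacities and wattages as listed above; I have not re-run anything myself):

```python
# Density and efficiency from the figures quoted above.
ps702 = {"specint": 520, "watts": 700}   # 7 fit in a 9U BladeCenter H
i890 = {"specint": 531, "watts": 3184}   # 2 fit in a 10U c7000

print(f"PS702: {7 * ps702['specint'] / 9:.0f} specint per U")             # ~404
print(f"i890:  {2 * i890['specint'] / 10:.0f} specint per U")             # ~106
print(f"PS702: {ps702['specint'] / ps702['watts']:.2f} specint per watt") # 0.74
print(f"i890:  {i890['specint'] / i890['watts']:.2f} specint per watt")   # 0.17
```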
Price then .. HP blade products are cheap !!!!!
Yeah.. a PS702 with 16 cores, AIX, 32 GB RAM and 2 disks is 196K DKK. Woo, that is expensive..
Let's see.. hmm, there it is: an i890 i2 with HPUX, 32GB RAM, 2 disks and 32 1.73GHz cores is 809K DKK. WHAT? Wait, let's take cheaper cores... 1.33GHz, that's got to be cheaper... what, 527K DKK? ... Basically you need the i860 i2 with 8 cores to beat the PS702 price, by 30K DKK. But that is 2 Tukwilas versus 2 POWER7s, and we all know which is the faster there.
Although you'll just cook up some witch's brew about benchmarks to cloud the issue.
Again, when your competition is so much in front of you that you can't see what is going on...
// Jesper says have a nice weekend
You should check out the spanking Leisure Suit Larry just got today
Larry has been spending millions on ads talking about the TPC-C benchmark from last October.
Check out how Power7 compares
4 Systems vs. 12 systems
10.36M transactions vs. 7.65M transactions
$1.38/transaction vs. $2.36/transaction
224 flash drives vs. 4,800 flash drives
$11.5M Full DB2 license vs. $7.85M 3 year only license (like anyone buys 3 year...IBM should create one of those BS 3 year term things as the $/trans would be even lower)
I wonder what the next ad will be from Oracle.
35% less transactions
71% more expensive
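Those two percentages fall straight out of the benchmark figures above; note that the "35%" depends on which side you take as the baseline:

```python
# TPC-C comparison from the figures quoted above.
ibm_tpmc, oracle_tpmc = 10.36e6, 7.65e6  # transactions, as quoted
ibm_cost, oracle_cost = 1.38, 2.36       # $ per transaction, as quoted

print(f"IBM does {ibm_tpmc / oracle_tpmc - 1:.0%} more")      # 35% more (Oracle baseline)
print(f"Oracle does {1 - oracle_tpmc / ibm_tpmc:.0%} fewer")  # 26% fewer (IBM baseline)
print(f"Oracle is {oracle_cost / ibm_cost - 1:.0%} more expensive per transaction")  # 71%
```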
Oracle SPARC is not dead... please keep paying our maintenance fees..... Larry needs a new America's Cup yacht.
> 4 Systems vs. 12 systems
It's 3 Systems vs. 12 systems...
Matt - 795 LPARs
Matt - where do you get this stuff? The 795 docs I've looked at show 256 cores in a fully loaded unit with current support for 254 LPARs, which has been IBM's limit for a while now on the high-end gear. 256 cores divided by the 254 LPAR limit says you could conceivably have 253 1-way LPARs and one 3-way. You can have a few big LPARs and lots of little ones - I don't see anything that changes PowerVM granularity. FYI - IBM announcement literature also suggests a 1000 LPAR limit is coming, and that would suggest a firmware upgrade and perhaps an HMC patch or two. AIX need only be more current to support a single image with 1000+ threads on the 795 behemoth.
Two opinions from me...one, that's a lot more flexible and easier than Stupidome npar and vpar slicing and dicing and two, if you buy any 256 core machine for nothing but < 1-way LPARs, you have more money than brains.
RE: Matt - 795 LPARs
"Matt - where do you get this stuff?...." All out of the Reg article, the IBM Red Books or the IBMer posts.
"....The 795 docs I've looked at show 256 cores in a fully loaded unit with current support for 254 LPARs...." Congratulations for keeping only two steps behind the conversation, that's exactly what everyone else has already said.
"....256 cores divided by 254 LPAR limit says you could conceivably have 253 1-way LPARs and 3-way....." You forgot the hypervisor takes some CPUs. But your best unit of granularity is a core at best. With hp-ux I can go right down to sub-core granularity using npars, IVM and PRM. Sub-core would seem a lot more granular than core-at-best, which means Allison's insistence that IBM virtualisation/partitioning was more granular than hp-ux's was just complete male bovine manure.
"....one, that's a lot more flexible and easier than Stupidome npar and vpar slicing and dicing...." Actually, npar, vpar, PRM and IVM are pretty simple to use. I have had a sysadmin creating VMs in hp-ux after an hour's training, and that's including teaching him how to use CommandView to create the boot volumes on an EVA, SAN tech he'd also not touched before. Can I suggest it's because - like most IBMers - you haven't touched an Integrity system or hp-ux, just swallowed the IBM FUD?
".....if you buy any 256 core machine for nothing but < 1-way LPARs, you have more money than brains." If you want to argue rationality with Ms Park, the IBMer that made the laughable suggestion in the first place, then you're already on a losing streak. Of course, I'm betting you kept schtum about reality, money and brains whenever IBM release details about their ludicrous benchmark setups. Like when a complete p595 is turned off so one CPU can use all the cache from the other 31 CPUs, for example. Or short-stroked disks using a tiny fraction of their capacity just to get throughput by spindle-count. When are you IBMers going to learn about people in glass houses and throwing stones?
RE: Matt - 795 LPARs - Part Duh!
".....you have more money than brains." Actually, I can think of several such bizarre instances, especially one which was very much fuelled by the more-money-than-competent-management factor. To protect the guilty (and stop me being sued!), we'll call the company involved Bob Co (apologies to any existing Bob Corporation, Bob Limited or any other company sharing the Bob moniker, any similarity in name is completely coincidental and no implication should be made about your management's capabilities). In Bob Co's defence, the majority of the bad management involved in this story has since been shown the door.
Bob Co had a nasty habit of running many, many, many projects in parallel, often poorly co-ordinated and with significant overlaps that kept us techies running around like the blue-arsed proverbials just trying to keep tabs on them all. In fact, Bob Co had more projects than project managers (let alone GOOD project managers), and made the cardinal sin of employing contract PMs that couldn't be told what was happening on some of the other overlapping projects due to confidentiality. Mix all this up with a company that was running at breakneck speed and you can imagine some of the fun!
Anyway, for one project we needed a secure and load-balanced web front-end. Our standard web kit at the time was Slowaris on Netras, but hp made a silly-priced offer on ickle A500s (2-socket PA-RISC hp-ux servers), mainly because they wanted to get a completely hp solution in place to upset Sun. So we ended up with several racks of these A500s.
Several years down the line, we finally got round to looking at replacing the A500s. Cue six months of planning, POCs and negotiating. At this point, just when we're ready to cut an order, someone in Bob Co's purchasing department remembered they had bought two Superdomes for a project, cancelled the project, and simply forgotten to return the Superdomes (I kid you not, they even renewed the annual support while the SDs were sitting in boxes in a warehouse!). Yes, you guessed it - we ended up partitioning up the SDs to replace all those ickle A500s!
So, it really does go to show that, where there is more money than (good project management) brains, there will always be expensive kit utilised in surprising ways!
'"Matt - where do you get this stuff?...." All out of the Reg article, the IBM Red Books or of the IBMer posts.'
'Can I suggest it's because - like most IBMers - you haven't touched an Integrity system or hp-ux, just swallowed the IBM FUD?'
Now isn't that ironic - you have supposedly read the books and don't own the hardware. That explains a lot. You've never sat in front of an HMC and sliced up an LPAR before have you?
The hypervisor does not take up any CPUs. Care to point out where you got that gem from? That is simply incorrect and I know because I've worked with PowerVM for years. Someone else I work with used to look after the HP boxes that are now recycled materials - that last bit speaks to the market - on that note IDC just put out the 2Q report and I'm betting El Reg will write up something on the UNIX numbers at some point so we can read them for free. No doubt more HP boxes were put out for recycling...
As for my comment re more money than brains - I'm suggesting you can buy a bunch of 2- or 4-socket machines for a lot less spend than a behemoth like a Superdome or a 795. If you don't have any bigger LPARs there's no real benefit to a big, expensive, scalable machine. Bob Co could have gotten away with mid-range N-class or newer 8xxx boxes just as well, I'm guessing. There's headroom and then there's HEADROOM.
As for PowerVM - you don't know what you are talking about, let's just leave it at that.
RE: Still Incorrect.
".....you have supposedly read the books and don't own the hardware. That explains a lot....." No, Greg, that's because I've learned from previous IBMer posters that it's best to show them a quotable source, as they'll blindly deny anything that upsets them. It's always fun watching them get in a lather when the Red Books poke a big hole in their arguments!
".....The hypervisor does not take up any CPUs....." Interesting. So, when we slice up our pSeries and we don't get 100% CPU available, where do you say the missing processing capability is going? Does IBM's virtualisation tech have a tea-break setting that gives those over-hot CPUs some time off? Are you saying that tea-break setting is not going to be in AIX 7 and you can guarantee 100% CPU power will be available? But, ignoring that, you seem to have shot yourself in the foot - if all 256 CPU cores are 100% available, but the maximum number of LPARs is 254, then your level of granularity is actually not even down to a single core! You're not doing much to help Ms Park out here.
".....Someone else I work with used to look after the HP boxes that are now recycled materials - that last bit speaks to the market...." All the vendors' servers get recycled or renewed. Your implication that only hp servers might be traded in is misleading, verging on deceptive. I can ring up hp right now and get a trade in deal on our pSeries, SPARC or Dell kit, and the guess where that kit would go! - yes, it would be renewed or recycled. Same goes for a call to Oracle, Dell or IBM. If your best argument is nothing more than smoke and mirrors I suggest you leave the conversation now before you make yourself look any more silly.
"....I'm suggesting you can buy a bunch of 2 or 4 socket machines for a lot less spend than a behemoth like Superdome or a 795...." I'm suggesting you IBMers need to get together and rewrite your FUD, as Ms Park made the suggestion in the first place and I'm just taking fun in pointing out how wrong she was. Then again, maybe if Ms Park and co just gave up on the FUD in the first place we'd all be a lot happier, and you IBMers wouldn't be looking like the new Sunshiners.
"....As for PowerVM - you don't know what you are talking about...." You must be a real PowerVM genius, then. But it's strange then that you haven't been able to support Ms Park's ludicrous assertion that AIX partitioning is somehow superior to IVM. In fact, you've just underlined the fact that IBM's partitioning is still playing catch-up. Good job!
1 engine enough?
I do not know where the author has his head, but one engine on an IBM mainframe is not nearly enough. Two is the minimum; three or more is best. It's been that way for over 10 years. There is a lot of processing done under the covers, especially with the Workload Manager. The reason IBM can run at 100 percent CPU utilization is that IBM optimizes *EVERYTHING*, whether it is I/O or paging or just work. On the surface the statement sounds right, but anyone who has really worked with (and knows) how well thought out and optimized IBM's code is, and how separate components work together to get the most bang for the buck, has to wonder how good the writers really are at the Register. Most likely they are IBM bashers, as they obviously do not know how IBM's OS (MVS) really works.
Still Don't Get It
You obviously don't understand PowerVM...
"if all 256 CPU cores are 100% available, but the maximum number of LPARs is 254, then your level of granularity is actually not even down to a single core!"
The LPAR limit is 254. I could create one LPAR at an entitled capacity of 0.1 cores. I now have 253 LPARs remaining in my (current) limit that can be anywhere from 0.1 cores to 255.9 cores in size. Obviously if I kept doing this I would leave idle capacity behind after creating 254 x 0.1 core LPARs. But again, nobody is going to buy a 795 to do that. You need to separate the # of LPARs limit from the granularity of entitled capacity. You don't allocate cores to the hypervisor. The LPARs themselves make hypervisor calls as needed, and those cycles are charged to the running LPAR.
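A quick sketch of the distinction being made here, assuming the 0.1-core minimum entitlement and the limits quoted in this thread:

```python
# LPAR-count limit vs entitled-capacity granularity (figures as quoted in this thread).
TOTAL_CORES = 256
LPAR_LIMIT = 254
MIN_ENTITLEMENT = 0.1  # entitled-capacity floor per micro-partition

# The degenerate case: every LPAR created at the minimum entitlement.
entitled = LPAR_LIMIT * MIN_ENTITLEMENT
idle = TOTAL_CORES - entitled
print(f"{entitled:.1f} cores entitled, {idle:.1f} cores left idle")  # 25.4 entitled, 230.6 idle
```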
Of course you can recycle hardware - my tongue in cheek comment was meant to reflect the steady decline of revenue share enjoyed by the Itanium/HP-UX combination which, as El Reg has pointed out via free insight to IDC share data, is a number trending toward 0.
As for better and more flexible - what is the maximum Integrity VM partition size? Still 8-way, 64GB? PowerVM has done better than that since day 1. How many changes require a VM reboot for you? I can dynamically change my capacity assignments for CPU and memory, I can add I/O capacity and storage space, and I can even mix and match virtualized I/O with physical PCIe slots if I so desire.