Intel has come a long way in the server racket, and the new "Westmere-EX" Xeon E7 processor, launched in April and making its way into systems now, is arguably its most sophisticated processor for servers to date. The Xeon E7 processors cram ten cores onto a single die, but the Xeon E7 design is a bit more than taking an eight- …
Missing the point...
The Xeon E7 is a nice chip, and I think it has a real future for four-socket systems. The problem is that the real benefit of using the proprietary systems comes at eight sockets and up, an area where Power (and potentially SPARC and Itanium, assuming they get their act together) is still compelling.
W. T. F. !?
"as much as customers like more cores – especially in virtualized environments, where they tend to pin one virtual machine on one core for the sake of simplicity"
I'm not sure how much hands-on virtualisation experience the author has, but I can honestly say that in my 5+ years working with enterprise-grade VMware environments, I've hardly ever seen customers do what is suggested above. This goes against the whole concept of what virtualisation is about, and would dramatically reduce the number of VMs you could realistically fit on each host, not to mention seriously undermining the flexibility that you would otherwise get from the environment.
missing the point 2 - it's the software, stupid
fine for server consolidation of Windows boxes?
What if you want to do real work? The kind that needs a real OS? E.g. HP-UX, NSK, VMS. Irrelevant? For now...
Perhaps you would like to explain the absence of your "real" systems in any serious database benchmark tests?
for starters look here:
or even the older tests here, where just HP-UX makes it:
Re: it's the software, stupid
While the workload largely defines the software and hardware required for the task, the trend for HP/Sun/Oracle/IBM/etc RISC UNIX vendors has been that their customers are moving tasks that can run on x86 hardware to the cheaper platform.
The POWER platform is continuing to provide CPU upgrades that provide a performance advantage for that platform, but the SPARC and Itanium platforms look less relevant as each new x86 CPU generation is released as upgrades are delayed.
Intel can (or at least does) afford Itanium at the moment but as ARM/MIPS-based systems either become more competitive in the desktop market OR the smartphone/tablet market causes the existing x86 desktop market to shrink (or both), then Intel will need to reconsider their less cost effective product lines. I see Itanium as one of those unproductive parts.
And as for the arguments about HPUX/NSK/VMS - what is stopping HP porting these to x86 other than it would kill the Itanium platform?
Oracle may lose their spat with HP over Itanium, but it won't stop Itanium dying as a platform (i.e. no further architecture advances, product transitions put in-place, price increases making it even more expensive to remain as a Itanium user).
Double bit error correction
Has been available on the vast majority of Itanium systems since before the launch of the Montecito chip; HP's later Madison-based systems already had it. Works out nice and cheap too, because it just uses the 2 spare ECC bits that memory manufacturers have been giving away free and unused for years, as they were too lazy to make 6-bit-wide chips just to handle the ECC stuff.
If you stick your DIMMs in 4 at a time, you get 8 spare bits, which gives you the capacity you need to handle a second erroneous bit in a 256-bit wide "word".
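The spare-bit arithmetic above can be sanity-checked with a back-of-the-envelope sketch. This is only an illustration, not how any specific chipset implements it: it uses the textbook extended-Hamming count for SEC-DED check bits, and a rough BCH-style estimate for double-bit correction, to show that ganging four standard 72-bit ECC DIMMs into one 256-bit word leaves plenty of check bits to correct two errors.

```python
import math

def sec_ded_bits(k):
    """Check bits for single-error-correct, double-error-detect
    (extended Hamming) over k data bits."""
    r = 1
    while 2**r < k + r + 1:
        r += 1
    return r + 1  # +1 overall parity bit for double-error detection

def dec_ted_bits(k):
    """Rough BCH-style estimate: correcting two errors needs about
    2 * ceil(log2(k + 1)) check bits, plus 1 to detect a third."""
    m = math.ceil(math.log2(k + 1))
    return 2 * m + 1

# A standard ECC DIMM is 72 bits wide: 64 data + 8 check bits per word.
assert sec_ded_bits(64) == 8

# Gang four DIMMs: 256 data bits, 4 x 8 = 32 check bits available.
available = 4 * 8
print(sec_ded_bits(256))  # 10 -> SEC-DED needs far fewer than 32
print(dec_ted_bits(256))  # 19 -> double-bit correction also fits in 32
```

In other words, single-bit correction over the wide word uses only a fraction of the 32 check bits, and even a double-error-correcting code fits comfortably, which is the "spare bits" the comment is pointing at.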
IMHO TPM had a little too much of the Xeon Kool-Aid :)=
There is absolutely no doubt that the current Westmere-EX chip is one of the finest in the industry, and it is clearly at the top of the pack, with perhaps only one to challenge it.
But when that is said, then best of breed POWER7 beats best of breed Westmere-EX every time, hands down.
And where Westmere-EX servers are brand new, POWER7 has been around for almost a year and a half in POWER servers. Sure, you have to compare what is shipping with what is shipping.
And TPM, you have an error in your TPC-H POWER 780 bit: the POWER 780 used in the TPC-H benchmark is the 4-core-per-chip version, hence it's a 32-core submission. So if you compare it against the 80-core x3950/x3850 (which btw also has 4 times the memory), the POWER machine has 2.4 times the per-core performance while delivering 95% of the per-chip performance. And that is with 40% of the cores per chip.
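The arithmetic behind those ratios can be sketched as follows. The core and chip counts, and the 95% throughput ratio, are the figures quoted in the comment above (treat them as the commenter's assumptions, not independently verified numbers):

```python
# POWER 780: 8 sockets x 4-core chips; x3950/x3850: 8 sockets x 10-core
# Westmere-EX. Figures as quoted by the commenter.
power_cores, power_chips = 32, 8
x86_cores, x86_chips = 80, 8

throughput_ratio = 0.95  # POWER result / x86 result (quoted, assumed)

# Equal chip counts, so the per-chip ratio is just the throughput ratio.
per_chip = throughput_ratio * x86_chips / power_chips  # 0.95
# Per core, POWER's deficit flips into a large advantage: 0.95 * 80/32.
per_core = throughput_ratio * x86_cores / power_cores  # ~2.375, the "2.4x"

print(f"per chip: {per_chip:.2f}, per core: {per_core:.2f}")
```

So 0.95 × 80/32 ≈ 2.4, which is where the "2.4 times the per-core performance at 95% of the per-chip performance" claim comes from.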
So yeah, Westmere-EX is great.. but not the greatest.
Hang on a min, 32 sockets!
The old Nehalems went to 8 sockets without glue, so you are telling me these new E7s go to 32 sockets in a glueless config? 'Kinell, I am impressed, but if the architecture is essentially much of the same, then what has changed to enable scalability to 32 sockets over the previous 8?
I note that TPM did state "In theory", so are there any vendors such as HP/Dell etc actually planning a 32-socket box? I'm thinking 320 cores in a single Red Hat image......
I'm knocked for 6; the one thing Intel was missing was that kind of scalability, and now it bloody has it. If 32-socket boxes appear, I can no longer justify spending on large M-Series and P-Series.
Times they are a changin'
Yeah, but it's easily good "enough" for any and all workloads at much cheaper cost, and that's the whole point.....
320 cores in a single image is more than enough scalability to make it very very difficult for me justify a large Solaris/AIX box for any new projects.
You can run Solaris on x86 if you need a 32 socket Westmere-EX box. Solaris is the only Enterprise Unix (except BSD) that runs on x86. I personally prefer Solaris on x86, over Linux. But YMMV.
I agree that this Westmere-EX is a fast chip; the mighty POWER7 is only ~10% faster in some benchmarks. But wait until next year, when the Ivy Bridge based CPUs arrive. They will be 40% faster, according to Intel. This means the fastest CPU on the market will be Intel Ivy Bridge. And later, AMD will release its 20-core Bulldozer CPUs. And Solaris runs on them all. :o)
There is TCA and then there is TCO. There is a big difference between the 'buy and throw away' mentality of the x86 world and the buy, upgrade, upgrade, upgrade... of the UNIX world.
I mean, the cost of setting up a new server, if you operate in an ITIL environment as I do, is... well, almost more expensive than the actual server.
But if you are a small shop... then your mileage might vary..
Not true ....
The 3-year life cycle of x86 is ideal in current times, when the CPU is following Moore's law. Any upgrade to UNIX servers after 3 years is a gross compromise. In any case, the cost of upgrading individual CPUs and memory is ridiculously high, even by UNIX box standards.
That is true in the x86 world too. That's the reason fewer than 5% of x86 servers ever undergo an upgrade cycle. (There is nothing stopping you from upgrading Intel servers; maybe the socket/chipset gets refreshed sooner than on UNIX platforms.)
Don't get me wrong, I come from the UNIX world, but there is only so much Kool-Aid that one can drink. There is the 2% of scenarios where UNIX is suitable, and then there is the x86 platform.
Yes, I know you can run Solaris on x86; it makes sense for Oracle applications only, but you have to remember that it only has a fraction of the application catalogue that RHEL does (or Solaris SPARC, for that matter), so for everything else Red Hat is the wiser choice...
On x86, RHEL is the one to beat.
That 10% number again...
It ain't right. Even the source you listed sez that...
What 10% number is wrong? What do you mean? Is the highly clocked POWER7 not 10% faster in some benchmarks? No, the number 10% "ain't right"?
So, the correct number is 11.5% in one benchmark, which the article stated. But how do you know POWER7 is exactly 11.5% faster than Intel Westmere-EX? Maybe the correct number is 11.4999%? Are you going to remark on that too? Have you checked whether it is 11.5% or 11.4999%? Why don't you check it, and complain about that rounding too?
No, the point is, that POWER7 is only ~10% faster in some benchmarks. And Ivy Bridge will be 40% faster using the same number of cores - according to Intel.
As far as I know, the Westmere-EX is still only an 8-socket processor - where is all of this talk of 32-socket support coming from?
That's what TPM says in the third paragraph.
Although I am struggling to find any material to back up that 32 socket claim..
Read carefully what was written...
>> allowing for vendors to create machines that can, in theory, scale from 2 to 32 processor sockets in a single system image.
What he specifically _didn't_ say is that you would be able to go beyond 8 sockets in a glue-less design...
If however you look at the glued designs on the market at the moment (the most obvious one I guess is the DL980 G7), it's pretty clear to see how a single system could be scaled to 16 or 32 sockets (heck, on the back of the DL980 you can even see the interconnects that might be used to do this!)
And if you consider the similarity from a chipset perspective between Westmere and Tukwila processors now, it's not that difficult to imagine a HP Superdome with x86 processors either...
The biggest challenge as you extend to x86 systems of this size isn't the resiliency of the hardware, it's how the OS interacts with the hardware and firmware during failure conditions - this is one area where Linux and Windows sadly lag behind the commercial UNIX OSs at the moment (I have no idea how well Solaris/x86 handles all this, but my bet is poorly given their general poor showing in this space even on SPARC)
"a HP Superdome with x86 processors"
"if you consider the similarity from a chipset perspective between Westmere and Tukwila processors now, it's not that difficult to imagine a HP Superdome with x86 processors either..."
Everybody with a clue knows that, but be careful where you say it. Round here, you'll wake one of the few surviving IA64 boosters, and he'll accuse you of being an ignorant troll. He won't expand on why it's a silly idea, obviously.
"The biggest challenge as you extend to x86 systems of this size isn't the resiliency of the hardware, it's how the OS interacts with the hardware and firmware..."
Again, everybody with a clue knows that...
"during failure conditions "
Not entirely sure that qualifier needed adding, but I do see where you're coming from. You can't easily swap out a whole Superdome at a time, and there'd probably be some reluctance to swap major parts of a high end Proliant without good reason...
"Linux and Windows sadly lag behind the commercial UNIX OSs at the moment"
Again, yes indeed, with little sign that the problem is even understood, let alone any sign of it being addressed.
"how well Solaris/x86 handles all this, but my bet is poorly given their general poor showing in this space even on SPARC"
Don't know Solaris/x86 myself but when AMD64 came out, one of the Solaris chaps (whose name I sadly can't remember) used to blog about some of the new things AMD64 gave them from a RAS point of view, and it read nicely. What it turned into in the shipping software, I have no idea.
Summary: There's more to "mainframe-class RAS (reliability availability and serviceability)" than just The Chip Inside(tm).
Hampered by architecture
The RISC machines you all are talking about have a different architecture than the x86 platform, and that's where they gain their performance advantage. That's why you get an IBM iSeries, or a Sun or an HP box. Sure, RISC chips perform their instructions faster than CISC, but they have to perform more of them to get the same job done. So in raw processing power, as the cores scale up, the advantage of RISC goes away. x86 boxes are catching up, and fast. The only problem with all these cores is that the cores are slower.
All CPUs are getting RISC-like. Even x86 has a RISC-like architecture inside the chip. There is virtually no difference between CISC and RISC nowadays. They are all the same.
RISC is about ISA...
RISC is about the instruction set architecture. Although x64 has reduced the arcane incompatible memory models of past IA-32 operating environments, it still qualifies as CISC.
Some have attempted to use the term RISC for microarchitecture, but it is very uncommon. RISC is about simplified instruction sets and increased incidence of one-cycle-per-instruction programming.
CISC lovers have always indicated that this is a trade-off, because RISC might need more instructions to do the same thing as some CISC programs. It can be true.
Generally CISC makes the circuitry layout more difficult because there is more logic-circuitry in most CISC implementations. Since CISC and RISC both attempt to use on-chip cache to reduce memory latency, the overall transistor counts could end up being similar, though.