HP busts out new ProLiant rack mount based on Intel's new top o' line server chippery

HP has released a new member of its "scale-up" line of rack-mount servers, the ProLiant DL580 Gen8, powered by Intel's new Xeon E7 v2 server chips, and said that "enhancements" for the ProLiant DL560 and BL660c Gen8 scale-up x86 servers will be revealed next month. HP ProLiant DL580 Generation 8 (Gen8) rack-mounted 'scale-up' …


This topic is closed for new posts.
  1. Anonymous Coward

    "But what separates HP from everybody else in this room is that we tend to innovate on top of industry architectures."

    Explain Itanium then.

    Also, more memory by adding more modules doesn't help with memory-intensive applications, as HP is still bound by the memory controller on the CPU. You can double, triple, or quadruple the number of DIMMs attached to the controller to increase the amount of RAM, but you still have the same memory bus speed.
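    The capacity-versus-bandwidth point is easy to sketch with back-of-the-envelope arithmetic. The figures below (DDR3-1600, four channels per socket, three DIMMs per channel) are generic illustrative assumptions, not HP's published spec:

```python
# Adding DIMMs raises capacity, not per-channel bandwidth.
# All figures are illustrative DDR3-1600 assumptions, not HP's spec.
dimm_size_gb = 32
channels_per_cpu = 4
dimms_per_channel = 3
transfer_rate_mts = 1600        # effective transfers per second (MT/s)
bytes_per_transfer = 8          # 64-bit-wide channel

capacity_gb = dimm_size_gb * channels_per_cpu * dimms_per_channel
bandwidth_gbs = channels_per_cpu * transfer_rate_mts * bytes_per_transfer / 1000

print(capacity_gb)     # 384 GB per socket
print(bandwidth_gbs)   # 51.2 GB/s per socket, whatever the DIMM count
```

    Doubling dimms_per_channel doubles capacity_gb but leaves bandwidth_gbs untouched, which is the commenter's point.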

    1. Matt Bryant Silver badge

      Re: AC

      ".....Explain Itanium then....." You do realise that, at launch, there were many Itanium server vendors, including IBM (who sold 10,000+ units to their customers despite the IBM salesgrunts pushing mainframes and their Power-AIX in preference) and Fujitsu, yet HP very quickly built up a 90% share of the Itanium market because they did it BETTER than IBM or Fujitsu. As part of Intel's strategy of attacking high-end computing from below with x86 and from above with Itanium, Itanium killed competing designs such as MIPS, Alpha, PA-RISC and UltraSPARC. How long IBM continues with Power is very doubtful - where are the public roadmaps with actual commitments? Itanium may be reaching its end too, but it did what Intel wanted - it got them taken seriously in the high-end, mission-critical space.

      "....more memory by adding more modules doesn't help with memory intensive applications as HP is still bound by the memory controller on the CPU...." So you missed the bit about the actual bandwidth available on each memory controller?

    2. Anonymous Coward

      Bye bye, last of the legacy midrange UNIX / Linux systems... This should allow us to replace our last remaining holdouts with Wintel...

  2. Paul J Turner

    Obviously not

    "We'll do the math for you: with 32GB sticks, eight of these modules contain 96 DIMM slots adding up to 3TB of memory; 64GB sticks double that to 64TB (obviously)."

    1. EvanPyle

      Re: Obviously not

      32GB sticks give you 2.51TB of RAM and 64GB sticks give you 5.03TB of RAM - did someone trust a vendor's numbers?

      Also, I think there's some exponential doubling going on there :)

      1. DavidRa

        Re: Obviously not

        Evan ... I think your calculator is broken.

        32 x 96 = 3072 (if you prefer, 2^5 x (3 x 2^5) = 3 x 2^10 = 3 x 1024 = 3072)

        That's 3TiB in my book.
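        DavidRa's arithmetic checks out in a couple of lines (GiB and TiB throughout, i.e. powers of two):

```python
dimms = 96
tib_32 = dimms * 32 / 1024   # TiB with 32 GB sticks
tib_64 = dimms * 64 / 1024   # TiB with 64 GB sticks
print(tib_32, tib_64)        # 3.0 6.0
```

        So 32GB sticks give 3TiB, and 64GB sticks double that to 6TiB - not 64TB.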

    2. diodesign (Written by Reg staff) Silver badge

      Re: Obviously not

      Arghghg - that was my fault :-( Slip of the keys. It's been fixed. Please - next time, email corrections@thereg so these can be fixed asap.


      1. Captain Scarlet Silver badge

        Re: Obviously not

        But but I want a machine with 64TB of memory :(

        1. Anonymous Coward

          Re: Obviously not

          does it really have to be a single machine?

          1. Nigel 11

            Re: Obviously not

            There's at least one problem class where all-local RAM helps: big sparse matrix calculations, as often encountered in engineering modelling. Wonder what HP *does* charge for maximum RAM? We once got a quote for an HP system that could support 1TB RAM, but the price for that configuration was so exorbitant that our scientists went for two commodity HPC servers maxed out with 0.5TB RAM each ... and quite a lot of change.

            1. JLH

              Re: Obviously not

              I agree with Nigel 11. There are problems where you need large amounts of RAM - as you say, in engineering simulations where you have very fine meshes. Or in bioinformatics.

          2. MadMike

            Re: Obviously not

            "....does it really have to be a single machine?...."


            But this ScaleMP is not a single machine, it is a cluster! :) It runs a software hypervisor that tricks Linux into believing it is a single SMP server - which it is not. Read my other lengthy post below for more details.

        2. JLH

          Re: Obviously not

          " But but I want a machine with 64TB of memory :( "

          Buy an SGI Ultraviolet. Simples.

          Seriously - you can spec one of these with 64 Tbytes of memory.

          1. MadMike

            Re: Obviously not

            "...Buy an SGI Ultraviolet. Simples.


            Seriously - you can spec one of these with 64 Tbytes of memory...."

            Well, the SGI UV2000 and its predecessor Altix servers with 64TB RAM are clusters, and they are only fit for HPC number-crunching, embarrassingly parallel workloads. Even SGI says so. There is another Linux server as big as the SGI UV2000: ScaleMP sells a Linux server with gobs of RAM and 10,000s of cores - and guess what? It is ALSO a cluster. All servers bigger than 32 or 64 sockets are clusters (sure, they run a single-image Linux kernel, but they are clusters, yes).

            If you are going to do enterprise workloads (large databases, ERP systems, etc), then you need to go to SMP-like servers, and the largest SMP-like servers have 32 sockets (IBM P795, HP Itanium Superdome/Integrity, SPARC M6-32) or even 64 sockets (Fujitsu M10-4S).

            HPC number-crunching typically runs a tight for-loop in the cache on each node, so the nodes do not communicate much - great work for clusters such as the SGI or ScaleMP servers. Enterprise workloads have code that branches everywhere, so you need to visit all the code everywhere, and every CPU communicates with every other all the time - for that type of workload you need SMP-like servers (IBM P795, Oracle SPARC M6-32, etc). There are clustered databases running on clusters, but they cannot replace an SMP database. A cluster can never replace an SMP server.

            So the largest Linux servers are ordinary 8-socket x86 SMP-like servers, plus the SGI and ScaleMP clusters with 10,000s of cores and tens of TB. There are no 32-socket Linux servers on the market, and there never have been. Linux scaling is inefficient beyond 8 sockets. Sure, you can compile Linux for 32-socket Unix servers - with terrible results. Google "Big Tux" and read how Linux managed ~40% CPU utilisation on the 64-socket HP Integrity/Superdome server - that is really bad (I always mix up HP Superdome and Integrity). And besides, SMP servers are very, very expensive - who would run Linux on one? For instance, the 32-socket IBM P595 used for the old TPC-C record cost $35 million (no typo). Who on earth would run Linux on a big SMP server, when Linux does not scale well beyond 8 sockets in the first place? It is better to buy a Linux server with 10,000s of cores for a fraction of the price of a large SMP server. Linux servers with 10,000s of cores are very cheap; 16/32-socket SMP servers (Unix / mainframes) are extremely expensive by comparison. A cluster costs you roughly the price of each node plus a fast switch. An SMP server needs a lot of tailor-made technology to scale to 16/32 sockets, and that costs a great deal, because ordinary CPUs do not scale.

            So, there is a reason you will never see Linux take the high-end enterprise market share: it is dominated by large SMP servers running enterprise business workloads (Unix / mainframes), and unless Linux scales beyond 8 sockets, there is no chance in hell Linux will venture into the high-end enterprise segment - where the really big money is. Otherwise, all the Wall Street banks would buy cheap Linux clusters with 10,000s of cores to replace their extremely expensive 16/32-socket Unix / mainframe servers.


            SGI and ScaleMP say their largest servers are clusters (that is, they are only used for HPC number-crunching, and cannot do SMP enterprise workloads):


            "...The success of Altix systems in the high performance computing market are a very positive sign for both Linux and Itanium. Clearly, the popularity of large processor count Altix systems dispels any notions of whether Linux is a scalable OS for scientific applications. Linux is quite popular for HPC and will continue to remain so in the future, ... However, scientific applications (HPC) have very different operating characteristics from commercial applications (SMP). Typically, much of the work in scientific code is done inside loops, whereas commercial applications, such as database or ERP software are far more branch intensive. This makes the memory hierarchy more important, particularly the latency to main memory. Whether Linux can scale well with a SMP workload is an open question. However, there is no doubt that with each passing month, the scalability in such environments will improve. Unfortunately, SGI has no plans to move into this SMP market, at this point in time..."

            ScaleMP's large Linux server is also only used for HPC:


            "...Since its founding in 2003, ScaleMP has tried a different approach. Instead of using special ASICs and interconnection protocols to lash together multiple server modes together into a SMP shared memory system, ScaleMP cooked up a special software hypervisor layer, called vSMP, that rides atop the x64 processors, memory controllers, and I/O controllers in multiple server nodes .... vSMP takes multiple physical servers and – using InfiniBand as a backplane interconnect – makes them look like a giant virtual SMP server with a shared memory space. vSMP has its limits....The vSMP hypervisor that glues systems together is not for every workload, but on workloads where there is a lot of message passing between server nodes – financial modeling, supercomputing, data analytics, and similar parallel workloads. Shai Fultheim, the company's founder and chief executive officer, says ScaleMP has over 300 customers now. "We focused on HPC as the low-hanging fruit..."

            A developer in the comments explains:

            "...I tried running a nicely parallel shared memory workload (75% efficiency on 24 cores in a 4 socket opteron box) on a 64 core ScaleMP box with 8 2-socket boards linked by infiniband. Result: horrible. It might look like a shared memory, but access to off-board bits has huge latency...."
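            The "huge latency" effect that developer describes can be modelled crudely. The latency figures below are invented round numbers for illustration, not measurements of any real ScaleMP box:

```python
# Crude model: average memory-access cost when some fraction of
# accesses leave the board. All latencies are invented round numbers.
LOCAL_NS = 100     # assumed local DRAM access
REMOTE_NS = 2000   # assumed off-board access over the interconnect

def slowdown(remote_fraction):
    """Average access cost relative to an all-local workload."""
    avg = (1 - remote_fraction) * LOCAL_NS + remote_fraction * REMOTE_NS
    return avg / LOCAL_NS

print(slowdown(0.0))   # 1.0  - all accesses local
print(slowdown(0.5))   # 10.5 - half the accesses off-board
```

            Even a modest off-board fraction dominates the average access time, which is consistent with the "horrible" result reported above.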


            If you really need a true single 32TB RAM server which is not a cluster, you need to go to the Oracle SPARC M6-32, which is the only SMP-like server on the market with that much RAM. Running databases from memory will be extremely fast. (The in-memory database HANA is a cluster and does not count; it cannot do SMP workloads, as the nodes have to communicate with each other.) Also, the new SMP-like Fujitsu Solaris M10-4S server with 64 sockets has 32TB today and will have 64TB RAM with the new memory sticks. I think the largest IBM mainframe has 3.5TB RAM? And the largest IBM P795 with 32 sockets has 4 (or is it 8) TB RAM? And the largest HP Superdome/Integrity has 2, or is it 4, TB RAM? Matt Bryant, can you please fill us in?

    3. Anonymous Coward

      Re: Obviously not

      It's "Maths." As in short for mathematicS.

  3. seven of five

    "a "simple thing" – a light-up "do not remove" indicator – that warns admins not to yank out a drive while it is reading, writing, or being rebuilt,"

    Like an activity LED? Now that's smart, why hasn't somebody else... oh, wait.

    This kind of error usually occurs when the system owner skimps on human resources (either through lack of sleep or lack of education).

  4. Anonymous Coward


    With all that "innovation" it will probably require a string of firmware updates just after it goes out of warranty.

  5. Matt Bryant Silver badge

    Shame about the nasty front cover.

    I would have much preferred a pic of the new DL580 without the nasty gate on the front. I really don't see why HP and Dell persist with them - don't they know racks have doors?

    1. Captain Scarlet Silver badge

      Re: Shame about the nasty front cover.

      Yeah they should make it so the server looks like a Ninja!

      Ninjas make everything cool

      1. seven of five

        Re: Shame about the nasty front cover.

        Judging by your username, I'd have expected you to suggest pirates.

        1. Matt Bryant Silver badge

          Re: seven of five Re: Shame about the nasty front cover.

          "..... I'd have expected you to suggest pirates." Most CIOs get a bit worried and start muttering about software audits when they hear about pirates in their DCs. Best to keep them blithely unaware.

        2. Admiral Grace Hopper

          @seven of five

          "I'd have expected you to suggest pirates."

          Piracy? No.

          Indestructibility? Yes!

  6. apleszko

    What about DL980 Gen8 and x86 Superdome??

    It seems HP has delayed development of 8+ socket systems... maybe due to problems with the node controller...

  7. MrSeaneyC

    True innovation!

    Putting a button on the DIMM modules to show which DIMM is bad when you press it... Just like the light path diagnostics IBM has been (or rather, was) using on its scale-up xSeries boxes for years! And no doubt other manufacturers too.

