HP Project Moonshot hurls ARM servers into the heavens

Hewlett-Packard might have been wrestling with a lot of issues as CEOs and their strategies come and go, but the company's server gurus know a potentially explosive business opportunity when they see it. That is why HP has put together a new hyperscale business unit inside of its Enterprise Server, Storage, and Networking …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Anonymous Coward

    I remember....

    When Fortune 500 companies used a beige box and an ISDN line for their web serving and email needs. Tech support was via tin cans and a bit of string. Those were the days.

    1. Matt Bryant Silver badge
      Go

      RE: I remember....

      "Bit of string? You were lucky! We had to make our own string out of our own hair! And we didn't have cans, no, cans were for fancy companies. We had to staple our left hands into cup shapes and attach the human hair string to a toothpick shoved under the fingernail of your index finger. We had to get up for work three hours before we went to bed, work twenty-five hours a day on the helpline, and come payday our boss would chop us up into little pieces!"

      "Aye, those were the days!"

      /apologies to Palin, Chapman, Jones and Idle.

  2. All names Taken

    Well done HP

    Maybe the experience will put them in the lead position for non-Intel and non-AMD (and non-IBM?) working methods?

    1. bazza Silver badge
      Thumb Up

      @All Names Taken

      Well done HP indeed. I think that the power consumption figures quoted in the article will certainly attract a lot of interest. There would appear to be a lot of real world applications out there (web serving) that would benefit handsomely.

      I'm beginning to wonder whether HP have rediscovered their R&D mojo. Their enthusiasm for ARM could really pay off big time. They are also putting a lot of effort into memristor memory technologies, through which they could easily end up owning what are currently the DRAM, HDD and flash markets. These are two very big bets indeed, and the payoffs would be truly monumental.

      All the other tech companies out there should really be paying attention. HP are technologically very close to stealing large fractions of their businesses from underneath their noses. Some of them need to get on the ARM bandwagon very quickly. For others I think it is too late - Hynix have already done some sort of deal with HP.

  3. Matt Bryant Silver badge
    Meh

    Coming to a webfarm near you soon!

    "......"There are a lot of customers that I have talked to who think 32-bit is just fine," says Santeler. The chips will probably be good at web serving, web caching, and big data chewing workloads where processing data in smaller bits is the norm, not the exception......"

    In other words, as long as the product sits in the same group as the x64 Proliant servers, it will always be held back just enough to make sure it doesn't impinge on Xeon/Opteron Proliant sales.

  4. Jamest NSW
    Happy

    Life in the old dog yet

    It's great to see HP do something that looks like engineering, not just marketing and badge shifting.

    More power to the engineers on this, and to the bosses who let 'em go there.

  5. Matt Bryant Silver badge
    Meh

    Hmmmmm.....

    OK, a thumbs up to HP for being the tier-one vendor first out of the gate with a product, but there are some points which make me go "meh":

    1. Big issue - not 64-bit. I just don't have much 32-bit requirement outside virtual desktops, and seeing as that's going VDI it won't go ARM. I assume HP were worried about a 64-bit ARM version cannibalising some low-end Proliants, but other vendors without a Xeon range won't be so bothered by going straight to 64-bit versions.

    2. Why the SL chassis, why not a real blade chassis like the C7000?

    3. If I lose a CPU I assume I have to replace a whole 4-CPU card, so I also assume I lose four running images (if I'm running one image per CPU, it could be more) to fix a problem with one.

    Sorry, I'll wait for the Atom variant or the 64-bit ARM one, thanks.

    1. Adam 73
      Thumb Up

      Re:Hmmmmm.....

      These are not aimed at traditional DC/Enterprise workloads.

      They are aimed at true hyperscale customers (read: 100,000-plus servers) - the likes of Facebook or LinkedIn. They perform HUGE amounts of web-facing tasks (hence the mention and drive in this space) and you have to completely rethink your architectures (application design, storage, through networking and now the server). All the intelligence is in the software - read MASSIVE parallelisation - so lots and lots of low-power, low-performance nodes are perfect here, thanks!

      1. See my first point - maybe an issue when you need to access lots of memory, but web servers and scale-out apps don't (the scale-out gives you access to a large resource, more than you will ever get with scale-up), so 32-bit is perfect here, thanks! (Seeing as most web apps are still 32-bit anyway!) These are NOT aimed at traditional workloads.

      2. Why not? Cheap and cheerful - all that's needed is a bit of metal to let me rack it. I don't need fancy management or clever interconnects, just a bit of shared power and cooling (don't really need full HA in these subsystems either) and the density. Traditional blade chassis are aimed at solving a different problem (and are very good at that).

      3. Why is that a problem? To be hyperscale you need to run 100,000-plus servers - do you really think losing 4 is going to be a problem? You will also note that you can't hot-swap boards either; when you want to swap them (simply wait till over half of them have failed), just take the whole tray out (72 servers), slap in a fresh one and chuck the other in the bin.

      Also, you will notice these don't have storage either - they are truly stateless, i.e. my OS is pulled from an image server on boot and run in memory. I need lots and lots of the same thing, so why stick it on local disk for every server? (It also makes tasks like reimaging a complete doddle: change the image on the master server, then just reboot whole racks!)
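
      For a flavour of that reimage-by-reboot workflow, here is a minimal sketch. Everything in it is illustrative - the image paths, the rack inventory and the power_cycle() helper are hypothetical stand-ins, not HP or Moonshot tooling; a real deployment would sit on PXE/iPXE plus whatever out-of-band power control the chassis actually exposes.

      # Sketch of "change the image on the master server, then reboot whole racks".
      # All names and paths here are made up for illustration.
      from pathlib import Path
      import shutil

      IMAGE_STORE = Path("images/webtier.img")   # what stateless nodes pull at network boot

      def publish_image(new_image: Path) -> None:
          """Swap the master image that every node will pull on its next boot."""
          IMAGE_STORE.parent.mkdir(parents=True, exist_ok=True)
          shutil.copyfile(new_image, IMAGE_STORE)

      def power_cycle(node: str) -> None:
          """Stand-in for whatever remote power control the kit actually offers."""
          print(f"power-cycling {node}")

      def reimage_rack(nodes: list[str], new_image: Path) -> None:
          publish_image(new_image)      # one change on the master server...
          for node in nodes:            # ...then reboot; nodes hold no local state to migrate
              power_cycle(node)         # next boot pulls the fresh image straight into RAM

      if __name__ == "__main__":
          Path("webtier-v2.img").write_bytes(b"demo-only image payload")
          reimage_rack([f"rack01-node{i:02d}" for i in range(72)], Path("webtier-v2.img"))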

      These hyperscale customers shunned enterprise-type architectures (hence no blades) as there is simply too much control and intelligence in them (which adds cost and complexity), and quite frankly things like separate storage arrays just DO NOT scale big enough. All of that is in their software layers. This is why the likes of Dell DCS and the HP SL line were initially brought out: the benefits of shared power and cooling (not quite as efficient as blades, but good enough) and the density, but no excess "crap". However, the weakness was always the processor - too good for what is needed.

      These things draw about 3 W a core; even the best Intel Atom draws about 15 W! (Now scale this across tens of thousands of servers and see why people are getting excited!)
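
      To put rough numbers on that, here is a back-of-the-envelope scale-up of the two figures quoted above, taken at face value as per-unit draws; the 50,000-unit fleet size is my own assumption, and cooling overhead is ignored.

      # Rough scale-up of the quoted per-unit power figures. Purely illustrative.
      ARM_WATTS = 3.0        # ~3 W per ARM core, as quoted above
      ATOM_WATTS = 15.0      # ~15 W per Atom, as quoted above
      FLEET = 50_000         # assumption: "tens of thousands" of units

      arm_kw = ARM_WATTS * FLEET / 1_000
      atom_kw = ATOM_WATTS * FLEET / 1_000

      print(f"ARM:    {arm_kw:,.0f} kW")             # 150 kW
      print(f"Atom:   {atom_kw:,.0f} kW")            # 750 kW
      print(f"Saving: {atom_kw - arm_kw:,.0f} kW")   # 600 kW, before cooling overhead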

      Think of it this way: we only introduced virtualisation (hypervisors, that is) because CPUs have got too powerful and we needed a way of carving them up. We initially started using virtualisation in a big way on web front ends, where the problem first started to appear. All we did was add cost and complexity to mask the real problem (which begs the question: if we can perform much more granular hardware partitioning, ultimately why stick a software layer in the way?)

      So why not fix the actual problem? I'm not suggesting these will replace computing as we know it, but it's a damn fine way of solving a large percentage of the workload in the simplest way possible!

      All that was required was for someone to think a little bit outside the box...

  6. Steve McPolin

    Moonshot?

    How about Iceberg? Runs cool, and sinks Itanics. Marketing folk, sheesh...
