What, are they not also supporting the Itanium processor here?
'Til heftier engines come aboard, HP Moonshot only about clouds
The HP Moonshot hyperscale servers are not even fully launched, and Intel and Calxeda are already bickering about whose server node is going to be bigger and better when they both ship improved processors for the Moonshot chassis later this year. Other engines will be coming for the Moonshot machines, too, HP execs tell El Reg, …
-
Tuesday 9th April 2013 09:07 GMT J.Kleen
Moonshot 1500 dimensions
Although the Moonshot 1500 consumes 4.3U of rack space (for just 180 servers), the rack rails delivered with its chassis are the same as those used in the just-announced HP SL4540 Gen8 (http://www.hp.com/servers/sl4540). The rails are flexible in design and placement, so installing three of these Moonshot 1500s in a rack consumes ONLY 13U, or 39U for just 1620 servers from nine chassis. That leaves 3U of a 42U rack still available for ToR switches, IF they are required at all.
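As a quick sanity check, the rack arithmetic above can be sketched as follows (the chassis height and server count are the figures quoted in this comment, not official HP specifications):

```python
import math

# Figures quoted in the comment above (not official HP specs).
CHASSIS_U = 4.3            # Moonshot 1500 height in rack units
SERVERS_PER_CHASSIS = 180  # cartridge servers per chassis

def rack_fit(n_chassis):
    """Return (rack units consumed, total servers) for n chassis,
    rounding the height up to whole rack units."""
    return math.ceil(n_chassis * CHASSIS_U), n_chassis * SERVERS_PER_CHASSIS

print(rack_fit(3))  # (13, 540)  -> 13U for three chassis
print(rack_fit(9))  # (39, 1620) -> 39U for nine, leaving 3U spare in a 42U rack
```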
-
Tuesday 9th April 2013 09:50 GMT Anonymous Coward
"You can come out with something at the speed of need"
...but ye canna change the laws of physics.
That is, if you stick 450 Xeons in that much space, you're going to blow the power and heat limits of a standard rack out of the sky.
Anyway, what happened to SeaMicro, who were doing all this 2-3 years ago?
http://www.theregister.co.uk/2010/06/14/seamicro_sm10000_server/
http://www.theregister.co.uk/2011/02/28/seamicro_atom_server_64bit_upgrade/
Looks like they were bought by AMD but are selling boxes with Intel processors?
http://www.seamicro.com/SM10000
-
Tuesday 9th April 2013 18:33 GMT Anonymous Coward
Re: "what happened to Seamicro?"
SM10000-64HD puts 768 Atom cores in 10U (76.8 per U).
Moonshot puts 90 of them in 4.3U (20.9 per U).
SeaMicro is roughly 3.7x as dense, but not as adaptable to new processors as the HP 1500 chassis. HP would appear to have sacrificed density for flexibility. It's not clear the market will reward them for that.
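The density comparison can be reproduced directly (core counts and chassis heights are as quoted in this thread; the ratio actually works out nearer 3.7x than 3.6x):

```python
# Core counts and chassis heights as quoted in this thread.
seamicro_cores, seamicro_u = 768, 10
moonshot_cores, moonshot_u = 90, 4.3

seamicro_density = seamicro_cores / seamicro_u  # 76.8 cores per U
moonshot_density = moonshot_cores / moonshot_u  # ~20.9 cores per U

ratio = seamicro_density / moonshot_density
print(round(ratio, 1))  # ~3.7
```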
-
-
Tuesday 9th April 2013 10:15 GMT Mad Mike
Power v Performance
As a general rule, for any given generation of chips, power consumed is in direct proportion to the processing power delivered. Yes, an Atom uses less power, but then it delivers a lot less processing performance as well. So, at best, using less capable chips is simply removing the need to have hypervisors to partition up bigger processors to ensure efficient loading.
All this stuff talks about x (generally hundreds) of servers etc., but each is substantially less powerful than Xeon and Opteron x86 chips. So, are you really getting more processing power in a given space, or simply cutting it up without the need for a hypervisor? I don't really see the point of this for datacentre processing, as cutting up bigger servers with hypervisors allows much more processing power and workload to be supported than trying to use smaller servers. The flexibility the hypervisor gives allows different workloads to use the same processor at different times.
It all rather looks like an attempt to fix a problem that has already been fixed. As people have said, what happened to Seamicro?
Now, for laptops and other areas where power consumption can be traded down along with processing performance, this sort of thing makes sense, but that simply doesn't apply in the datacentre. Do HP really think they've come up with something no other manufacturer has thought of, or beaten them to the market?
-
Tuesday 9th April 2013 10:41 GMT Alan Brown
Re: Power v Performance
Usually power requirements scale up much faster than performance, so you might more accurately say an Atom delivers less processing performance at a lot less power.
Not that x86 has ever been particularly power efficient in any iteration.
As for datacentre use: virtualisation immediately costs between 10 and 30 per cent of available system resources before you even start to run anything on it, and there are plenty of reasons for wanting to run a bunch of low-power machines rather than virtualise 'em in one high-power box (such as being able to shut 'em down entirely to save even more power). Even so, HP seems to have developed a solution looking for a problem.
-
Tuesday 9th April 2013 10:59 GMT Mad Mike
Re: Power v Performance
I agree that power drain goes up faster than processing power, but the latest power efficient Opterons and Xeons are pretty low power and still pack a much bigger punch. If you go for the special editions, you'll always pay well over the odds in both power and money.
Virtualisation shouldn't cost 10-30%. Yes, x86 virtualisation is more expensive than, say, PowerVM, but you should really be able to get it down to 10% at most. It all depends on configuration and how much care you take. The perceived cheapness of x86 systems often results in companies just deploying another server rather than running a more efficient deployment model.
I agree there are reasons for running small low-power servers rather than virtualising, but this is a small segment again. Even with virtualisation and current 'motion' technology, you can easily consolidate everything onto a smaller number of x86 servers, shut the rest down, and bring them back up when required. Again, it depends on the deployment model and taking the time to do it right.
I do totally agree though. HP have created a solution to at best some niche issues.
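A toy model (my own numbers, not either commenter's) shows why the overhead figure matters to this argument: at a 10 per cent hypervisor tax, one big box delivering 100 units of raw performance only matches ten bare-metal microservers delivering 9 units each.

```python
def effective_capacity(n_hosts, perf_per_host, overhead):
    """Usable performance after a fractional virtualisation overhead
    (0.0 = bare metal, 0.1 = a 10% hypervisor tax)."""
    return n_hosts * perf_per_host * (1.0 - overhead)

big_box = effective_capacity(1, 100.0, 0.10)     # one virtualised Xeon box
microservers = effective_capacity(10, 9.0, 0.0)  # ten bare-metal Atom nodes

print(big_box, microservers)  # both ~90: the overhead erases the raw-speed gap
```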
-
-
Wednesday 10th April 2013 00:42 GMT Wilco 1
Re: Power v Performance
CPU power is quadratic with performance due to voltage/frequency scaling. Assuming all else is equal, it also holds when you compare high-end CPUs with lower-end ones: big, fast, out-of-order CPUs scale non-linearly with increased complexity and need leaky transistors to achieve their high frequencies. However, it is not necessarily true when comparing different microarchitectures, implementation quality, processes or ISAs (e.g. Centerton doesn't look good compared to Calxeda).
In general, if you have a parallelisable problem, using several slower cores will be more power efficient than one fast core. You can't use too many slow cores, though, as the overhead of DRAM, communication etc. will eat away the efficiency gained. So the trick is to find the sweet spot where the amount of work done per watt is maximised. My gut feeling is that this is just the start; the next generation of microservers in 2014 will be appealing enough for a wide market.
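The sweet-spot argument can be illustrated with a toy model (the quadratic core-power curve and the fixed per-node overhead below are assumptions for illustration, not measurements of any real chip):

```python
# Toy model: performance scales with frequency, core power scales with
# frequency squared (the quadratic relationship described above), and
# every node pays a fixed power overhead for DRAM, NICs and fabric.
FIXED_OVERHEAD_W = 2.0

def perf_per_watt(freq):
    """Work done per watt at a given (arbitrary-unit) clock speed."""
    return freq / (freq ** 2 + FIXED_OVERHEAD_W)

# Scan clock speeds for the efficiency peak.
best_eff, best_freq = max((perf_per_watt(f / 10), f / 10) for f in range(5, 40))
print(best_freq)  # the peak sits mid-range, not at the slowest possible clock
```

Note that the peak is interior: the fixed per-node overhead is exactly why "too many slow cores" loses, as the comment says.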
-