Any chance of one of those in a handheld?
Or are netbooks still illegal?
How does a server node including a processor, memory, and fabric interconnect that only consumes 5 watts under load grab you? How about 120 server nodes in a 2U chassis? ARM server chip startup Calxeda, formerly known as Smooth-Stone, is lifting the veil a bit on its future processors, which are in development now. Calxeda is …
Would have thought InfiniBand for sure. That's the way they did it at Newisys.
That's the way Newisys did what?
My memory may be failing me here, but didn't Bill Gates demo Windows Server on stage a few years ago, running on a 64-CPU (Opteron?) machine split out into a rack full of 3U boxes? I understood that was put together at Newisys using interconnects they designed. I only remember it because I thought it was pretty neat at the time.
Wonder how these compare to graphics-card type processors for the workloads that they are designed for?
GPGPUs will likely be a GPU (i.e. something good at large scale parallel floating point calculations) with some sort of CPU as a front door.
ARM: low power consumption, not hugely quick (surprisingly good, though), but easy to have a lot of them.
GPGPU: high power consumption, very quick for large sums (but not worth it unless you can exploit the parallelism of the GPU), and not likely to be as simple to have a lot of them.
Well suited to their target workloads, but those workloads are very different.
"How does a server node including a processor, memory, and fabric interconnect that only consumes 5 watts under load grab you?"
It doesn't. Absent performance metrics, power metrics are completely meaningless.
With the information given so far, any conclusion would be based more on assumptions than fact.
Tell me what it can do for 5W, then maybe I'll consider it.
>Tell me what it can do for 5W, then maybe I'll consider it.
Good point. I can tell you that 4W will get you a box that can act as a media server (TVersity and SqueezeCenter) and home mail server, but that's an Atom, not ARM. Perfectly adequate for what it spends most of its time doing, but it's a bit clunky when you're administering it. Still - it is running Windows 7 and only has 1GB of RAM. It also probably doesn't help that I activated disk compression when I installed it.
It's a Fit-PC2 and not aimed at the same market :)
Intel are behaving like an old matador who is gradually realising that they've used up their last trick, and that the new bulls (with ARM banners fluttering from their horns) are turning out to be unexpectedly hard to deceive.
Intel really are running out of time, and if they don't do something dramatic very quickly they might suddenly find themselves with a much reduced server market. Power consumption is rapidly becoming hugely important in the server world, and so far Atom doesn't appear to make the grade. ARM-based chips are clearly quite capable - the performance of mobile devices is ample demonstration of that - so why stick to x86?
I doubt it.
ARM chips are great, and I want a multi-core ARM chip in my next netbook; however, people will buy Intel because they are Intel.
For a substantial proportion of the last 10 years, AMD have had superior power consumption to equivalent Intel chips, but has that stopped the beancounters buying quad Intel boxes for their fileservers? Fraid not.
The world + dog is designed on x86. RISC runs apps and techno bling.
120 x 5 watt CPUs = 600 watts + cooling load + storage load + $1000 for a 2U case = 1200 watts or so + big bucks. Hardly a deal.
Eight cores of Opteron or Xeon in a 2U case, or a tower case, is far superior.
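The sums above can be checked quickly. A sketch in Python, using the figures from this thread - the cooling overhead and case price are the poster's guesses, not vendor numbers:

```python
# Back-of-envelope for the 120-node chassis.
nodes = 120
watts_per_node = 5                       # the article's "5 watts under load"

cpu_power = nodes * watts_per_node       # 600 W for the CPUs alone
cooling_overhead = 1.0                   # assume cooling etc. roughly doubles it
total_power = cpu_power * (1 + cooling_overhead)

print(cpu_power, "W CPUs,", total_power, "W total")  # 600 W CPUs, 1200.0 W total
```

Whether "1200 watts or so" is bad depends entirely on the performance those 120 nodes deliver, which is the point made elsewhere in the thread.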
I know, how about using 1000 hamsters on wheels to generate electricity! All it will cost is grain.
5W sounds good, but let's compare apples and apples. This is a uni-server - no virtualization - so it can run only one instance. A typical dual core Westmere can run 96 virtual machines, so we need to compare 96 of these ARM units with 1 Westmere.
96 times 5 W is 480 W, which is more than the Westmere board takes.
Price-wise, I can't see this at less than $125 in volume buys, which means that the box will run to around $12,000 to $14,000. The Westmere is around $3500 in volume buys. On top of the system cost, those 120 Gig-E or 10GE ports will need some switches, adding yet more cost.
So, the reality is the ARM approach means more power and more price than the mainstream alternative. That suggests the real play for the ARM unit may be low-end tablets and point-of-display systems, rather than HPC.
Each Calxeda chip is a quad-core A9, so you would only need 24 ARM chips in your example, i.e. just 120 W. Typical SoCs are $20-$40, so $125 is unreasonably high. More likely it would be around $60 per chip, so the CPU cost would be $1440. Aggregate performance of 96 ARM cores would be many times that of a dual-core Westmere, even more so as you avoid the overheads of virtualization. So even in your unlikely comparison there is at least an order of magnitude improvement in overall efficiency: half the cost, similar power, 5-10x the performance per virtual machine.
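Re-running the comparison with quad-core chips makes the numbers explicit. The $60 chip price and the one-VM-per-core assumption are this thread's estimates, not Calxeda's figures:

```python
# Corrected apples-to-apples comparison: 96 VM-equivalents on quad-core ARM SoCs.
vms_needed = 96
cores_per_chip = 4                       # Cortex-A9 quad-core
watts_per_chip = 5                       # the article's per-node figure
price_per_chip = 60                      # thread's estimate, not a vendor price

chips = vms_needed // cores_per_chip     # 24 chips
power = chips * watts_per_chip           # 120 W
cpu_cost = chips * price_per_chip        # $1440

print(chips, "chips,", power, "W,", "$", cpu_cost)  # 24 chips, 120 W, $ 1440
```

That is the basis for the "half the cost, similar power" claim against a roughly $3500 Westmere box.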
I agree with Wilco1, I have seen very similar silicon from another vendor and the estimates above tie in with that. The only unknown is the cost of the interconnect but if they do it right that need not cost the earth.
What are these ARM software containers that are mentioned in the article?
Firstly, you are assuming that everybody's workload consists of VMs. That is not universally the case. Plenty of workloads out there would fully saturate the CPU power of these A9s, and that's where the performance per watt is heavily in ARM's favour. Secondly, it would be odd if this proposed system relied on external switches. I anticipate something along the lines of VXS or OpenVPX, i.e. internal interconnect switching.
And on the topic of virtualisation, I suggest you mug up on the ARM A15, which does support virtualisation; it's only a matter of time before that gets OS support (if it's not already there). Then there will be nothing left for Intel to brag about except outright performance per thread. But the supercomputer boys seem to prefer AMD for that. And on that topic, I think AMD should license ARM cores sooner rather than later.
Last time I looked at HPC, which was admittedly a while ago, there was no place for hardware-level virtualization, but was plenty of scope for smart scheduling, load management, and the like.
This lack of hardware-supported virtualization is a problem here why, exactly?
Modern 'virtualization' is a mechanism to allegedly reduce the overhead caused by ill-behaved operating systems and their ill-behaved applications. One such problem is the "one app, one server" approach used by default in the typical IT department. In its attempt to remove the coexistence issues, it replaced them with issues of space and power, amongst other things.
Before Microsoft Windows made ill-behaved applications and operating systems a "feature" rather than a failure, virtualization meant something rather different, usually just "virtual memory" and things closely related thereto, because most applications were sufficiently well behaved as to co-exist on one box. One box which could not only cope with multiple concurrent applications, it could cope with multiple concurrent users too. Fancy that!
Tell that to the certified Microsoft dependent youngsters of today, and they'll think you're kidding.
Also, in the article, TPM says: "Because the Cortex-A9 is only a 32-bit processor, the Calxeda server nodes will top out at 4 GB of main memory per node."
Which is of course rubbish. The PDP-11 (a 16-bit processor) frequently had up to 4MB of main memory (a 22-bit physical address, i.e. rather more than the 64KB TPM's logic would imply). Some pre-AMD64 Xeons could, via PAE, cope with more than the 4GB of main memory that TPM says is the maximum a 32-bit processor can support. Etc.
Not sure it matters, but there you are.
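The address-space arithmetic behind that point is simple enough to write down. The 36-bit figure is PAE's physical address width on those Xeons; the per-process cap is the 32-bit virtual address, which is a separate limit:

```python
# Addressable memory for the address widths mentioned above.
def addressable_bytes(bits):
    """Bytes reachable with an address of the given width."""
    return 2 ** bits

assert addressable_bytes(16) == 64 * 1024         # 64 KB: naive 16-bit limit
assert addressable_bytes(22) == 4 * 1024 ** 2     # 4 MB: PDP-11 physical bus
assert addressable_bytes(32) == 4 * 1024 ** 3     # 4 GB: 32-bit virtual limit
assert addressable_bytes(36) == 64 * 1024 ** 3    # 64 GB: PAE physical limit
```

So "32-bit processor" caps a single process at 4GB of virtual address space, but says nothing definitive about how much physical memory the machine can carry.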