ARMs in servers and x86 in phones. What is the world coming to?
Calxeda, the ARM server-chip upstart that HP tapped for its "Redstone" hyperscale servers last November, is getting ready to ramp up production on the server cards that use its quad-core EnergyCore ARM processors, and is making waves with benchmarks while promising to do a better job with comparative testing against x86 …
Could be an end.
Could be the beginning.
Aren't these modules supposed to be packed into an ultra-dense chassis?
If so, then Shirley having SATA ports positioned where they are on the module would give rise to cabling nightmares. They should have pushed them all to the edge of the module, or offloaded them entirely onto a separate backplane.
Or is it just me being ultra dense?
Can it run x264 on ARM at a decent speed? ;)
Dell are idiots for using Marvell's Armada XP 78460 just to be different, as all the Armada XP parts are missing the generic Cortex NEON SIMD and only have the lesser mobile MMX on board. Given that they are using these as server chips, you want as much SIMD optimization as you can get in your generic ARM Linux server install, and only the Linaro members are doing that NEON SIMD optimization across their whole toolchain and apps. To restate it simply: "No NEON on board, no good" for running your apps at their full potential speed.
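For what it's worth, whether a given ARM chip exposes NEON to Linux shows up in the `Features` line of `/proc/cpuinfo`. A minimal sketch of checking for it (the example `Features` strings below are abbreviated and illustrative, not captured from real hardware):

```python
# Check a /proc/cpuinfo dump for the NEON feature flag on an ARM Linux box.
def has_neon(cpuinfo_text):
    """Return True if the Features line lists the 'neon' flag."""
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("features"):
            return "neon" in line.lower().split(":", 1)[1].split()
    return False

# Illustrative Features lines: a Cortex-A9 with NEON vs. a Marvell
# core advertising iWMMXt ("mobile MMX") instead.
a9 = "Features\t: swp half thumb fastmult vfp edsp neon vfpv3"
armada = "Features\t: swp half thumb fastmult edsp iwmmxt"
print(has_neon(a9))      # True
print(has_neon(armada))  # False
```

On a live box you would feed it `open("/proc/cpuinfo").read()` instead of a literal string.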
It's also not really clear yet whether the HP re-engineered Cortex version is a real Cortex-A9 core with the generic A9 NEON SIMD on board, or something else, like having the newer and supposedly better A15 NEON SIMD hard IP integrated into it as an initial showcase server SoC. The 10Gb/s internal fabric seems very nice, though, and I assume reasonable RAM speed.
Karl Freund, will you please clarify these NEON SIMD and other points?
Of course, I find it a real shame: if they are going to make an ARM "Cortex" for the 2012/13 server space, then why the hell didn't they also use the initial foundry A15/NEON hard core with real super-fast "Wide IO" RAM as standard from the start, given that they were going to re-engineer and add the 10Gb/s fabric etc. anyway to make their Cortex version?
It's not like they aren't already working directly with ARM and the foundries that have already made these, or can't get access to the new super-fast "Wide IO" RAM and controller-on-SoC block for this combined server SoC use, in limited supplies until the real ramp-up.
Gigabit Ethernet link saturated at 6950 reqs/s? You know, even a Fast Ethernet (100 Mbit/s) link can deliver up to 148,000 reqs/s IIRC, and sometimes even more (with some non-fully-compliant implementations).
148,809 reqs/s max using 64-byte frames for a fully compliant Fast Ethernet implementation. If the interframe gap is omitted, then it's possible to get up to 173,611 reqs/s. For Gigabit Ethernet, just multiply these figures by 10.
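Those figures fall straight out of the on-wire cost of a minimum-size frame: 64 bytes of frame plus 8 bytes of preamble/SFD plus a 12-byte interframe gap. A quick sketch of the arithmetic:

```python
# Max frame rate for minimum-size (64-byte) Ethernet frames.
# On the wire each frame costs: frame + 8 B preamble/SFD + 12 B interframe gap.
def max_fps(link_bps, frame_bytes=64, preamble=8, ifg=12):
    """Frames per second given the full on-wire cost of each frame."""
    bits_per_frame = (frame_bytes + preamble + ifg) * 8
    return link_bps // bits_per_frame

FAST_ETH = 100_000_000   # 100 Mbit/s
print(max_fps(FAST_ETH))              # 148809  (compliant, with IFG)
print(max_fps(FAST_ETH, ifg=0))       # 173611  (interframe gap omitted)
print(max_fps(FAST_ETH * 10))         # 1488095 (Gigabit Ethernet)
```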
It is written that 1,000,000 requests of 16 KB each are made. So that's 16,000,000 KB, or 16 GB, which is a lot for a 1 Gbit/s link.
Xeons do not run at low loads any more
"The basic assumption in those numbers is that the x86 is not running at anywhere near peak, as is the case in most data centers of the world most of the time."
Another fallacy; virtualization is at the 60%+ level today and climbing. Almost-idle Xeons are becoming the exception, not the rule.
Won't an Intel X540-AT2 do the trick?
Doesn't putting a 2-port 10Gb Intel X540-AT2 card into a Xeon system for $120 do the trick? It would boost throughput at least somewhat, and keeping server prices in mind, the card is not that expensive, I guess?
And Calxeda only states that their ECX-1000 supports 10GbE on the fabric switch; doesn't that mean the system still needs an interface? That also means there is some kind of interface and connectors on the big box. So why didn't they provide some numbers on how well their system performs on 10GbE alongside the internal GbE support?