Intel can't hold a press conference these days without being harangued about ARM-based servers and the potential for microservers based on low-powered processors to bite into its Xeon server-chip biz. And for good reason: there is a growing consensus that these baby servers are going to catch on because of the inherently …
I don't know why people don't design their own supers.
Design a small circuit board: it could be 6 inches square and still have a dozen SoCs connected together on it with some sort of PCB-based Ethernet. Then put a power connector and an Ethernet connector on the board itself to connect boards together. I would try to standardise the voltages needed on the board, i.e. only 3V components, or only 5V components, or whatever.
It gets me thinking. Why would I pay over the odds for a Raspberry Pi that won't ever actually get here, when I could design the same chips onto a board minus the crap I don't need, add in the clustering features I do need, and have the boards manufactured in China and sent over in no time at all? OK, it's going to cost a shitload and I don't actually have the skills to do that. But it's an appealing prospect. Sixteen low-power CPUs on one board is going to end up cheaper to manufacture than millions of tiny boards. And I want an Ethernet switch on every single board, built in.
Can anyone tell me whether network I/O, in a densely packed form factor, can outperform or be on par with a proper storage controller card that lives within the same system?
The reason I ask is that it has historically been almost a given that if a process needs a lot of I/O, you use the fastest drives available and spread the data evenly across multiple drives for the best possible performance, and only scale horizontally once you have exhausted a resource on a single machine. But this article says "As El Reg has explained... in 2009, not all workloads need lots of compute or memory; some need far more I/O" — as if to say that spreading horizontally over network I/O across multiple blades, which incurs the normal on-board latency penalty plus transport latency, can actually be better for I/O, or for performance in general?
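To put rough numbers on that question: here is a back-of-envelope sketch of where the latency actually goes. All the figures below are illustrative assumptions (typical orders of magnitude for spinning disks, HBAs, and a 10GbE hop), not measurements from any real system. The point it illustrates is that when a slow medium like a disk seek dominates, the extra network hop adds only a few per cent.

```python
# Back-of-envelope latency comparison: local storage controller vs.
# reaching the same kind of disk over one network hop.
# Every figure below is an assumed, illustrative value in microseconds.

local_controller_us = 50      # assumption: request through a local HBA/RAID card
disk_seek_us = 5000           # assumption: one seek on a 15k rpm spindle
net_hop_us = 30               # assumption: one hop through a 10GbE switch
remote_stack_us = 100         # assumption: remote node's NIC + block stack

# Local read: controller + seek.
local_total = local_controller_us + disk_seek_us

# Remote read: network hop + remote software stack, then the same
# controller + seek on the far node.
remote_total = net_hop_us + remote_stack_us + local_controller_us + disk_seek_us

overhead_pct = 100 * (remote_total - local_total) / local_total

print(f"local read : {local_total} us")
print(f"remote read: {remote_total} us")
print(f"network overhead: {overhead_pct:.1f}%")
```

Under these assumed numbers the seek swamps everything else, which is one way to read the article's claim: for seek-bound workloads, fanning out over many cheap networked nodes costs little in latency and buys you many more spindles in flight.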
If not, then why on earth do endless ARM/microserver articles keep cropping up every month, like subliminal messages trying to brainwash you into thinking they will put Intel and its big iron servers to death? This is getting old and annoying without real numbers.
Can El Reg do a special every year reviewing the market data released by Gartner and IHS iSuppli instead? I'm more interested in knowing how badly, and how often, market research firms get it wrong.
"What is clear is that [everybody except Intel] are looking to benefit from the microserver craze as Intel tries to hold the line with Atom and low-powered Xeon chips."
What line does Intel have to hold today, or for that matter in 2013, when it has just effectively annihilated AMD in both the desktop and server markets and ARM's offering isn't due until 2014?
"Data centers to go bonkers over microservers" ? I was expecting you to tell me it will at the very least have a 20% market share in the overall server market in DCs in the coming 3 years.
The whole microserver / ARM thing reminds me of loud noisy chihuahuas.
A market research analyst "reckons" what sales will be two years out; this gets inflated into a full article of ethereal statements, guesses, and... ahem... insight.
Fluff the popcorn some more to show how full the bin could be.