Can anyone tell me whether network I/O, in a densely packed form, can outperform or be on par with a proper storage controller card that lives within the same system?
The reason I ask is that it has historically been almost a given that if a process needs a lot of I/O, you use the fastest drives available and spread the data evenly across multiple drives for the best possible performance, and only scale horizontally once you've exhausted a resource on a single machine. But this article mentions "As El Reg has explained... in 2009, not all workloads need lots of compute or memory; some need far more I/O." as if to say that spreading horizontally across multiple blades through network I/O, which incurs the normal on-board latency penalty plus a transport latency penalty, can actually be better for I/O, or for performance in general? My rough back-of-envelope reasoning is sketched below, happy to be corrected.
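To make the comparison concrete, here is a tiny sketch of the latency arithmetic as I understand it. All the microsecond figures are my own ballpark assumptions, not measurements or anything from the article; the point is only that per-request latency over the network is strictly worse than the local path, so any win for the scale-out approach would have to come from aggregate parallelism across blades.

```python
# Back-of-envelope latency budget, purely illustrative.
# Every figure below is an assumption for the sake of argument, not a benchmark.

LOCAL_CONTROLLER_US = 50   # assumed: drive + controller card + driver stack, microseconds
NETWORK_HOP_US = 30        # assumed: one switch hop + NIC/stack overhead each way
REMOTE_STORAGE_US = 50     # assumed: the same storage path on the remote blade

local_latency = LOCAL_CONTROLLER_US
remote_latency = NETWORK_HOP_US + REMOTE_STORAGE_US

print(f"local access : ~{local_latency} us per request")
print(f"remote access: ~{remote_latency} us per request")

# Per request, the network path always loses. The scale-out argument only
# works if many blades issue requests in parallel, so aggregate throughput
# grows with blade count even though each individual request is slower.
BLADES = 8                                            # assumed blade count
reqs_per_sec_per_blade = 1_000_000 / remote_latency   # naive serial requests/s per blade
print(f"aggregate across {BLADES} blades: ~{BLADES * reqs_per_sec_per_blade:,.0f} requests/s")
```

If that framing is right, the microserver pitch is about throughput per rack (or per watt), not about beating a local storage controller on latency, which is exactly the distinction I'd like to see real numbers for.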
If not, then why on earth do endless ARM / microserver articles keep cropping up every month, like subliminal messages trying to brainwash you into thinking they will put Intel and its big iron servers to death? This is getting old, and annoying without real numbers.
Could El Reg do a special instead and review the market data released by Gartner and IHS iSupply every year? I'm more interested in knowing how badly, and how often, market research firms get it wrong.