4 posts • joined Friday 23rd July 2010 10:33 GMT
What about memory bandwidth?
Most real-world HPC workloads require significant memory bandwidth to keep the cores busy.
Even more so for Hadoop-style parallel processing.
How much memory bandwidth does the 64-core part have?
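To make the question concrete, here is a back-of-the-envelope sketch of why the answer matters. All figures below are illustrative assumptions, not published specs for the 64-core part:

```python
# Illustrative arithmetic only: per-core throughput and the workload's
# arithmetic intensity are assumed numbers, not vendor figures.
cores = 64
flops_per_core_gflops = 8.0    # assumed sustained GFLOP/s per core
bytes_per_flop = 0.5           # assumed bytes of memory traffic per FLOP

# Aggregate memory bandwidth needed to keep every core busy:
required_bw_gbs = cores * flops_per_core_gflops * bytes_per_flop
print(required_bw_gbs)  # 256.0 GB/s under these assumptions
```

If the part's actual memory subsystem delivers much less than that, the cores stall on memory and the core count stops mattering.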
15K growing or 2.5" growing?
Could it be that the 2.5" 15K drive segment is growing at the expense of 3.5" 15K drives, and that the whole 15K HDD market is in fact stagnating?
Where's El Reg's critical mind?
Cache needs to be closer to the server to be efficient
If your disk has 5 ms latency and the array/network adds 1 ms, the server sees 6 ms of latency. Let's say some amount of flash gives you an 80% hit rate, with 10-microsecond latency.
If you place the cache near the disks, the server sees about 2 ms average latency.
If you place the cache on PCIe in the server, you see about 1.2 ms latency, and on a read workload you offload 80% of the traffic from the array, so you also end up saving on array iron.
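The arithmetic above can be checked directly (all figures are from the comment: 5 ms disk, 1 ms array/network, 10 µs flash, 80% hit rate):

```python
# Latency figures from the comment, in milliseconds.
DISK_MS = 5.0
NETWORK_MS = 1.0
FLASH_MS = 0.010   # 10 microseconds
HIT_RATE = 0.8

# No cache: every read pays disk plus network.
no_cache = DISK_MS + NETWORK_MS                                   # 6 ms

# Cache beside the disks: hits pay flash, misses pay disk,
# and every read still crosses the network.
cache_at_array = NETWORK_MS + HIT_RATE * FLASH_MS \
               + (1 - HIT_RATE) * DISK_MS                         # ~2 ms

# Cache on PCIe in the server: hits pay flash only,
# misses pay the full disk + network round trip.
cache_in_server = HIT_RATE * FLASH_MS \
                + (1 - HIT_RATE) * (DISK_MS + NETWORK_MS)         # ~1.2 ms

print(no_cache, round(cache_at_array, 2), round(cache_in_server, 2))
```

The server-side cache wins because misses are the only reads that touch the network at all, and they are only 20% of the workload.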
Fusion-io cards as an NFS I/O cache?
While tackling OS integration, what would really be welcome is CacheFS integration, so that NFS mounts on Linux or VMware hosts could hide most of the network latency behind a 620 Gb cache.
This would be a huge win for application performance and network traffic, while being totally transparent to applications.
(Right now our applications cache hot data to the Fusion-io cards manually; not transparent!)
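On Linux, something close to this already exists via FS-Cache and `cachefilesd`. A hypothetical config sketch, assuming the Fusion-io card is formatted and mounted at `/mnt/iodrive` (device names, paths, and the export name are illustrative, not from the comment):

```shell
# Back the kernel FS-Cache with a directory on the flash card.
# In /etc/cachefilesd.conf:
#   dir /mnt/iodrive/fscache
service cachefilesd start

# Mount the NFS export with the 'fsc' option so reads are cached locally:
mount -t nfs -o fsc filer:/export /mnt/data
```

This keeps the caching in the kernel, so applications see an ordinary NFS mount and no longer need to stage hot data by hand.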