4 posts • joined 23 Jul 2010
What about memory bandwidth?
Most real-world HPC workloads require significant memory bandwidth to keep the cores busy.
Even more so for Hadoop-style parallel processing.
How much memory bandwidth does the 64-core part have?
15K growing, or 2.5" growing?
Could it be that the 2.5" 15K drive segment is growing at the expense of 3.5" 15K drives, and that the 15K HDD market as a whole is in fact stagnating?
Where's El Reg's critical mind?
Cache needs to be closer to the server to be efficient
If your disk has 5 ms latency and the array/network adds 1 ms, you see 6 ms of latency from the server. Now suppose some amount of flash gives you an 80% hit rate at 10 microseconds latency.
If you place the cache near the disks, you get about 2 ms average latency as seen from the server.
If you place the cache on PCIe in the server, you see about 1.2 ms average latency, and you offload 80% of the read traffic from the array, so you also end up saving on array iron.
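The arithmetic above can be sketched as a toy expected-latency model (the numbers are the ones from the comment, not benchmark data):

```python
def avg_latency_ms(hit_rate, hit_ms, miss_ms):
    """Expected latency given a cache hit rate and per-path latencies."""
    return hit_rate * hit_ms + (1 - hit_rate) * miss_ms

DISK_MS = 5.0            # disk service time
NET_MS = 1.0             # array/network hop
FLASH_MS = 0.010         # 10 microseconds of flash latency
HIT = 0.80               # assumed flash hit rate
MISS_MS = DISK_MS + NET_MS   # a miss still pays the full 6 ms path

# Cache beside the disks: a hit still crosses the network.
near_disk = avg_latency_ms(HIT, NET_MS + FLASH_MS, MISS_MS)

# Cache on PCIe inside the server: a hit is purely local flash.
in_server = avg_latency_ms(HIT, FLASH_MS, MISS_MS)

print(round(near_disk, 3), round(in_server, 3))  # ~2 ms vs ~1.2 ms
```

Either way the miss path dominates, which is why the hit rate matters more than the flash latency itself.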
Fusion-io cards as an NFS I/O cache?
While tackling OS integration, what would really be welcome is CacheFS integration, so that NFS mounts on Linux or VMware hosts could hide most of the network latency behind a 620 GB cache.
This would be a huge win for application performance and network traffic, while being totally transparent to applications.
(Right now our applications cache hot data to the Fusion-io cards manually. Not transparent!)
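For what it's worth, Linux's FS-Cache already gives transparent file-level caching for NFS via the cachefilesd daemon; a minimal sketch of pointing it at a flash card, assuming the card is formatted and mounted at the hypothetical path /mnt/iodrive and the export filer:/export is made up for illustration:

```shell
# Point the FS-Cache backing store at the flash card's filesystem
# (cachefilesd.conf's 'dir' line defaults to /var/cache/fscache).
sudo mkdir -p /mnt/iodrive/fscache
sudo sed -i 's|^dir .*|dir /mnt/iodrive/fscache|' /etc/cachefilesd.conf
sudo systemctl enable --now cachefilesd

# Mount NFS with the 'fsc' option so reads are cached on the card.
sudo mount -t nfs -o fsc filer:/export /mnt/data
```

Applications then just read /mnt/data as usual; hot blocks are served from local flash with no application changes.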