Re: block sizes matter
Hi all, Dimitris from Nimble, an HPE company.
@Vaughn - hey bud, transactional applications do transactional I/O in the 4/8K range, not 32K. http://bit.ly/2oUDNFI
Are you saying Pure arrays slow down dramatically when doing transactional 4/8K I/O, and are only fast at 32K? And what happens at 256K or above, a typical I/O size for non-transactional workloads?
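To see why I/O size matters so much when comparing numbers, remember that throughput is just IOPS times block size. A quick sketch (the IOPS figure below is made up purely for illustration, not a claim about any array):

```python
# Hypothetical illustration: the same IOPS rate yields wildly different
# MB/s figures depending on I/O size. The 100,000 IOPS number is invented.
def throughput_mbps(iops, block_size_kb):
    """MB/s delivered at a given IOPS rate and block size (1 MB = 1024 KB)."""
    return iops * block_size_kb / 1024

for bs in (4, 8, 32, 256):
    print(f"{bs:>3}K blocks @ 100,000 IOPS -> {throughput_mbps(100_000, bs):,.1f} MB/s")
```

So a vendor quoting big MB/s numbers at 256K and a vendor quoting IOPS at 4/8K are describing very different workloads, which is exactly why the test parameters matter.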
I can't comment on the validity of these specific results, but I'll say one thing: unless Infinidat supplies the exact testing parameters, it's hard to be sure the testing was performed correctly.
In general, when evaluating any benchmark, you need to know things like:
- number of servers used
- number of LUNs used
- number of fabric ports (and their speed)
- server models & exact specs, switch models, HBA models, host OS version, all host tunings that differ from the defaults
- benchmarking app used
- detailed I/O blend
- amount of "hot" data - even with a 200TB data set, if only 1TB of it is hot and that 1TB fits in RAM cache, the array will likely look fast. If you want to stress-test caching systems, my rule of thumb is: the amount of hot data should exceed the largest RAM cache by at least 5x.
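That last rule of thumb is easy to sanity-check for any published test. A minimal sketch (function name and the example figures are mine, not from any vendor's spec):

```python
def stresses_cache(hot_data_tb, ram_cache_tb, factor=5):
    """Rule of thumb from the post: a benchmark only stresses the caching
    system if the hot working set is at least `factor` times the largest
    RAM cache among the arrays under test."""
    return hot_data_tb >= factor * ram_cache_tb

# 200TB data set, but only 1TB of it is hot, against a 1TB RAM cache:
print(stresses_cache(hot_data_tb=1, ram_cache_tb=1))   # False - cache-friendly test
# Same cache, 5TB hot working set:
print(stresses_cache(hot_data_tb=5, ram_cache_tb=1))   # True - cache actually stressed
```

The point being: total data set size alone tells you nothing; it's the hot working set relative to cache that determines whether you're benchmarking the media or the RAM.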
A couple of vendor-agnostic articles that may help: