The final HP Regcast of our short series on how to build a private cloud covered the usual afterthought of any IT project: storage migration. Much of the conversation dealt with the problems of translating good intentions into action. The research conducted by Freeform Dynamics at the end of 2012 showed that those IT departments …
The problem . . .
. . . with sorting out one's storage is that the market is moving very quickly right now, making it difficult to choose a best-in-class solution. Storage is also stupidly expensive, and storage decisions are difficult and time-consuming to unwind, leading to a certain conservatism when it comes to deployment: if you make the wrong choice, you'll be stuck with it for years. Also, the tendency in the storage arena is to over-promise and under-deliver, so one must take with a grain of salt any vendor statistics, performance metrics, etc., and be sure to read the fine print.
The reason for all this is simple: storage is about the software, not the hardware. Any numpty, even Eadon, can throw a bunch of hard drives in a box. Designing and implementing a sufficiently robust architecture is more challenging, but essentially a solved problem. Writing software that can efficiently and effectively use that hardware is much harder. Unfortunately, the hardware and software are usually packaged together, and buying the software standalone is expensive enough that changing platforms is undertaken only with caution. On the other hand, most incumbent storage vendors charge so much for maintenance that, past a certain point, it's no more expensive to switch than it is to stay with one's current provider, hence the proliferation of storage start-ups.
"Many traditional disk-based architectures are unsuitable for cloud deployment because the I/O performance is constrained by the storage performance."
Why? Application usage drives storage, CPU, and network utilisation, not the technology that underpins them.
Admittedly there is more chance of a changing I/O mix in a cloud environment, but how different is that from a sudden (and possibly sustained) peak generated by a marketing campaign the marketing team didn't mention, hitting applications hosted outside the cloud? Or the application release that generates twice the reads and writes?
It's all about (un)predictability, and a storage infrastructure that deals with that is a good thing, cloud or otherwise.
Check out OrangeFS. They will have tiered storage soon, which will let you add an SSD tier. Right now it already lets you quickly and easily expand your storage tier by powering on a new server and copying over the config. Adding new storage capacity and performance under a global namespace takes minutes.
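The expansion workflow the comment describes might look roughly like the sketch below. The hostnames, file paths, and exact sequence are illustrative assumptions rather than anything from the comment, and details vary by OrangeFS version; `pvfs2-genconfig` and `pvfs2-server` are the standard OrangeFS/PVFS2 administration tools.

```shell
# Sketch only -- hostnames and paths are assumptions, not OrangeFS defaults.
# Run on the newly powered-on storage server.

# 1. Copy the cluster's shared config file over from an existing server.
#    (If the config does not yet list this server, regenerate it with
#    pvfs2-genconfig so the new server has an alias in the file.)
scp admin@fs-head:/etc/orangefs/orangefs.conf /etc/orangefs/orangefs.conf

# 2. Initialise this server's local storage space from the config.
pvfs2-server /etc/orangefs/orangefs.conf -f

# 3. Start the server daemon; its capacity joins the global namespace.
pvfs2-server /etc/orangefs/orangefs.conf
```

This is a provisioning fragment, so it assumes a working OrangeFS installation and network reachability between servers; the point is simply that growing the storage tier is a copy-config-and-start operation rather than a data migration.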