Pesky acceleration appliances are proliferating, claiming they can get better I/O out of storage arrays than the arrays themselves can. We can imagine the array vendors sighing and saying: "Here we go again. When will these cachers ever learn?" The pitch is that storage arrays are being overwhelmed by virtualised servers and there is …
Why just reads?
It befuddles me that people really think read caching is the most important solution. What about writes? Read caching's benefit depends entirely on the application and a bit of luck: if the data is there when the app needs it, great. If it was never cached, or was ejected early from the cache, you've just wasted an I/O.
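To make that concrete, here's a minimal sketch (not any vendor's implementation) of a read-through LRU cache: a hit avoids a backend I/O, while a miss means the cache lookup was wasted work and the array gets read anyway. With a working set bigger than the cache, every read misses.

```python
# Hypothetical read-through LRU cache: hits avoid array reads,
# misses pay the lookup cost AND still go to the array.
from collections import OrderedDict

class ReadCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()   # LRU order: oldest entry first
        self.hits = 0
        self.backend_reads = 0

    def read(self, block):
        if block in self.data:
            self.data.move_to_end(block)   # refresh LRU position
            self.hits += 1
            return self.data[block]
        # Miss: the cache lookup was wasted; go to the array anyway.
        self.backend_reads += 1
        value = f"data-{block}"            # stand-in for the array read
        self.data[block] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used
        return value

cache = ReadCache(capacity=2)
for block in [1, 2, 3, 1]:   # block 1 is evicted before it's re-read
    cache.read(block)
print(cache.hits, cache.backend_reads)  # → 0 4
```

Four reads, zero hits, four trips to the array: exactly the "ejected early" case where the cache bought you nothing.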
Writes are the big problem. The appliance guys like Avere, Alacritech and Kaminario try to address both the read and the write problem. Obviously they're not cheap, but they can be the right solution (for example, if you're a large NFS shop with an HPC farm, you might evaluate Avere).
Also, why is this Project Lightning big news? Marvell introduced its DragonFly NVRAM/SSD card for server-side caching at SNW. It write-caches the random writes on the application/DB servers, which is the real problem. And it's a PCIe card for existing servers - I'm looking forward to trying this out as a general-purpose device (not locked into EMC software, yuck).
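The reason server-side write caching pays off can be sketched in a few lines (my own toy model, not DragonFly's actual logic): random writes absorbed in local NVRAM can be coalesced, so repeated writes to the same block hit the array once, and the flush goes out in block order rather than randomly.

```python
# Hypothetical write-coalescing model: a log of random writes is
# absorbed locally, deduplicated per block, and flushed in sorted
# (more nearly sequential) order to the backend array.
def flush(write_log):
    """Coalesce a random write log into one ordered batch."""
    coalesced = {}
    for block, data in write_log:
        coalesced[block] = data          # later writes win
    return sorted(coalesced.items())     # array sees fewer, ordered writes

log = [(9, "a"), (2, "b"), (9, "c"), (5, "d"), (2, "e")]
print(flush(log))  # → [(2, 'e'), (5, 'd'), (9, 'c')]
```

Five random writes become three ordered ones; scale that up across a busy DB server and the array's random-write load drops sharply.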