Pesky acceleration appliances are proliferating and claiming they can get better I/O out of storage arrays than the arrays themselves. We can imagine the array vendors sighing and saying: "Here we go again. When will these cachers ever learn?" The pitch is that storage arrays are being overwhelmed by virtualised servers and there is …
Why just reads?
It befuddles me that people really think read caching is the most important solution. What about writes? Read caching depends entirely on the application, and there's an element of luck: if the data is in the cache when the app needs it, great. If it isn't, either because it was never cached or because it was evicted early, you've just wasted an I/O.
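That hit-or-miss economics is easy to see in a toy model. The sketch below is a minimal LRU read cache (my own illustration, not any vendor's design) where a miss costs a backend array I/O and a hit costs nothing; with a working set even slightly bigger than the cache, every read can end up missing:

```python
from collections import OrderedDict

class LRUReadCache:
    """Toy LRU read cache: a hit is free, a miss costs a backend array I/O.
    Illustrative sketch only; names and sizes are made up."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # block id -> cached contents
        self.hits = 0
        self.backend_ios = 0

    def read(self, block):
        if block in self.data:
            self.data.move_to_end(block)   # refresh recency on a hit
            self.hits += 1
            return self.data[block]
        self.backend_ios += 1              # miss: pay for an array I/O
        value = f"data-{block}"            # stand-in for the real read
        self.data[block] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least-recently-used block
        return value

cache = LRUReadCache(capacity=2)
for block in [1, 2, 3, 1]:  # block 1 is evicted before it is re-read
    cache.read(block)
print(cache.hits, cache.backend_ios)  # -> 0 4: every read missed
```

A working set of three blocks against a two-block cache means the re-read of block 1 misses too, so the cache bought you nothing on this access pattern.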
Writes are the big problem. The appliance vendors such as Avere, Alacritech and Kaminario try to address both the read and the write problem. Obviously they're not cheap, but they can be the right solution (for example, if you're a large NFS shop with an HPC farm, you might well evaluate Avere).
Also, why is Project Lightning such big news? Marvell introduced its DragonFly NVRAM/SSD card for server-side caching at SNW. It write-caches the random writes on the application/database servers, which is the real problem. And it's a PCIe card for existing servers; I'm looking forward to trying it out as a general-purpose option (not locked into EMC software, yuck).
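The appeal of write caching is that the fast local media can acknowledge random writes immediately, coalesce rewrites of the same block, and destage them to the array in order. A minimal sketch of that idea (my own hypothetical model, not how DragonFly or any particular card actually works):

```python
class WriteBackCache:
    """Toy write-back cache: acknowledge random writes from fast local
    media, coalesce rewrites, then destage to the array in block order.
    Hypothetical sketch, not any vendor's implementation."""

    def __init__(self, flush_threshold=4):
        self.dirty = {}  # block id -> latest data; rewrites coalesce here
        self.flush_threshold = flush_threshold
        self.destaged_batches = []  # each batch = one ordered array flush

    def write(self, block, data):
        self.dirty[block] = data  # ack immediately; no array I/O yet
        if len(self.dirty) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # Destage in sorted block order so the array sees an orderly
        # stream instead of scattered random writes.
        self.destaged_batches.append(sorted(self.dirty.items()))
        self.dirty.clear()

wb = WriteBackCache(flush_threshold=4)
for block in [9, 2, 9, 7, 5]:  # random writes; block 9 written twice
    wb.write(block, f"v{block}")
# Five random writes became one ordered destage of four blocks.
```

The rewrite of block 9 never hits the array at all, which is exactly the kind of saving a server-side write cache is selling.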