Server flash cache deathmatch: Bring it

Servers currently get flash caches either as a branded supplier's retrofit or through special deals between a server supplier and an OEM source, like Fusion-io. This can't go on: server flash cache is going to become a standard-fit item. Servers suffering disk storage I/O bottlenecking can now use PCIe-connected NAND caches as a …

COMMENTS

This topic is closed for new posts.
Devil

You are missing the point

At this point, NAND cache integration and two-tier storage integration into a plain OS server requires more than drivers. You need specialized software to make really good use of it.

This is a 3-player game, not a 2-player game. In fact, it is probably more important which flash supplier gets promptly into bed with CA, BMC and their like, not which server vendor. The only server vendors capable of playing both sides here are HP and SnOracle.

This is likely to be the case for at least 2-3 more years, until MSFT and Linux pick up the slack and make this a properly working standard feature in their base OSes (by the way - MSFT's current implementation is anything but that). Only then will it become an OEM/server-supplier game, and even then there may be some space for third parties. Until then Dell will grind its teeth and will most likely still ship PCIe flash as "Fusion-io, Storage Vendor X certified".

Holmes

Sun (Oracle) have been doing this for nearly 2 years already

as title.

you can have SODIMM modules or a dedicated PCIe card

oh, and Solaris is the 'specialized software' that you need (i.e. ZFS), which has had a level 2 disk cache for, oh, about 3 years now... I remember reading that there was code in the works to make it persistent across reboots as well.

But I suppose now that you have the Oracle-Tax for Solaris it would probably count as specialized software...

Shows really how far ahead of the game Sun were... Shame they had to get eaten...


correction

sorry - you *could* have SODIMM modules - they were called FMODs, but they've now been withdrawn in favour of PCIe cards

Anonymous Coward

Further correction

The special module or PCIe card requirement is a fake one, added by Oracle after the buyout. Of course, only Oracle-provided modules are 'supported'.

ZFS could actually use any block device on the system as an L2 cache or ZIL device, allowing anyone to add in a regular SSD and set up their own cache. Not anymore, unless you use OpenIndiana or migrate to BSD or something (good luck with getting that past the boss..).
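For anyone who hasn't seen it, this is what that looks like in practice. A sketch, assuming an OpenIndiana or FreeBSD box with a pool called "tank"; the device paths are placeholders for whatever SSDs you actually have:

```shell
# Add a plain SSD to pool "tank" as an L2ARC read cache
zpool add tank cache /dev/da1

# Add a separate log device (ZIL) to absorb synchronous writes
zpool add tank log /dev/da2

# Verify that the cache and log vdevs show up in the pool layout
zpool status tank
```

No special modules required - any block device ZFS can see will do.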

Boffin

EMC & Flash on the host

I think it's much simpler than people have speculated so far. I think it will be as simple as enabling the functionality via a key in their PowerPath product. PowerPath can already be used to move data from LUN to LUN on a host; it seems like it would be extremely easy to extend that to local SSD.

If they use that methodology, I'd guess that the write to SSD & array is very low risk, as it's more of the same: a write comes in, gets intercepted by PowerPath, and is written to both the array and the local SSD (just like doing a PowerPath LUN migration, so nothing really new here). No real risk of data loss, as they've been doing split writes via PowerPath for a while. There would have to be some new bitmap-matching intelligence, but I can't see it being that difficult: throw out the oldest-accessed block on the SSD, write the new data there, and update the bitmap. A read would simply check the bitmap: is the block in the map? If not, request the data from the array.

Doesn't seem like it'd take much time to get something like that going. Of course, that's my speculation, and EMC might be doing something completely different; but it sure feels like that would be a very simple, low-risk option that could be in customers' hands without years of work. It'd be a very easy sell as well: you've already got PowerPath installed on your system; all you need is a license key and you can be offloading array reads to local drives. No reboots, no recertification, no drivers - it's already there.
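To show how little logic that scheme actually needs, here's a toy model of it in Python. This is purely my own sketch of the split-write/bitmap idea described above, not anything EMC actually ships; the class and its names are made up for illustration:

```python
from collections import OrderedDict

class HostFlashCache:
    """Toy model of a host-side write-through flash cache: every write
    goes to the array AND the local SSD; reads check the SSD first."""

    def __init__(self, array, capacity):
        self.array = array          # backing store: dict of block -> data
        self.capacity = capacity    # how many blocks the "SSD" can hold
        self.ssd = OrderedDict()    # block -> data, ordered by last access;
                                    # its key set plays the role of the bitmap

    def write(self, block, data):
        # Split write: intercept and send to both the array and the SSD.
        self.array[block] = data
        if block in self.ssd:
            self.ssd.move_to_end(block)        # refresh access order
        elif len(self.ssd) >= self.capacity:
            self.ssd.popitem(last=False)       # evict oldest-accessed block
        self.ssd[block] = data                 # "update bitmap"

    def read(self, block):
        # Bitmap check: is the block in the map? If not, go to the array.
        if block in self.ssd:
            self.ssd.move_to_end(block)        # refresh access order
            return self.ssd[block]
        return self.array[block]
```

A write always lands on the array, so losing the SSD loses nothing - which is why the split-write approach is low risk.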

Anonymous Coward

Old news for NetApp admins

NetApp pioneered this in their arrays about 3 years ago, first as PAM and now as Flash Cache. Because they do it in the array, no host support is required, though they could just as easily support host-based flash as well.

