IMHO ... this will look neater
In my opinion, this solution looks neater, provided they execute correctly with the right software stack.
QLogic is adding flash storage to its server adapter cards so they become PCIe-connected flash caches, speeding up SAN I/O-bound applications in the servers with read I/O acceleration. The company makes a line of Fibre Channel Host Bus Adapters (HBAs) and Ethernet-based Converged Network Adapters (CNAs) that can run the FCoE …
Have you ever used any of QLogic's software tools? What makes anyone think they are capable of building a software stack to support this new feature set? Fusion-io isn't a hardware company; it's the software that matters, and frankly I don't think QLogic has the chops to execute. Also, going up against EMC isn't a smart move unless you are priced significantly lower. And damn, those cards are going to run hot and eat a lot of power.
How the hell would this work in a shared-storage cluster environment? I can't imagine it would be remotely possible.
I can see the fail now:
"We took a RAC cluster and unbeknownst to the DBAs, the server guys replaced the HBA with Folgers Crystals (errr QLogic Flash-Cache HBAs) and then suddenly massive data corruption ensued."
Obviously, using these over a shared file system would be stupid. There is no way to cache a shared file system anywhere except in the array itself. But in the typical virtual server environment, one server to one file system, it should work well. And I don't know why it would need software: caching has been available on RAID cards for eons, and has always been done right in the hardware. This is the same thing.
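The shared-storage objection above can be sketched in a few lines. This is a toy model, not QLogic's actual design; the class and names are hypothetical. It shows why a per-server read cache in front of a shared LUN serves stale data the moment another host writes to the same blocks, which is exactly the RAC corruption scenario, and why the one-server-to-one-file-system case avoids the problem:

```python
class ReadCacheHBA:
    """Toy model (hypothetical) of an HBA-level read cache
    sitting in front of a shared LUN."""

    def __init__(self, lun):
        self.lun = lun    # shared dict standing in for the array: block -> data
        self.cache = {}   # this server's local flash cache

    def read(self, block):
        if block not in self.cache:
            # miss: fetch from the array and keep a local copy
            self.cache[block] = self.lun[block]
        # hit: served from local flash, never re-checked against the array
        return self.cache[block]

# Two hosts sharing one LUN, as in a RAC cluster
lun = {7: "v1"}
host_a = ReadCacheHBA(lun)
host_b = ReadCacheHBA(lun)

host_a.read(7)          # host A caches block 7 as "v1"
lun[7] = "v2"           # host B updates block 7 on the array
print(host_a.read(7))   # host A still returns "v1": stale data
```

With a single host owning the file system, no other writer exists, so the cache can never diverge from the array; the coherency problem only appears once a second host can write behind the cache's back.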
As far as the QLogic software goes, why do you need to use it? Other than setting the WWN for boot-from-SAN configurations, which I could also set in the BIOS, I never needed it.