Fusion-io has crammed eight ioDrive flash modules onto one PCIe card to give servers 10TB of app-accelerating flash. This follows on from its second-generation ioDrives: PCIe-connected flash cards using single-level cell and multi-level cell flash to provide from 400GB to 2.4TB of flash memory, which can be used by applications to …
Very interested to check this out. Looks like better technology than expensive VMAXs.
Partnership with APC?
Let's see you try to park a car in a modern data center. Does it ship with an APC power generator?
I wouldn't quite go that far.
@AC: You're right that this is very interesting technology: fast, seemingly capacious and a damn sight cheaper. However, that's not the reason businesses choose Tier 1 arrays such as HDS USP/VSP and EMC DMX/VMAX, or Tier 2 arrays like AMS, VNX, FAS, EVA etc.
The Fusion-io Octal card as it stands is a FAST and solid card, but it depends on other means to achieve redundancy: on the card itself there is no replication, multi-pathing, QoS, failover or reporting, and only limited redundancy.
Additionally, PCIe cards, like individual flash drives, tend to be one-trick ponies: they can't easily share their high-speed goodness with other devices and still achieve the same performance.
These cards, and others like them, are very fast and simple to implement, considerably more so than a SAN; however, there's a place and a situation for both.
You could of course "roll yer own" solution to achieve similar results, but remember, companies like EMC, NetApp and HDS spend millions if not billions on R&D and support to achieve the levels of protection that you'd not get with a "roll yer own" solution.
Plus, when you look further into it, as a card it's limited in capacity (I know it's 10TB, and you could achieve 20TB in 2U and 40TB in 4U or more as per their release), but beyond that, if you need 100TB+ of the stuff, you're going to run into problems. (Yup, there are businesses and groups out there that need that much and more.) That's where big SANs come in.
Cheaper than VMAX (or any decent storage array for that matter)? Definitely
Faster? Quite possibly.
Better technology? Not by a long shot.
I was at SC11 this week, and they were showing a cluster of servers booting from one of these cards, all accessing it as a shared storage target. When I asked about capacity, they said disks are great for archiving data: their position is to keep your hot data in their flash memory as a hot cache, then use their caching and network software to migrate it to back-end storage for archiving. Sounds a lot like Project Lightning, so I assume EMC agrees with this strategy. I agree too. It's not a bad way to go, but I'd want to play with one before giving any real opinion. If it does work as they say, it's interesting, and definitely better than the closed-system approach being pushed by the crowd of flash appliance vendors popping up these days. The last thing any of us need is yet another box to manage.
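The hot-cache-plus-archive idea described above can be sketched as a simple two-tier store: recently touched blocks live in a small, fast tier, and blocks aged out of it are migrated to a larger, slower tier. This is a minimal illustrative Python sketch of the general technique only; the class and method names are invented here and have nothing to do with Fusion-io's actual software.

```python
from collections import OrderedDict

class TieredStore:
    """Toy hot-tier/archive model: an LRU 'flash' tier in front of
    a larger 'archive' tier. Names and structure are hypothetical."""

    def __init__(self, flash_capacity):
        self.flash = OrderedDict()   # hot tier, kept in LRU order
        self.archive = {}            # cold tier (e.g. disk array)
        self.capacity = flash_capacity

    def write(self, block_id, data):
        self.flash[block_id] = data
        self.flash.move_to_end(block_id)   # mark as most recently used
        self._evict_cold()

    def read(self, block_id):
        if block_id in self.flash:         # hot hit
            self.flash.move_to_end(block_id)
            return self.flash[block_id]
        data = self.archive[block_id]      # cold miss: promote to flash
        self.write(block_id, data)
        return data

    def _evict_cold(self):
        # Migrate least-recently-used blocks down to the archive tier.
        while len(self.flash) > self.capacity:
            cold_id, cold_data = self.flash.popitem(last=False)
            self.archive[cold_id] = cold_data

store = TieredStore(flash_capacity=2)
store.write("a", b"hot")
store.write("b", b"hot")
store.write("c", b"hot")     # "a" ages out and migrates to the archive
assert "a" in store.archive and "a" not in store.flash
store.read("a")              # cold read promotes "a" back to the hot tier
assert "a" in store.flash
```

The real value in the vendor approach is that this tiering is transparent to the application; the sketch just makes the hot/cold data movement concrete.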