Violin Memory might well become the next platform storage company. But to do that it needs three product technologies, and it only has one at the moment: its shared all-flash Memory array. The context is that primary data storage for latency-sensitive and IOPS-sensitive applications is moving away from spinning disk to …
NetApp seem poised to get their lunch eaten. They have not figured out a compelling flash implementation (a shelf of NetApp flash will buy you an entire Nimbus storage array), and they are rapidly falling behind in terms of product scalability, support, reliability, and speed. I would not be surprised to see them up for acquisition by another major player in the next few years.
There certainly is a reason to put in-memory storage on a network link
In-memory data, even over a network link, is still much faster (and probably "fast enough" for the vast majority of workloads out there; the same goes for SSD over a network link). The main drawback for SSD in traditional enterprise arrays is that the cost is really high and the controllers aren't prepared to scale that well with them. Look at some of the biggest, baddest enterprise arrays: they can squeeze out 200-500k IOPS with tons of CPUs, cache, etc. Compared with what SSDs can do on their own, that is a large mismatch of scale.
Not only that, but take 3PAR, since I know that technology very well. The P10000 specs (rated for 450k SPC-1 IOPS) say a max of 8 SSDs per disk enclosure. So it's not as if you can take a disk enclosure and slam it full of 40 SSDs and off you go - if you want those 40 SSDs you're talking about 5 x 4U disk enclosures - 20U of space, mind you - half a rack. And, at least in the past, 3PAR has been anal about their power configuration; if they still are, in this scenario you're looking at 2 x 208V 30A circuits to drive those 40 SSDs. Of course they won't draw that - I'm quite certain, in fact, that the disk chassis themselves will draw significantly more power than the SSDs they house (especially given there are only 8 per chassis). But most customers pay for power by the circuit rather than by utilization. And guess what - you can "only" get 12 disk shelves on a pair of controllers on the P10000 (it is, after all, 24 FC links to connect 12 chassis). Of course you can put SATA or FC drives in those chassis alongside the SSDs; I'm just using this as an example of an SSD-only type of solution to see how crazy it can be.
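If you want to sanity-check that rack-space math, here's a back-of-envelope sketch. The figures come straight from the comment above (8 SSDs max per 4U enclosure, 12 enclosures per controller pair on a P10000); nothing here is official HP data, just the same arithmetic written out:

```python
import math

# Figures quoted in the comment above - illustrative, not vendor-verified.
SSDS_WANTED = 40
SSDS_PER_ENCLOSURE = 8    # stated max of 8 SSDs per disk enclosure
ENCLOSURE_HEIGHT_U = 4    # each enclosure is 4U
MAX_ENCLOSURES = 12       # stated max shelves per controller pair

enclosures = math.ceil(SSDS_WANTED / SSDS_PER_ENCLOSURE)  # 5 enclosures
rack_units = enclosures * ENCLOSURE_HEIGHT_U              # 20U - half a rack

print(f"{SSDS_WANTED} SSDs -> {enclosures} enclosures, {rack_units}U of rack")
assert enclosures <= MAX_ENCLOSURES  # still fits on one controller pair
```

Which is exactly the "half a rack for 40 SSDs" point: the floor space is dominated by the enclosures, not the drives.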
So it's pretty easy to see why large-scale SSD use in a typical enterprise storage array is not nearly as cost effective as something designed from the ground up for SSD, like, say, a Violin box. Though from what I have heard, Violin lacks a lot of software; it's mostly just a dumb, fast storage system, which I'm sure works well for certain use cases too.
But don't think it's a stupid idea to network things like in-memory data and SSDs just because of latency. Networks add very little latency - especially when you're comparing the IOPS of memory and SSD to traditional spinning rust (which is often travelling over a very similar network anyway).
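To put rough numbers on that "networks add very little latency" point - these are ballpark, order-of-magnitude figures I'm assuming for illustration, not measurements from any particular array:

```python
# Illustrative, order-of-magnitude latencies (all in microseconds).
DISK_SEEK_US = 5000.0    # ~5 ms random read on spinning rust
SSD_READ_US = 100.0      # ~100 us flash read
RAM_READ_US = 0.1        # ~100 ns memory read
NETWORK_HOP_US = 50.0    # ~50 us low-latency datacentre round trip

# Adding the network hop barely moves SSD and RAM access times...
networked_ssd = SSD_READ_US + NETWORK_HOP_US   # 150 us
networked_ram = RAM_READ_US + NETWORK_HOP_US   # ~50 us

# ...and both remain well over an order of magnitude faster than local disk.
print(f"networked SSD: {networked_ssd} us vs local disk: {DISK_SEEK_US} us")
assert networked_ssd < DISK_SEEK_US / 30
assert networked_ram < DISK_SEEK_US / 30
```

The network hop is a rounding error next to a disk seek, which is why networked flash and memory still look fast compared with the spinning disk most workloads are on today.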
I am missing someone...
What about TMS????
They have had an all-flash storage system since before Violin had its own, and they had SSD PCIe adapters even before EMC "invented" them, and still no mention of their offering in your article.
Actually, you can use one of TMS boxes right behind a V-series from NetApp.
@ the other Anonymous Coward
Because TMS behind NetApp works so well -
Why no mention of TMS?
Why no mention of TMS in this story? A lapse.
Violin has dedupe? Really?
Really? That's why Whiptail was eliminated.
Your style of independent investigation seems off... too many "I have been told"s.
Where is the old Chris?