5 posts • joined 13 May 2010
These VDI storage articles get better each time. Well done! Yep, WhipTail can be that "black box SSD storage array" for the high-I/O operating system portion of the storage, which will save users about $500k for every 1,000 users on the overall project. For around $30/user.
Then simply put the rest onto your current storage platform and you're done!
Hilarious! "The infamous write cliff?" What are they talking about - consumer SSD drives from three years ago? It's such an infamous thing that they just made it up?
Maybe other people have that problem but we certainly don't. Off the top of my head I can think of one of my customers pushing 2TB/day through a 1.5TB appliance with no "write cliff".
I swear, the bigger they are, the DUMBER they are. How any of these multi-billion dollar IT firms have any credibility is beyond me...
WhipTail Racerunner Inline Data Deduplication & Compression in Production for Months
With all of the commenting that I've done in response to your postings over the past few months, I'm shocked we weren't mentioned in this article ;-)
As mentioned in the above comment, WhipTail's patent-pending system aggregation technology enables us to partner with Exar to bring inline data deduplication AND compression to primary storage for SSD and HDD TODAY.
A great example is a customer using it behind a server virtualization environment at a Fortune 500 insurance company. With the 150,000 IOPS, 0.1 ms latency, and 1.7 GB/s of throughput that the Racerunner provides, there is plenty of performance headroom to turn dedupe/compression on and SERIOUSLY consolidate their storage footprint while STILL speeding things up 2X - 3X.
We are also seeing 4:1 reduction ratios in both database and NAS environments.
Once again, in a vast realm of conjecture and hearsay, WhipTail is delivering results in real, production environments.
Please contact me with any questions.
Regional Sales Manager
WhipTail - MLC From Day One
Great article. To clarify, WhipTail has been MLC-based since day one. We are the only MLC-based enterprise storage vendor in production at customer sites for over 18 months and counting.
Along with proven performance metrics in dozens of customer sites across several use cases (including VDI, Server Virtualization, OLTP, Messaging, and Database), WhipTail is the first and only vendor to offer inline data deduplication & compression of primary storage - dramatically changing the ROI.
WhipTail is also the first flash storage vendor to offer a Hybrid model that includes 1.5 TB of MLC-based flash and 6.0 TB of SATA in a 2U appliance - making "flash for IOPS and SATA for capacity" a reality for MSRP $54,000. Key differentiator - with inline data deduplication and compression, customers can now scale both flash and SATA with the business for a dramatically reduced TCO.
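To make the "scale both flash and SATA" claim concrete, here is a minimal sketch of the effective-capacity arithmetic, using the raw capacities quoted above and the 4:1 reduction ratio claimed elsewhere in these comments. The ratio actually achieved depends entirely on the data set, so treat this as an illustration, not a guarantee.

```python
# Effective capacity of the 2U hybrid appliance if a 4:1 data
# reduction ratio holds across both tiers. Raw capacities are the
# figures quoted in the comment; the ratio is an assumption.

flash_raw_tb = 1.5       # MLC flash tier, raw
sata_raw_tb = 6.0        # SATA tier, raw
reduction_ratio = 4      # 4:1, as quoted for database/NAS workloads

effective_tb = (flash_raw_tb + sata_raw_tb) * reduction_ratio
print(f"Effective capacity: {effective_tb} TB")  # Effective capacity: 30.0 TB
```

At a 2:1 ratio the same appliance would present 15 TB effective, which is why the achievable ratio drives the TCO math so heavily.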
Please let me know if you have any questions!
All the best,
Thanks for the commentary and perspective. I am the Enterprise Account Manager for WhipTail Tech in North America and hope to provide some answers for your genuine concerns.
Regarding the article itself, the fact is that there is no doubt we can deliver on this. The University of Groningen in The Netherlands, as one example, is running 6,000 VDI users on a single appliance as we speak. They are currently adding a few more boxes to scale to 18,000 users. Three appliances, 6U, 540 watts of power for 18,000 VDI users. It is literally that simple. It's all math.
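The power-per-user arithmetic behind the Groningen figures above can be sketched as follows. The 540 W total and 18,000-user figures are from the comment; the even split of 180 W per appliance is an assumption for illustration.

```python
# Power-per-user math from the Groningen example: three 2U
# appliances drawing 540 W total for 18,000 VDI users.

appliances = 3
watts_per_appliance = 180   # assumed even split of the quoted 540 W
users = 18_000

total_watts = appliances * watts_per_appliance
watts_per_user = total_watts / users
print(f"{watts_per_user} W per user")  # 0.03 W per user
```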
Regarding cost analysis, it's very simple. Go to your preferred storage platform vendor and search for their scaling documents for 1,000, 5,000, and 10,000 VDI users and get MSRP pricing. Then research their power requirements to calculate power and cooling cost. Then factor in the lifetime TCO of a data center rack ($120,000 according to APC) and apply that to how many racks of data center space are required for their solution. Out of all the scaling documents we read and priced, the absolute BEST three-year TCO for a 5,000 user VDI environment was over $1.5 million. Ours is under $190,000 - much less for smaller environments. Math.
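The TCO recipe above (hardware MSRP + rack-space share + power and cooling) can be sketched in a few lines. The $120,000 rack lifetime cost is the APC figure cited; the per-vendor hardware, rack, and power inputs below are hypothetical placeholders chosen only to illustrate the shape of the comparison, not verified vendor pricing.

```python
# Simplified three-year TCO model for a 5,000-user VDI deployment,
# following the recipe in the comment: hardware + rack-space share
# + three years of power and cooling.

RACK_LIFETIME_COST = 120_000  # USD per rack, per the cited APC figure

def three_year_tco(hardware_msrp, racks_used, annual_power_cooling):
    """Hardware cost + rack-space share + 3 years of power/cooling."""
    return hardware_msrp + racks_used * RACK_LIFETIME_COST + 3 * annual_power_cooling

# Hypothetical inputs for illustration only:
traditional = three_year_tco(hardware_msrp=950_000, racks_used=4, annual_power_cooling=40_000)
whiptail    = three_year_tco(hardware_msrp=162_000, racks_used=0.1, annual_power_cooling=500)

print(f"Traditional array: ${traditional:,.0f}")  # Traditional array: $1,550,000
print(f"Dense SSD array:   ${whiptail:,.0f}")     # Dense SSD array:   $175,500
```

The point of the model is that rack count and power dominate at scale, which is why a 2U footprint moves the three-year number so much.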
That TCO/ROI value proposition has nothing to do with inline data deduplication and compression. That TCO/ROI is based solely on the core performance and efficiency value proposition of our management of the Racerunner SSD array.
Data deduplication and compression is an advanced feature that can be turned on or off, per LUN, based on what's best for your environment. We see the best value of that feature around virtual server environments, where we can deduplicate significantly while improving performance 200%. It can also be handy in certain database environments where 4:1 compression can be achieved.
Our 7.5 TB appliance is actually a hybrid of 1.5 TB of SSD and 6 TB of SATA drives, all in a 2U appliance. This is a great play for an SMB shop where they dedicate the 1.5 TB of "fast" storage to Exchange, VDI, virtual servers, key databases, etc, and the "slow" storage to file shares and other slow workloads. Then we deduplicate the storage for the "slow" workloads so that an SMB can have a 2U SAN that will scale significantly with the business. This same appliance can also be used by the large enterprise to carve up fast and slow workloads within a specific workload to maximize the minimal data center and environmental impact.
Regarding larger capacity SSD-only appliances, we are capable of delivering 9-12 TB of SSD in the same 2U chassis, but we are working on your behalf to confirm that those size drives are ready for the enterprise.
Our conservative, "over the fabric", production numbers for IOPS are 150,000 read and 100,000 write. Sustained, random, 4 KB block size. At 20 IOPS per user (again, see scaling documents from Citrix, VMware, etc. - they are saying AT LEAST 20), that's 150,000/20 IOPS = 7,500 users per appliance. Scalability from a capacity perspective is limited by each customer's approach to provisioning, cloning, golden images, etc. Obviously the University of Groningen excelled in this department to enable 6,000 users per appliance. Again, math.
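The sizing math above can be sketched directly. The 20 IOPS-per-user figure is the comment's stated assumption (from vendor scaling guides); real steady-state workloads vary widely, so this is a back-of-envelope ceiling, not a sizing guarantee.

```python
# Back-of-envelope VDI sizing: appliance read IOPS divided by a
# per-user steady-state IOPS estimate, per the figures quoted.

APPLIANCE_READ_IOPS = 150_000
IOPS_PER_USER = 20  # the comment's conservative per-user assumption

users_per_appliance = APPLIANCE_READ_IOPS // IOPS_PER_USER
print(f"Up to {users_per_appliance} users per appliance")  # Up to 7500 users per appliance
```

Note that a write-heavy mix (the 50%-90% write pattern mentioned below for VDI) would size against the 100,000-write-IOPS figure instead, giving a lower ceiling.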
Regarding Comment #1, caching is a typical response to the latency and overall mechanical limitations of HDD. It helps, but it's just another Frankenstein addition to the core problem that adds cost, complexity, energy consumption, etc. The mathematical problem with the caching approach is that VDI workloads are 50% - 90% WRITE - we solve that at the disk level, which takes us back to why have racks of equipment when you can do it in 2U? I would suggest asking your preferred storage platform vendor for a 6,000-user VDI environment reference so you can ask how many racks of storage it took, how much it cost to procure, and how much it cost to maintain. That is, if they can even find a reference.
Regarding Comment #2, I would question if the DMX or VMAX can even deliver on a 6,000-user environment without crashing, without you allocating racks of storage, and without dedicating a team of people to focus solely on replacing failed hard drives. But even if they could actually make it work, why would you spend 10X as much with 100X the power, cooling, and rack space? So, to answer your question of "why invest in a separate console, vendor, etc," the answer is to save the environment and millions of dollars.
Regarding Comment #3, back to the 50% - 90% write workload.
Thank you again for your comments. I can be reached at firstname.lastname@example.org or 615-337-0883.
Facts are stubborn things. This is a mathematical conversation. It really IS this simple - it's just up to each of you to accept it. Welcome to the new world order.
I look forward to further discussion.