Thanks for the commentary and perspective. I am the Enterprise Account Manager for WhipTail Tech in North America and hope to provide some answers for your genuine concerns.
Regarding the article itself, there is no doubt we can deliver on this. The University of Groningen in the Netherlands, as one example, is running 6,000 VDI users on a single appliance as we speak, and they are now adding boxes to scale to 18,000 users. Three appliances, 6U, 540 watts of power for 18,000 VDI users. It is literally that simple. It's all math.
Regarding cost analysis, it's very simple. Go to your preferred storage platform vendor, find their scaling documents for 1,000, 5,000, and 10,000 VDI users, and get MSRP pricing. Then research their power requirements to calculate power and cooling costs. Then factor in the lifetime TCO of a data center rack ($120,000 according to APC) and apply that to however many racks of data center space their solution requires. Out of all the scaling documents we read and priced, the absolute BEST three-year TCO for a 5,000-user VDI environment was over $1.5 million. Ours is under $190,000 - much less for smaller environments. Math.
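For anyone who wants to check the arithmetic, here is a minimal sketch of that comparison in Python. The two dollar figures are the ones quoted above; the per-seat breakdown is illustrative, not a vendor quote.

```python
# Three-year TCO figures quoted above for a 5,000-user VDI environment.
best_competitor_tco = 1_500_000  # best competing 3-year TCO we priced
whiptail_tco = 190_000           # our 3-year TCO for the same user count
users = 5_000

def tco_per_user(total_tco: int, users: int) -> float:
    """Three-year cost per VDI seat."""
    return total_tco / users

print(f"Competitor: ${tco_per_user(best_competitor_tco, users):,.2f}/user")
print(f"WhipTail:   ${tco_per_user(whiptail_tco, users):,.2f}/user")
# Competitor: $300.00/user vs. WhipTail: $38.00/user
```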
That TCO/ROI value proposition has nothing to do with inline data deduplication and compression. It is based solely on the core performance and efficiency of our management of the Racerunner SSD array.
Data deduplication and compression are advanced features that can be turned on or off, per LUN, based on what's best for your environment. We see the greatest value in virtual server environments, where we can deduplicate significantly while improving performance by 200%. It can also be handy in certain database environments where 4:1 compression can be achieved.
Our 7.5 TB appliance is actually a hybrid of 1.5 TB of SSD and 6 TB of SATA drives, all in a 2U appliance. This is a great play for an SMB shop: dedicate the 1.5 TB of "fast" storage to Exchange, VDI, virtual servers, key databases, etc., and the "slow" storage to file shares and other undemanding workloads. Then we deduplicate the "slow" tier so that an SMB can have a 2U SAN that scales significantly with the business. The same appliance can also be used by a large enterprise to carve out fast and slow tiers within a specific workload, minimizing data center and environmental impact.
Regarding larger-capacity SSD-only appliances, we are capable of delivering 9-12 TB of SSD in the same 2U chassis, but we are working on your behalf to confirm that drives of that size are ready for the enterprise.
Our conservative, "over the fabric," production numbers are 150,000 read IOPS and 100,000 write IOPS - sustained, random, 4 KB block size. At 20 IOPS per user (again, see the scaling documents from Citrix, VMware, etc. - they say AT LEAST 20), that's 150,000 / 20 = 7,500 users per appliance. Scalability from a capacity perspective is limited by each customer's approach to provisioning, cloning, golden images, etc. Obviously the University of Groningen excelled in this department to enable 6,000 users per appliance. Again, math.
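The sizing arithmetic above, written out explicitly (nothing new here, just the quoted numbers):

```python
read_iops = 150_000   # sustained, random, 4 KB reads, measured over the fabric
iops_per_user = 20    # steady-state per-user estimate from the Citrix/VMware scaling docs

users_per_appliance = read_iops // iops_per_user
print(users_per_appliance)  # 7500
```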
Regarding Comment #1, caching is a typical response to the latency and overall mechanical limitations of HDD. It helps, but it's just another Frankenstein bolt-on to the core problem, one that adds cost, complexity, and energy consumption. The mathematical problem with the caching approach is that VDI workloads are 50% - 90% WRITE - we solve that at the disk level, which takes us back to the question: why have racks of equipment when you can do it in 2U? I would suggest asking your preferred storage platform vendor for a 6,000-user VDI reference so you can ask how many racks of storage it took, how much it cost to procure, and how much it costs to maintain. That is, if they can even find a reference.
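To see why the write mix matters, here is a sketch using the standard harmonic-mean blend applied to the read and write numbers stated earlier in this reply. The blend formula and the 80% example point are my illustration of the 50% - 90% write range, not a vendor-published calculation.

```python
read_iops = 150_000    # sustained random 4 KB reads (from above)
write_iops = 100_000   # sustained random 4 KB writes (from above)

def blended_iops(write_fraction: float) -> float:
    """Harmonic-mean blend: sustainable IOPS for a mixed read/write workload."""
    read_fraction = 1.0 - write_fraction
    return 1.0 / (read_fraction / read_iops + write_fraction / write_iops)

for wf in (0.5, 0.8, 0.9):
    print(f"{wf:.0%} write: ~{blended_iops(wf):,.0f} IOPS")
```

Even at 90% write, the blended rate stays above 100,000 IOPS, which is why a write-heavy VDI profile doesn't sink the per-appliance user count.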
Regarding Comment #2, I would question whether the DMX or VMAX can even deliver a 6,000-user environment without crashing, without allocating racks of storage, and without dedicating a team of people solely to replacing failed hard drives. But even if they could actually make it work, why would you spend 10X as much with 100X the power, cooling, and rack space? So, to answer your question of "why invest in a separate console, vendor, etc.," the answer is to save the environment and millions of dollars.
Regarding Comment #3, this comes back to the 50% - 90% write workload discussed above.
Thank you again for your comments. I can be reached at [email protected] or 615-337-0883.
Facts are stubborn things. This is a mathematical conversation. It really IS this simple - it's just up to each of you to accept it. Welcome to the new world order.
I look forward to further discussion.