How can NetApp respond to Tintri?
This topic was created by Chris Mellor.
With one customer running 800 VMs off one 3U Tintri box, and another buying $300K of Tintri rather than $1.25 million of NetApp storage to run VMs, what can NetApp do to stop a mass migration of VMs off its FAS arrays and onto Tintri VMstores?
Re: How can NetApp respond to Tintri?
Simple - NetApp buy Tintri.
Isn't that the way most large corporates innovate these days?
Buy Tintri; I'm sure the VCs will sell up.
Looking into the devices further: they dedupe and inline-compress blocks written to flash. When storing large numbers of similar VMs this will yield huge storage savings; it should also give a big read performance benefit, since a common small subset of blocks can be cached in memory. It appears to write all data to flash first and later move less-used blocks to the spinning disks.
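The dedupe idea described above is worth making concrete. A minimal sketch of content-addressed block dedupe — this is a generic illustration of the technique, not Tintri's actual implementation:

```python
import hashlib

def dedupe_blocks(data: bytes, block_size: int = 4096):
    """Store each unique block once, keyed by its content hash."""
    store = {}    # hash -> block bytes (the unique-block pool)
    layout = []   # ordered per-block hash list (the 'file' metadata)
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        layout.append(digest)
    return store, layout

# Two hypothetical 'VM images' that share most of their blocks:
base = b"A" * 4096 * 10
vm1 = base + b"B" * 4096
vm2 = base + b"C" * 4096

store, layout = dedupe_blocks(vm1 + vm2)
print(len(layout))  # 22 logical blocks written...
print(len(store))   # ...but only 3 unique blocks stored
```

With many near-identical VM images, the unique-block pool stays close to the size of a single image, which is exactly the savings the poster describes.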
What can NetApp do? They can stop f*ing their clients with overpriced kit that's poorly configured for the job. Copying Tintri's device is one option, but will they be able to compete while adding $0.29 to each dollar the product costs (based on 2011 earnings on revenue)? They could buy Tintri and jack the price up, but the pricing cat is out of the bag and clients will put pressure on NetApp to reduce or renegotiate prices. It boils down to how much of NetApp's market is servicing VMs. If it's sizable, you might want to re-evaluate that stock portfolio.
Improve the breed - competition will make things better for everyone. If there are too many MBAs running the ship, though, this may prove to be impossible... they will even find a way to botch the acquisition of Tintri.
What are those 800 VMs doing?
I looked at the I/O profile of my little VM environment (~150 VMs), which hosts a production and a pre-production e-commerce site (LAMP). On the production side, average IOPS is 5 IOPS/VM for the VMFS file stores (databases use RDMs). On the non-production side, which is a third of the VMs in the cluster at this point in time, the average is 2.2 IOPS/VM (both numbers averaged over 30 days). In my experience this is fairly typical per-VM I/O (going back across about a dozen clusters running web site infrastructure services over the years). Bulk unstructured data would be housed on a dedicated NAS cluster of sorts (and accessed via in-guest NFS), and structured data goes to RDMs (with I/O measured separately - not included in the above numbers). As a result, per-VM I/O remains low - basically log files and stuff - with more I/O-intensive things on RDMs to leverage storage-based snapshot technology (primarily - it also makes it easier to manage space and monitor I/O).
So with those numbers, 800 VMs is not hard to accomplish on small storage. Though we chose bigger/better storage primarily for the availability and maturity of the platform - the array itself has been shipping for longer than Tintri has been in existence. If we were a bigger company and had more side projects to test stuff out on, it would be interesting to see Tintri and others in practice.
I suppose if we built things to run EVERYTHING inside VMware data stores, Tintri might be advantageous there, but I really don't see myself ever doing that, any more than I would see myself building physical servers that ran everything off internal disk drives. For one thing, the VMware-based snapshots really are quite limited (and slow!) - I take a VMware snapshot on average maybe once a month. I also like maintaining compatibility with physical infrastructure - knowing I can spin up a physical server, swing an RDM over onto it and keep going, or share a NAS mount point between a VM and a physical box. I'm not in the camp (and never have been) that believes everything must be a VM.
Production VMFS workload is 96% write, non production workload is 87% write for these particular VMs.
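The per-VM averages above make the 800-VM claim easy to sanity-check with back-of-envelope arithmetic (the 800-VM count and per-VM rates are the thread's figures; the extrapolation is illustrative):

```python
def aggregate_iops(vm_count, iops_per_vm):
    """Back-of-envelope aggregate IOPS for a fleet of similar VMs."""
    return vm_count * iops_per_vm

# The commenter's figures: ~5 IOPS/VM production, ~2.2 IOPS/VM non-production
print(aggregate_iops(800, 5))           # 4000 -- modest for a hybrid array
print(round(aggregate_iops(800, 2.2)))  # 1760
```

Even at the production rate, 800 such VMs generate only a few thousand IOPS - well within reach of a small hybrid flash/SATA box.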
800 VMs - no big deal
It all depends on how good your virtualization solution is. If you are using one of the bloated, lumbering behemoths of virtualization tech that rob you of 40% of your bare-metal performance (VMware, Xen and KVM are all in the same boat here), then maybe you need a big, expensive box.
Meanwhile, a sensible and efficient solution like Linux Vserver with its hashify feature means you can have 800 similar VMs running on a single _HOST_ in practically the same amount of storage as a single VM would take up (wonders of copy-on-write hard links), with memory deduplication implicitly thrown in for free. Try that on one of the big three mentioned above and watch your performance fall off a cliff (after the 40% cliff you fell off before you even started).
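The hard-link trick mentioned above can be sketched in a few lines. This is a rough, generic illustration of file-level dedupe via hard links, not Vserver's actual hashify code; the file names and contents are made up for the demo:

```python
import hashlib
import os
import tempfile

def hashify(root):
    """File-level dedupe: replace identical files with hard links,
    keyed by content hash (a rough sketch of the hashify idea)."""
    seen = {}  # content hash -> canonical path
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            if digest in seen:
                os.unlink(path)
                os.link(seen[digest], path)  # same inode, stored once
            else:
                seen[digest] = path

# Demo: two 'guest' trees containing an identical file
root = tempfile.mkdtemp()
for guest in ("vm1", "vm2"):
    os.makedirs(os.path.join(root, guest))
    with open(os.path.join(root, guest, "libc.so"), "wb") as f:
        f.write(b"identical library bytes")

hashify(root)
a = os.stat(os.path.join(root, "vm1", "libc.so"))
b = os.stat(os.path.join(root, "vm2", "libc.so"))
print(a.st_ino == b.st_ino)  # True -- one copy on disk, two directory entries
```

The copy-on-write part is what the filesystem or guest tooling adds on top: a guest that modifies a linked file gets its own private copy, so sharing is safe.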
Re: 800 VMs - no big deal
Except that the bloated, lumbering behemoths actually let you run Windows along with countless other OSes, which Linux Vserver does not - and that is essential for a lot of enterprises, mine included.
I'll take the performance hit (which is not 40% in my experience) on the chin thanks.
So who is the client - Siemens, Alstom or similar? (This is a wild guess.) Pint, as well; it's Friday.
Re: 800 VMs - no big deal
> I'll take the performance hit (which is not 40% in my experience) on the chin thanks.
That is your choice - and in some cases it is acceptable (or even doesn't matter, e.g. prototyping). But for large-scale service provision, a 40% increase in your hardware capital expenditure isn't going to be trivial. And the fact that you have to run seven servers for every five servers' worth of capacity might also add up to an administration overhead that wipes out the savings you gain from flexibility.
I suspect you haven't tested your setup to saturation point (I presume you tested it reasonably scientifically, otherwise by "experience" you must mean "opinion"). At full saturation load (e.g. parallel compiling, MySQL, and similar tasks where you hammer the machine with about 2x the number of hardware CPU threads your host has), VMware/Xen/KVM overhead robs you of about 40% (well, OK, 35-55% depending on the exact task) of what you'd get out of bare metal - and that is without memory deduplication and with only one VM running on the host. More VMs mean more task switching, worse cache hit rates and a more random disk I/O pattern, so if you do that the performance hit will be considerably greater.
But don't take my word for it - run your own tests with your own applications to establish the impact on the peak sustained throughput.
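The capacity arithmetic behind this argument is simple to write down. Note that the poster's "7 for every 5" actually corresponds to a ~29% performance loss; a full 40% loss implies roughly 1.67x the hardware. A quick illustrative calculation (the 40% figure is the thread's claim, not a measured fact):

```python
import math

def hosts_needed(bare_metal_hosts, overhead):
    """Hosts required to match a bare-metal fleet if the hypervisor
    costs `overhead` (a fraction, e.g. 0.40) of each host's performance."""
    return math.ceil(bare_metal_hosts / (1.0 - overhead))

print(hosts_needed(5, 0.40))    # 9 -- a 40% hit implies ~1.67x the hardware
print(hosts_needed(100, 0.40))  # 167
```

The point stands either way: at a fixed percentage overhead, the extra hardware (and the admin load that comes with it) scales linearly with fleet size.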
Your numbers are pure fiction and your understanding of what people want to achieve with virtualisation is deeply flawed.
> "Your numbers are pure fiction and your understanding of what people want to achieve with virtualisation is deeply flawed."
I ran my benchmarks to reach those numbers. Have you? Or have you just swallowed the virtualization vendors' marketing rhetoric wholesale as gospel?
Virtualization has its place - I am not disputing that. The problem is that most entities (corporate or otherwise) are using it for purposes it is woefully ill-suited for - and then they complain that their system is slow and realize that their architecture doesn't scale horizontally as well as they had hoped, either.
Actually, I've done extensive work on VMware at a major UK financial, where I was responsible for eking out every last drop of performance from a system (with particular focus on IO) and making the decision on V or P.
40% of system resources is clearly rubbish, and basically meaningless even if it were true, because running a hypervisor impacts different elements of the system differently.
I'll believe it when I see the details of what benchmarking you did and how you did it (and I will point out the flaws in it). I gave a basic outline of the sort of workloads that demonstrate a 40% deterioration, and it holds similarly across the board, regardless of whether you are CPU-, memory- or I/O-bound (the deterioration is slightly worse if you are I/O-bound).
Then again, I accept the theoretical possibility that your workload may be weird and, by magic or strange coincidence, suffers a smaller penalty. But for large-scale parallel compiling and very highly tuned MySQL workloads (by this I mean things like very carefully designed indexes and enough memory that all indexes fit in RAM), the performance drop is about 40%.
And since we are throwing round opaque $company references, my MySQL tests and tuning showing the 40% drop were done on the systems of a major UK recruitment/HR company.
eggs in one basket
800 machines dependent on a single piece of hardware - sounds great for business continuity, even if there is failover!
Re: eggs in one basket
It depends on that piece of hardware. If it were a VMAX array with proper redundant FC interconnects, I'd be happy (assuming cross-site replication, in case of total disaster). Were it a random NAS box with NFS shares and RAID 5+1, not so happy.
Extract NetApp SW from price comparisons
A point emailed to me:
Yeah, but that Tintri price vs. NetApp price doesn't include all the NetApp software stack - you should remove it and then price it.
Don't get me wrong - NetApp needs a mixed aggregate. They think a mixed aggregate is their flash (PAM card) plus the disks, but a truly hybrid array like Tintri is no comparison to an array like a FAS, because 50-60% of the cost is the software & support.
I'd like to see apples-to-apples rather than apples-to-watermelons.
No Rocket Science Present
So the article breaks no new ground, discusses a product that breaks no new ground, and solves a problem that we've been solving for a while.
So what we've established here is the following:
1. 800 VMs is nothing to write home about. I've configured dozens of environments for customers with this number (and more) of VMs.
2. NFS, Fibre Channel or iSCSI - it doesn't matter. They are just interconnects. I prefer iSCSI because of its simplicity and total cost of operation (TCO).
3. Tintri offers nothing new. It's just a hybrid box, and there are multiple such products on the market. Anyone seen the EqualLogic XVS product or the Nimble CS series? Performance comes from the SSDs and the capacity from the SATA drives. In the case of EqualLogic, it can be expanded based on capacity or performance by adding additional arrays - very slick.
4. The big boys overcharging? Was this NetApp or EMC (Celerra - NFS)? Nothing new here either.
So what did I miss?
Sent to me by mail:
I think there might be an error in the article's cost figures regarding Tintri. If they are using 12 rack units, or 12U, then it should be $75K per device, which would make more sense, since I believe the retail price is $90K. Thank you for sharing the article.
Box performs well
Sent to me, and posted anonymously just in case ...
Since each Tintri 540 is 3U high and they have 12U of them, that works out to $75K each, not $25K each.
That would fit in nicely with the $80K we paid for our single Tintri 540.
We are running 130 VMs on it so far and have not come close to stressing it out. It still shows 50% of its IOPS capacity available.
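The $75K-per-box correction in this post is just rack-unit arithmetic, easy to verify (all figures are the thread's, not confirmed list prices):

```python
def price_per_box(total_price, total_rack_units, units_per_box):
    """Spread a quoted price across the boxes the rack height implies."""
    boxes = total_rack_units // units_per_box
    return total_price / boxes

# The thread's figures: a $300K quote covering 12U of 3U Tintri 540s
print(price_per_box(300_000, 12, 3))  # 75000.0 -- $75K per box, not $25K
```

That lines up with the ~$80K this poster reports paying for a single unit.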
NetApp does all this on top of simply serving storage to virtual machines:
- replicating data around on their gear, from small to medium to large boxes and vice versa > consolidation, intercontinental data protection in multiple topologies, old to new boxes and back
- doing hundreds of snapshots on all data > incomparable RPO; massive testing and development environments set up quickly means excellent time to market and top SLAs
- doing MetroCluster transparent site failover in case of a hardware component error > increased uptime
- having integration software for snapshots/clones for many major business apps > ease of use
- having some thousands of engineers worldwide to provide worldwide 24/7 support for every class of business
- virtualizing other vendors' storage to integrate it into the NetApp feature ecosystem > investment protection
- delivering block and file storage from one system with the complete feature set beneath it (snaps, dedup, compression, WORM) in all system families >> true unified storage; nobody else has it
BTW, who are these Tintri guys?
Dear Wrong Aspect,
Nice. You delineate the enterprise-class software features that mark NetApp out as a very strong company. I don't know if that will be enough to stem the incoming tide of cheaper VDI systems.
Estimated cost of storage per VM instance:
Tintri - $23
Dell EqualLogic PS6010V - $338
HP P4000 with flash - P4900 - $233
I think NetApp will be up there in the $200 - $350 class, if not higher. That's a large cost disadvantage.