NetApp's Cloud Czar predicts the death of VMAX

Blocks and Files NetApp's CTO of cloud is predicting the death of EMC's VMAX and other tier 1 storage arrays. What will kick them off the data centre stage? Flash arrays and storage-class memory, apparently. Val Bercovici, NetApp's "Big Data and Cloud Czar", sees basically two storage tiers in the now not-so-distant future: …

COMMENTS

This topic is closed for new posts.
Anonymous Coward

The bears are coming . . .

"NetApp is getting trashed in its very NAS heartland by EMC."

This. EMC may be the storage company people love to hate, but they are moving quickly given their size, while NetApp appears to be flailing around, pushing out incremental updates to Data ONTAP and presenting no concise vision of their future plans.


Val Bercovici

Thanks for picking up my blog Chris.

NetApp's Virtual Storage Tier (VST) architecture spans a continuum from the (solid state or spinning) disk through to the array (cache) all the way up to the server hosting apps in question.

Today the Goldilocks scenario reported by Andrew Nowinski from Piper Jaffray is meant to emphasize that DataONTAP 8.1 cmode is the "just right" Unified Scale-out storage array for the sweet spot of the Shared Virtual Infrastructure market, which also happens to be at the center of our VST architecture.

We will also soon fill out the edges of the VST continuum with real-time, granular, self-managed and de-duped tiering at the disk and server/host layers. Stay tuned to my blog for further updates later this summer! :)

-Val.


Well done NetApp

You're announcing a paradigm that EMC seem to...have had for a while!

http://www.theregister.co.uk/2011/07/14/emc_vmaxe/

http://www.emc.com/collateral/hardware/white-papers/h8125-storage-tiering-sql-server-vmax-wp.pdf

I'm not a storage guru, so feel free to enlighten me on how Goldilocks is "new" compared to "fully-automated storage tiering (FAST VP)"?

Anonymous Coward

Re: Well done NetApp

Because Goldilocks is NetApp and FAST is EMC, therefore etc. etc.

Anonymous Coward

Re: Well done NetApp

Without seeing all the details I think the difference is probably that NetApp's solution doesn't do any of the "automated migration" stuff that EMC and other vendors promote. It's all about using cache to optimize performance based on "real time" workload rather than disk tier.

What Val is saying makes sense but it's more of an industry trend than a unique selling point for NetApp. Flash is a performance enhancer. Flash is getting deployed in hosts, on the fabric, in controllers, in disk groups and as the entire back end. EMC is doing it, NetApp is doing it, others are sure to as well.

The upshot of this is that performance is no longer bound to your processors and disks, so the need for super fast high-end systems like VMAX is alleviated and the differentiating factor between storage systems now becomes other features they support, such as storage efficiency.
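The cache-promotion approach described above can be sketched as a toy model. This is an illustration only, not NetApp code: the class name, the promotion threshold, and the eviction rule are all invented for the example. The idea is that hot blocks earn their way into flash based on live read activity, rather than being migrated between tiers on a schedule:

```python
from collections import Counter

class ReadCache:
    """Toy read cache: a block is promoted to flash once it proves 'hot'
    under the live workload, instead of data being migrated between tiers."""

    def __init__(self, capacity, promote_after=3):
        self.capacity = capacity            # number of blocks flash can hold
        self.promote_after = promote_after  # reads needed before promotion
        self.hits = Counter()               # per-block read counts
        self.flash = {}                     # block_id -> data held in flash

    def read(self, block_id, disk):
        if block_id in self.flash:          # hot path: served from flash
            return self.flash[block_id]
        self.hits[block_id] += 1
        data = disk[block_id]               # slow path: served from disk
        if self.hits[block_id] >= self.promote_after:
            if len(self.flash) >= self.capacity:
                # evict the least-read cached block to make room
                coldest = min(self.flash, key=lambda b: self.hits[b])
                del self.flash[coldest]
            self.flash[block_id] = data
        return data
```

Note the design consequence the commenter points at: nothing ever "moves" off disk here; flash only shadows whatever the real-time workload is actually reading.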

Anonymous Coward

Re: Well done NetApp

The thing is that VMAX is a cache array: everything is cached in and out of the array by seriously smart algorithms through a gajiggabyte of cache. The cache is augmented by the FAST behaviour, which IIRC is intelligent enough to pre-emptively move tracks up and down tiers, even over a week-to-month cycle. (I last worked on a VMAX project about a year ago, so may be hazy.)
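The migration behaviour described above (as the commenter recalls it) could be caricatured as a periodic rebalance. Everything here is invented for illustration — the tier names, the observation window, and the slot count bear no relation to how FAST actually ranks extents:

```python
def rebalance(extents, access_counts, flash_slots):
    """Toy FAST-style rebalance: after observing a window of I/O, move the
    most-accessed extents up to flash and demote the rest to slow disk.
    `extents` maps extent_id -> current tier ('flash' or 'sata')."""
    # Rank extents by how often they were touched during the window
    ranked = sorted(extents, key=lambda e: access_counts.get(e, 0), reverse=True)
    hot = set(ranked[:flash_slots])        # winners get the fast tier
    moves = []
    for e in extents:
        target = "flash" if e in hot else "sata"
        if extents[e] != target:
            moves.append((e, extents[e], target))  # what a real array would migrate
            extents[e] = target
    return moves
```

The contrast with a pure caching design is that data genuinely changes homes here, which is why this style of tiering operates on a cycle of hours to weeks rather than per-read.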

I think I'd rather use the VMAX for general virtualised disk requirements and a NetApp array for filer-type work. As I recall, EMC's Celerra is a turd.

Anonymous Coward

Please NetApp

Remember when NetApp used to be a nice company, before they decided to out-EMC EMC?

NetApp was, and is, a one-trick pony. They made a really slick filer with a solid, if costly, software head. Now they are trying to come up with the next OnTap as the other storage vendors have caught up with their dedup and compression, and in some cases surpassed their once-unique snap functions.

I agree that tier one will not need to be used as often, but I don't see how that equals NetApp's advantage. NetApp is just a different variety of proprietary storage. If anything people will begin moving to the Hadoop style clusters of storage which run on a massive parallel file system and cheap disk, not a bunch of really expensive filers with tons of proprietary software and really expensive Flash. NetApp is basically saying the same thing that EMC is saying.

Also, mainframe, the FICON connection, is not "moribund" at all. It is growing if you follow the numbers, which makes sense as private cloud solutions are basically a roll your own mainframe. It is smaller than distributed systems, but that is not anything new.

Anonymous Coward

Lets look at what potentially the best NetApp customer in the world is doing....

Google, with their PBs upon PBs of file data, doesn't seem to be in any hurry to place an order for an uber-proprietary OnTap environment. Why? It would cost them 100x as much as their Hadoop-style architecture. How much flash does Google use? None, but you will notice their websites are pretty ridiculously high performers. Instead of using stupid but high-performing hardware (flash), they have decided to just write a better parallel IO algorithm which breaks the IO up amongst many nodes and processes it in parallel. You can get really high performance on commodity gear if you use the nodes all together, each on its micro-portion of the IO request.
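The "break the IO up amongst many nodes" idea amounts to a scatter-gather read. A minimal sketch, assuming each node holds one stripe of the object — the node layout and naming here are illustrative, not how GFS or any real parallel file system is implemented:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_read(nodes, key):
    """Toy scatter-gather: each node holds one stripe of the object;
    fetch all stripes concurrently and reassemble them in order."""
    def fetch(i):
        return nodes[i].get(key, b"")  # one stripe per commodity node
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        stripes = list(pool.map(fetch, range(len(nodes))))
    return b"".join(stripes)           # aggregate bandwidth of all nodes
```

The point the commenter is making: no single node needs to be fast, because the request's latency and bandwidth are spread across all of them.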

If anything the "cloud" hoses NetApp because large scale public cloud providers are not going to shell out NetApp dollars for their PBs of capacity... nor are they going to be buying VMAX. NetApp and VMAX are two shades of the same color. EMC is not the wave of the future, NetApp is not the wave of the future.... commodity clusters are the wave of the future.

Anonymous Coward

Re: Lets look at what potentially the best NetApp customer in the world is doing....

In my opinion Google isn't really a good example here. As far as we (I) know, Google uses individual low-spec disks in individual servers. The servers are pulled out and replaced if anything goes wrong. While this makes for fast replacement of bad hardware, it doesn't make for optimal use of resources or energy, particularly energy. Normally if a disk fails in a large array, you pull the disk out and replace the disk. If a battery in the UPS fails, you pull out the battery and replace it. If a memory module in a server fails, you pull it out and replace it (often with the server still running, these days).

If something at Google fails, be it a battery, disk, module, network interconnect, etc., the entire server is pulled out and replaced. It is very wasteful.

It's also worth pointing out that Google can do this because they can dynamically move workloads around very quickly. The vast majority of enterprise-class software can be clustered, and that's it: you fix the broken element in the cluster, but you share as much as you can; this is more efficient for energy and therefore cooling.

Anonymous Coward

Re: Lets look at what potentially the best NetApp customer in the world is doing....

Yes, I think you are right. Google has 10,000s of nodes, so if a node goes down they replace the entire server and figure out why it went down after the fact. I assume they don't throw away the entire server but handle break/fix depot work as a separate process. It is wasteful in the hardware sense, but not in the staffing sense: even with PFAs on the servers, it would be a huge manual effort to fix each server in place instead of just swapping the node at their crazy scale. It could be done more efficiently, though; the largest problem with Google is the low utilization rates on their servers. I have always thought that Google's IO-intensive workloads would be well suited to a few rows of large mainframes running Linux.

There are definitely problems with the Google model, but I think that really is the "cloud" model, not the EMC/NetApp approach (as much as NetApp slams EMC, they are basically following the same strategy) of high-cost hardware and high-cost software. NetApp is only bashing the tier one providers, the "big three" of EMC, IBM and HDS, because they can't compete in scale-up storage. They have a filer which they have rigged into a SAN, not a SAN which also handles NFS.

All of these proprietary storage vendors, NetApp and EMC, like to talk about "cloud" and bash the other for not being a "cloud" solution. Yet none of the "cloud" companies use anything resembling their model. In the "cloud", neither VMAX or NetApp exist.


NetApp is the next RIM

Anonymous Coward

Seriously, is this sponsored by the Department of the Obvious Department?

"NetApp's Cloud Czar predicts the death of VMAX"

I rank that one up there with "Our initial assessment is that they will all die" from the Iraqi Information Minister during the invasion. From Baghdad Bob to Valley Val.


Read before you post

The amount of Anonymous cowtard ignorance on this thread is impressive - even for ElReg commenters! I know Chris likes to get eyeballs by generating controversy, but if you bother to actually read Val's original blog, you'll indeed notice this is not about NetApp vs. EMC.

It's about the potential for flash to disrupt them both and Val's pitch that VST is a consistent approach for NetApp customers to protect existing investments as technology moves forward. It should be clear to the reader that EMC employs a completely alternative portfolio approach to address the same disruption, with seemingly a lot less investment protection for existing customers.

While it's premature to predict which approach will win in the end, JJM has posted a very nice analysis here cutting through the clutter to point out what's really happening in the trenches. Good on him!

StorageNewsletter analysis of NetApp vs EMC storage-only revenues (see comments section): http://www.storagenewsletter.com/news/financial/netapp-fiscal-4q12-financial-results
