* Posts by Val Bercovici

12 posts • joined 15 Jan 2009

Who ate all the flash pie: Samsung, 'course, but hang on... GOOGLE?

Val Bercovici
Go

valb@netapp.com

Disclosure - I work for NetApp.

Not sure how Gartner calculated its numbers, but as of mid-2013 NetApp has shipped over 50PB of FlashCache (formerly known as PAM II) alone. More impressive is that real-time, data-driven (vs. policy-driven) auto-tiering technology is accelerating more than 5 exabytes of disk.

Once again, viewing capacity shipped relative to revenue reveals much more about the storage industry and the value delivered to customers!


NetApp's FlashRay to zap Symmetrix with fibre channel

Val Bercovici
Thumb Up

All-SSD Systems

Hi Op #2,

I would always defer to your sales team regarding specific configuration recommendations. However, all-SSD FAS systems are supported and available with the appropriate disk shelf. Your SE would know whether they are recommended for a given workload vs. the new EF540. Larger SSDs for FAS are coming soon as well, although I don't want to pre-empt any upcoming releases with specific pre-announcements here.

-Val.

Val Bercovici
Coffee/keyboard

FlashRay Interest Abounds! :)

Great to see all the FlashRay interest on this forum! Let me address some of the points above.

1. EF540 vs Clustered Data ONTAP FAS | V-Series | EDGE

Application expectations of performance at the solid-state level are fundamentally different from performance expectations of disk or hybrid flash-plus-disk storage systems. The latter went from response times of tens of milliseconds for disk-based systems to single milliseconds for hybrid systems. Clustered ONTAP-based storage systems do very well in both regards.

However, for the former, the EF540 uses 1ms as the response-time ceiling for most apps and goes on to deliver 300K consistent 4K I/O operations per second (IOPS) at that level with enterprise Reliability, Availability & Serviceability (RAS). That performance level is something Data ONTAP and all of today's popular competitor disk-based arrays were never designed to deliver. Hence the need for a different architecture at the sub-1ms response-time level.
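As a quick back-of-the-envelope illustration of what those numbers imply (my own arithmetic from the quoted 300K 4K IOPS and 1ms figures, not NetApp specifications), the IOPS rate translates directly into small-block throughput, and Little's Law gives the I/O concurrency needed to sustain it at that latency ceiling:

```python
# Back-of-the-envelope math for a sub-millisecond all-SSD array.
# Only the 300K IOPS and 1ms inputs come from the post above;
# everything else is illustrative arithmetic.

def throughput_gbps(iops: int, block_bytes: int) -> float:
    """Equivalent throughput in GB/s for a given IOPS rate and block size."""
    return iops * block_bytes / 1e9

def outstanding_ios(iops: int, latency_s: float) -> float:
    """Little's Law: concurrency = arrival rate x response time."""
    return iops * latency_s

iops = 300_000      # consistent 4K IOPS quoted for the EF540
block = 4096        # 4 KiB I/O size
ceiling = 0.001     # 1 ms response-time ceiling

print(f"Throughput: {throughput_gbps(iops, block):.2f} GB/s")
print(f"Outstanding I/Os needed: {outstanding_ios(iops, ceiling):.0f}")
```

The point of the exercise: sustaining ~300 concurrent sub-millisecond I/Os is a queuing problem that disk-era array software was never tuned for, which is the architectural argument being made above.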

Also, with regard to SSD capacity, WAFL's log-structured nature (especially during the consistency-point process) benefits from more SSD spindles of relatively small capacity, whereas the E-Series controller's pipelined I/O architecture is the opposite and can therefore leverage larger SSD capacities. You can expect to see that relative difference continue across both product lines.

2. EMC Symmetrix Performance vs Reliability

Brevity betrayed me in my original comment. I fully appreciate that today's EMC customers don't have to choose between these two. Despite a lack of flexibility, today's VMAX and yesterday's DMX are highly mature and reliable Tier 1 storage platforms which deliver good performance when configured for the task, also with excellent RAS.

My comments were made in a historical context. In the early 1990s, EMC's ICDA (Integrated Cache Disk Array) architecture used performance to disrupt the IBM DASD market - NOT reliability. EMC encouraged IBM customers to make the move based on PRICE/PERFORMANCE, full stop. Period :)

As customers entrusted more and more of their mission-critical data to EMC during the back half of the '90s, reliability capabilities and supportability features were gradually added to make it the platform people appreciate today.

So when it comes to FlashRay, NetApp recognizes that solid-state media offers us a once-in-a-lifetime opportunity to leverage a tectonic industry disruption in Tier 1 storage. Early adopters will move to us for the superior price/performance NetApp FlashRay will deliver, especially relative to entrenched and start-up vendors alike, who will lack our rich feature set (N+1 scale-out, QoS/multi-tenancy, variable-block inline dedupe, compression, snaps, clones and of course powerful data replication).

Late adopters will move once we have proven our reliability over time in FlashRay 1.x & 2.x releases.

I hope this helps answer most of the questions above.

Val Bercovici
Go

Why NetApp moved from Single Architecture to the Portfolio Model

Hi Simon,

Great chat today! Let me elaborate a bit on that last point :)

Data ONTAP has served NetApp customers extremely well over the years, enabling file server consolidation, Unified NAS & SAN arrays and lately the most efficient storage foundation for Server & Desktop Virtualization environments. Entry-level, Mid-Range and High-End FAS & V-Series models running Data ONTAP continue to be fully interoperable from an upgrade / downgrade and data replication perspective. New Clustered Data ONTAP even enables any combination of those to comprise a single Cluster. Data ONTAP EDGE adds yet more virtual storage configuration options to this powerful mix. FlashCache/Pools/Accel accelerate it all.

However, every once in a while tectonic shifts occur in a marketplace, opening up complementary new segments which necessitate new platforms optimized for the new requirements. NAND flash media (raw or via SSD) is a perfect catalyst for such change. Additional shifts include Big Data and extreme capacity- or performance-sensitive apps, which often drive separate infrastructure decisions, including dedicated storage silos. Satisfying this complementary new market demand is best done via complementary new products - hence the NetApp Open Storage for Hadoop, StorageGrid, EF540 and upcoming FlashRay product lines.

So the new NetApp "Portfolio" can be summarized as Clustered Data ONTAP arrays for Shared Virtual Infrastructure, EF540 (+ eventually FlashRay) for sub-millisecond I/O, then E-Series based NetApp Open Storage for Hadoop or HPC plus StorageGrid to address Big Data.

-Val.


NetApp's Cloud Czar predicts the death of VMAX

Val Bercovici
Thumb Up

Val Bercovici

Thanks for picking up my blog Chris.

NetApp's Virtual Storage Tier (VST) architecture spans a continuum from the (solid state or spinning) disk through to the array (cache) all the way up to the server hosting apps in question.

Today the Goldilocks scenario reported by Andrew Nowinski of Piper Jaffray is meant to emphasize that Data ONTAP 8.1 Cluster-Mode is the "just right" unified scale-out storage array for the sweet spot of the Shared Virtual Infrastructure market, which also happens to be at the center of our VST architecture.

We will also soon fill out the edges of the VST continuum with real-time, granular, self-managed and de-duped tiering at the disk and server/host layers. Stay tuned to my blog for further updates later this summer! :)

-Val.


NetApp dumps Filerview for new model

Val Bercovici
Happy

Survey Says ...

Our customer surveys indicated the overwhelming majority managed their storage from a Windows admin workstation. They also indicated their strong preference for a responsive UI, hence the MMC approach for NSM.

Linux / Unix customers also preferred the CLI to any GUI. Nevertheless, FilerView will not go away with the release of NSM. We will monitor customer feedback for the pace of any eventual FilerView phaseout.

Finally, NSM is supported under most virtual or remote desktop configurations, offering GUI access to NSM from almost any modern client OS platform.

Val Bercovici

NetApp Office of the CTO


What's next for NetApp hardware?

Val Bercovici
Happy

SpinNP is the answer to mainstream scale-out storage scalability

Hi Chris,

Really interesting speculation here. Without raining on VirtenSys' parade too much, our Data ONTAP 8 scalability is based on a transport-independent protocol called SpinNP. While it is capable of accommodating PCIe, I would urge you to look at Cisco's Data Center Class & Brocade's Enhanced (lossless) Ethernet technologies as our interconnects of choice when we ship the first phase of our mainstream scale-out storage family later this year.

Your conclusion is spot-on though! With DOT8, NetApp customers will be able to aggregate any number of FAS controllers (in pairs) as nodes in a single system image, with atomic management properties and linear scalability of both performance and capacity.

Val Bercovici

Office of the CTO

NetApp


Storage vendor bloggers - losing data or losing the plot?

Val Bercovici
Go

The plot thickens indeed!

Vinanti (or should I call you FemmeFatale?) - thanks for chiming in here (and on my blog) with relevant objective technical detail!

This is precisely the kind of background info that explains my position against EMC's opaque stance regarding this issue. True to form, EMC's bloggers are now busy shutting down comments on their related blogs just as EMC's PR people did years ago when this Centera silent data corruption issue was first exposed - then covered up by the IT media.

Unfortunately, it's the innocent EMC Centera customers and archive software partners (like Symantec) that now have to live with this Archive Russian Roulette scenario. They'll never know what data went missing forever until they try to retrieve it.

For all those who used the default EMC Centera configurations of collision detection OFF with SIS, I strongly recommend following the "Next Steps" listed on my blog -

http://blogs.netapp.com/exposed/2009/01/emc-centera-cus.html

Val Bercovici
Thumb Up

The Exposure Continues

Hello Coward and other commenters,

Please do keep the comments coming! My goal is to add exposure to the key topic of compliance archive data integrity, not to win tete-a-tete battles over 3rd party knowledgebase semantics.

Transparency on this topic is very important to me, and I've decided that putting up with online abuse is a small price to pay for the increased customer trust this exercise will generate once the disturbing veils of secrecy around EMC Centera data integrity are finally removed.

-Val.

http://blogs.netapp.com/exposed/2009/02/its-never-the-u.html

Val Bercovici
Pirate

Straight from the EMC Playbook

Hi marc / Barry,

Classic maneuver, right from the EMC playbook. Personalize the discussion and attack the whistleblower to distract from the facts.

Thanks for playing along:

http://blogs.netapp.com/exposed/2009/02/will-the-real-s.html

-Val.

Val Bercovici
Alien

Does the Reg use Centera? 2nd attempt at comment :)

Hi Chris,

First of all - thanks for bringing MUCH needed attention to this whole issue. I also commend your attempts at objectivity.

However, as they say, the plot THICKENS! :-)

As you may be aware, the original title of the Symantec KB Article is:

"Archiving items in Enterprise Vault to an EMC Centera may result in data loss."

I'll leave it as an exercise to the reader regarding what "behind the scenes" activity inspired the change once I highlighted it on my blog :)

Regardless, the simple fact remains that this newly revised KB article still shows one, and only one, archiving platform vulnerable to data loss - EMC Centera.

There is no Symantec (or any other popular archiving ISV, for that matter) KB article warning of potential risks with archiving to NetApp SnapLock. Will there ever be? Who knows, but I like NetApp's odds due to one key difference - SIMPLICITY.

Call it what you will, but the EMC Centera API is a huge and complex beast to work with. NetApp's (optional) SnapLock API is a model of simplicity by comparison, and is often unnecessary since nearly all archiving vendors support direct, standards-based filesystem access anyway.

One can correctly label both Data ONTAP and EMC CentraStar as complex pieces of software - yet it is a gross oversimplification to conclude that their resulting levels of data integrity are therefore similar, especially when an empirical Google search yields many examples of data loss with one and none with the other.

I personally find it fascinating that some companies (such as Procedo) have built entire practices (i.e. PAMM) around migrating data away from EMC Centera onto safer platforms.

I guess where there's smoke...

-Val Bercovici

Office of the CTO, NetApp


Pillar towers over rivals in best value storage

Val Bercovici
Go

Welcome to the club

I congratulate Pillar on publishing their first SPC-1 result.

I'd like to encourage them (and others) to add even more value to their results by publishing next time with rich functionality - thin provisioning, snapshots, clones, etc. - enabled. That will help customers get an even better approximation of the scalability of their desired solution.

I'd also encourage all the SPC-1 skeptics to do some research and review the elaborate, transparent policies and procedures SPC members use to publish. Of note are the independent audit of every report and the right of any SPC member to force the publisher (usually a competitor) of a technically flawed or invalid report to revoke it.

In light of all that, has anyone ever wondered why SPC member Dell never requested the CLARiiON report published by NetApp be revoked?

-Val.

Office of the CTO, NetApp

http://blogs.netapp.com/exposed

