Posts by dikrek

87 posts • joined 16 Nov 2010

The kid is not VSAN: EMC buffs up ScaleIO for high-end types

dikrek

Re: Ah the good 'ol quick jab

Actually my point is checksums are kinda assumed in storage, taken for granted.

Most people won't even think to ask since it's unthinkable to not do it.

Yet EMC wasn't doing this until recently (for those two products only; on their other storage platforms they do plenty of checksumming).

It's not about trashing them; it's more about reminding people not to take this stuff for granted.

Ask!

0
1
dikrek
Boffin

Only now getting checksums? After all this time?

Hi all, Dimitris from Nimble here.

Is it just me or is the addition of checksums in both VSAN 6.2 and the latest ScaleIO a bit glossed over?

Checksums are immensely important to storage - so important that nobody in their right mind should buy storage that doesn't do comprehensive detection and correction of insidious errors.

My sense of wonder stems from the fact that EMC and VMware have been happily selling ScaleIO and VSAN to customers without mentioning this gigantic omission up until now. And now it's quietly mentioned among many other things.
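
For anyone wondering what these checksums actually buy you, here's a minimal, generic sketch in Python (purely illustrative - not how VSAN, ScaleIO or any particular array implements it): a checksum stored alongside each block at write time gets re-verified on read, so silent corruption is detected instead of quietly returned to the application.

    import zlib

    store = {}

    def write_block(lba, data):
        # Store a CRC alongside the data at write time.
        store[lba] = (data, zlib.crc32(data))

    def read_block(lba):
        data, stored_crc = store[lba]
        # Re-verify on every read; a mismatch means silent corruption
        # (bit rot, misdirected or torn write) and should trigger a
        # repair from parity/mirror instead of returning bad data.
        if zlib.crc32(data) != stored_crc:
            raise IOError("checksum mismatch on block %d" % lba)
        return data

    write_block(0, b"hello world")
    assert read_block(0) == b"hello world"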

Interesting...

Thx

D

0
0

Back-to-the-future Nexsan resurrects its SATABeast

dikrek
Boffin

Nobody thinks of torque and vibration?

Hi All, Dimitris from NetApp here.

I'm shocked anyone thinks a top-loading shelf full of heavy SATA drives is a good idea. You pull the ENTIRE THING out in order to replace a SINGLE drive??

How safe is that?

Both for drive vibration (you're shaking 60 rotating drives at once) and torque (such a system is HEAVY!)

There is a better way. Front-loading trays (imagine that). On our E-Series platform, the 60-drive shelf is divided into 5 slices, each with 12 drives.

Each slice is shock mounted, much lighter than all 60 drives, and slides out butter-smooth in order to replace a drive.

Thx

D

0
3

After all the sound and fury, when will VVOL start to rock?

dikrek
Boffin

Very few arrays can do VVOL at scale

Hi all, Dimitris from NetApp here (recoverymonkey.org).

This is why we allow 96000 LUNs in an 8-node ONTAP cluster, as of ONTAP 8.3.0 and up.

Thx

D

1
0

HPE beefs up entry MSA with a bit of flash

dikrek

Hi all, Dimitris from NetApp here.

Not understanding the comparison to the FAS2500.

The FAS runs ONTAP and has an utterly ridiculous amount of features the MSA lacks.

A better comparison would be to the NetApp E2700, a device much more similar in scope to the MSA.

Of course, whoever provided the math (I doubt it was Chris that did it) might not have a very good case then ;)

Thx

D

1
1

NetApp hits back at Wikibon in cluster fluster bunfight

dikrek
Boffin

Re: Some extra detail

Hi Trevor, my response was too long to just paste, it merited a whole new blog entry:

http://recoverymonkey.org/2016/02/05/7-mode-to-clustered-ontap-transition/

Regarding your performance questions: Not sure if you are aware but posting bakeoff numbers between competitors is actually illegal and in violation of EULAs.

We have to rely on audited benchmarks like SPC-1, hopefully more of the startups will participate in the future.

I suggest you read up on that benchmark, it's extremely intensive, and we show really good latency stability.

Though gaming SPC-1 is harder than with other benchmarks, it too can be gamed. In my blog I explain how to interpret the results. Typically, if the used data:RAM ratio is too low, something is up.

Which explains a certain insane number from a vendor running the benchmark on a single Windows server... :)

Take care folks

D

0
0
dikrek
Boffin

Re: queue the Ontap Fan boys

<sigh>

Flash makes everything CPU and pipe bound. If anything, CPU speed is becoming commoditized very rapidly... :)

And yes, cDOT 8.3.0 and up IS seriously fast. We have audited benchmarks on this, not just anonymous opinion. I understand that competitors don't like/can't come to terms with this. Such is the way the cookie crumbles. Look up the term "confirmation bias".

Regarding SolidFire: It's not about speed vs cDOT. As a platform, SolidFire is very differentiated not just vs cDOT but also vs the rest of the AFA competition.

The SolidFire value prop is very different from ONTAP's, and performance isn't one of the differentiators.

Some of the SolidFire differentiators:

- very nicely implemented QoS system

- very easy to scale granularly

- each node can be a different speed/size, no need to keep the system homogeneous

- ridiculously easy to use at scale

- little performance impact regardless of what fails (even an entire node with 10 SSDs failing at once)

- the ability to run the SolidFire code on certified customer-provided servers

- great OpenStack integration

All in all, a very nice addition to the portfolio. For most customers it will be very clear which of the three NetApp AFA platforms they want to settle on.

Thx

D

8
4
dikrek
Boffin

Some extra detail

Hi All, Dimitris from NetApp here.

It is important to note that Mr. Floyer’s entire analysis is based on certain very flawed assumptions. Here is some more detail on just a couple of the assumptions:

1. Time/cost of migrations:

a. The migration effort is far smaller than is stated in the article. NetApp has a tool (7MTT) that dramatically helps with automation, migration speed and complexity.

b. It is important to note that moving from 7-mode to a competitor would not have the luxury of using the 7MTT tool and would, indeed, result in an expensive, laborious move (to a less functional and/or stable product).

c. With ONTAP 8.3.2, we are bringing to market Copy-Free Transition (CFT), which does what the name suggests: it converts disk pools from 7-mode to cDOT without any data movement. This dramatically cuts the cost and time of conversions even further (we are talking about only a few hours to convert a massively large system).

d. NetApp competitors typically expect a complete forklift migration every 3-4 years, which would increase the TCO! Mr. Floyer should factor an extra refresh cycle in his calculations…

2. Low Latency Performance:

a. AFF (All-Flash FAS) with ONTAP 8.3.0 and up is massively faster than 7-mode or even cDOT with older ONTAP releases, on the order of up to 3-4x lower latency. cDOT running 8.3.0+ has been extensively optimized for flash.

b. As a result, sub-ms response times can be achieved with AFF. Yet Mr. Floyer’s article states ONTAP is not a proper vehicle for low latency applications and instead recommends competing platforms that in real life don’t perform consistently at sub-ms response times (in fact we beat those competitors in bakeoffs regularly).

c. AFF has an audited, published SPC-1 result using 8.3.0 code, showing extremely impressive, consistent, low latency performance for a tough workload that’s over 60% writes! See here for a comparative analysis: http://bit.ly/1EhAivY (and with 8.3.2, performance is significantly better than 8.3.0).

So what happens to Mr. Floyer's analysis once the cost and performance arguments are defeated?

Thx

D

9
4

Don’t get in a cluster fluster, Wikibon tells NetApp users

dikrek
Boffin

Some extra detail

Hi All, Dimitris from NetApp here.

It is important to note that Mr. Floyer’s entire analysis is based on certain very flawed assumptions. Here is some more detail on just a couple of the assumptions:

1. Time/cost of migrations:

a. The migration effort is far smaller than is stated in the article. NetApp has a tool (7MTT) that dramatically helps with automation, migration speed and complexity.

b. It is important to note that moving from 7-mode to a competitor would not have the luxury of using the 7MTT tool and would, indeed, result in an expensive, laborious move (to a less functional and/or stable product).

c. With ONTAP 8.3.2, we are bringing to market Copy-Free Transition (CFT), which does what the name suggests: it converts disk pools from 7-mode to cDOT without any data movement. This dramatically cuts the cost and time of conversions even further (we are talking about only a few hours to convert a massively large system).

d. NetApp competitors typically expect a complete forklift migration every 3-4 years, which would increase the TCO! Mr. Floyer should factor an extra refresh cycle in his calculations…

2. Low Latency Performance:

a. AFF (All-Flash FAS) with ONTAP 8.3.0 and up is massively faster than 7-mode or even cDOT with older ONTAP releases, on the order of up to 3-4x lower latency. cDOT running 8.3.0+ has been extensively optimized for flash.

b. As a result, sub-ms response times can be achieved with AFF. Yet Mr. Floyer’s article states ONTAP is not a proper vehicle for low latency applications and instead recommends competing platforms that in real life don’t perform consistently at sub-ms response times (in fact we beat those competitors in bakeoffs regularly).

c. AFF has an audited, published SPC-1 result using 8.3.0 code, showing extremely impressive, consistent, low latency performance for a tough workload that’s over 60% writes! See here for a comparative analysis: http://bit.ly/1EhAivY (and with 8.3.2, performance is significantly better than 8.3.0).

So what happens to Mr. Floyer's analysis once the cost and performance arguments are defeated?

Thx

D

2
0

HDS brings out all-flash A series array

dikrek
Boffin

Re: Thin provisioning is not a "saving"

Hi all, Dimitris from NetApp here.

Indeed, thin provisioning is not "true" savings. Many vendors claim 2:1 savings from thin provisioning alone.

Rob, agreed: It's easy to demonstrate 100:1 savings from that feature by massively overprovisioning.

It is what it is; that's the state of capacity reporting and marketing claims these days. Most arrays are showing savings including thin provisioning, PLUS compression and dedupe where available.

Check this out for some pointers on how to calculate savings and ignore what the GUI shows:

http://recoverymonkey.org/2015/06/15/calculating-the-true-cost-of-space-efficient-flash-solutions/

A lot depends on your perspective and what you're comparing the system to.

I've seen a NetApp system for Oracle run at 10000% (yes ten thousand percent) efficiency since that customer was using an insane number of DB clones.

If your existing system can't do fast, non-performance-impacting clones, then comparing it to the NetApp system would naturally make NetApp show as hugely efficient.

If, on the other hand, your system can also do the fancy clones, AND thin provisioning, then in order to compare efficiencies you need to compare other things...
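
To make the arithmetic behind those percentages concrete, here's a tiny illustration (all numbers made up, not from any specific array): efficiency is typically reported as logical data presented divided by physical space consumed, which is why thin provisioning and clones can inflate it so dramatically.

    def efficiency(logical_tb, physical_tb):
        # "Savings" ratio as most GUIs report it: logical vs physical.
        return logical_tb / physical_tb

    # Thin provisioning alone: 100 TB provisioned, only 1 TB ever written.
    print(efficiency(100.0, 1.0))           # 100:1 without saving anything real

    # 100 clones of a 1 TB database, each with ~1% unique blocks:
    logical = 100 * 1.0                     # what the hosts see
    physical = 1.0 + 100 * 0.01             # base copy + unique blocks
    print(efficiency(logical, physical))    # 50:1, i.e. 5,000%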

Thx

D

1
0

DataCore scores fastest ever SPC-1 response times. Yep, a benchmark

dikrek
Boffin

Second article about this so soon?

Hi all, Dimitris from NetApp here.

Interesting that there's another article about this.

Plenty of comments in the other reg article:

http://forums.theregister.co.uk/forum/1/2016/01/07/datacores_benchmark_price_performance/

Indeed, latency is crucial, but Datacore's benchmark isn't nicely comparable with the rest of the results for 2 big reasons:

1. There's no controller HA, just drive mirroring. Controller HA is where a LOT of the performance is lost in normal arrays.

2. The amount of RAM is huge vs the "hot" data in the benchmark. SPC-1 has about 7% "hot" data. If the RAM is large enough to comfortably encompass a lot of the benchmark hot data, then latencies can indeed look stellar, but a test that actually hits the media more would be more realistic.
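
As a rough sanity check (assumed numbers below, not DataCore's actual tested configuration), compare the benchmark's hot set to the amount of controller RAM and you can see how much of SPC-1 is being served from memory rather than from the media.

    # Back-of-the-envelope: how much of the SPC-1 hot set fits in RAM?
    # Illustrative numbers only, not from any specific SPC-1 submission.
    used_capacity_tb = 10.0
    hot_fraction = 0.07                 # roughly 7% of used capacity is "hot"
    ram_tb = 1.5

    hot_data_tb = used_capacity_tb * hot_fraction       # 0.7 TB
    coverage = min(1.0, ram_tb / hot_data_tb)
    print("hot data: %.2f TB, RAM covers %.0f%% of it" % (hot_data_tb, 100 * coverage))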

Thx

D

6
1

DataCore’s benchmarks for SANsymphony-V hit a record high note

dikrek
Boffin

Re: Too little too late

@Crusty - your post was hilarious. Especially this nugget:

" the NetApp has no idea what it's filing and dies a slow painful death in hash calculation hell. Heaven forbid you have two blocks which are accessed often which have a hash collision"

That's right, that's _exactly_ the reason NetApp gear is chosen for Exabyte-class deployments. All the hash collisions help tremendously with large scale installations... :)

http://recoverymonkey.org/2015/10/01/proper-testing-vs-real-world-testing/

Thx

D

0
0
dikrek
Boffin

It helps if one understands how SPC-1 works

Hi all, Dimitris from NetApp here.

The "hot" data in SPC-1 is about 7% of the total capacity used. Having a lot of cache really helps if a tiny amount of capacity is used.

Ideally, a large enough data set needs to be used to make this realistic. Problem is, this isn't really enforced...

In addition, this was a SINGLE server. There's no true controller failover. Making something able to fail over under high load is one of the hardest things in storage land. A single server with a ton of RAM performing fast is not really hard to do. No special software needed. A vanilla OS plus really fast SSD and lots of RAM is all you need.

Failover implies some sort of nonvolatile write cache mirrored to other nodes, and THAT is what takes a lot of the potential performance away from enterprise arrays. The tradeoff is reliability.
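
A toy latency model makes that tradeoff concrete (numbers purely illustrative, not any vendor's figures): an HA array cannot acknowledge a write until it has also landed in the partner's non-volatile memory, so every write pays for the mirror hop.

    # Toy write-acknowledgement model; illustrative microsecond figures only.
    local_nvram_write_us = 20      # land the write in local non-volatile memory
    mirror_hop_us = 80             # copy it to the partner over the HA interconnect

    single_node_ack_us = local_nvram_write_us
    ha_mirrored_ack_us = local_nvram_write_us + mirror_hop_us

    print("single node ack: %d us" % single_node_ack_us)   # fast, lost if the node dies
    print("HA mirrored ack: %d us" % ha_mirrored_ack_us)   # slower, survives a controller failure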

For some instruction on how to interpret SPC-1 numbers, check here:

http://recoverymonkey.org/2015/04/22/netapp-posts-spc-1-top-ten-performance-results-for-its-high-end-systems-tier-1-meets-high-functionality-and-high-performance/

http://recoverymonkey.org/2015/01/27/netapp-posts-top-ten-spc-1-price-performance-results-for-the-new-ef560-all-flash-array/

Ignore the pro-NetApp message if you like, but there's actual math behind all this. You can use the math to do your own comparisons.

But for me, the fact there's no controller failover makes this not really comparable to any other result in the list.

Thx

D

3
0

Our storage reporter has breaking news about Data Fabrics. Chris?

dikrek

You don't NEED to move your data

The beauty of the NetApp solution is that it doesn't force you to move your data.

http://www.netapp.com/us/solutions/cloud/private-storage-cloud/

Using this scheme, you could burst into multiple cloud providers for compute, yet your data resides in a colo facility that has fast links to the various cloud providers. No need to move around vast amounts of data. Many people like this approach, and use cloud for what it's really good at - rapidly spinning up VMs.

Conversely, if you DO want to move some data into the hyperscale cloud providers, NetApp lets you keep it in the native usable format without needing to do a recovery first. You could then do things like SnapMirror between ONTAP VMs in, say, Azure and AWS, and keep data in its native format WITHOUT needing backup software and WITHOUT needing to do a whole restore...

It's all about providing choices. NetApp currently provides far more choices when it comes to cloud deployments than any other storage vendor. You can go in as deep or as shallow as you like, and if you decide you don't feel comfortable in the end, repatriating the data is an easy process.

In addition, there is ALWAYS lock-in no matter how you engineer something. It's either the storage vendor, or the cloud vendor, or the backup tool vendor, or the application vendor, or the operating system...

Even with totally free tools, the lock-in becomes the free tools themselves. It's not just a cost challenge.

The trick is in figuring out what level of lock-in is acceptable for your business and whether said lock-in actually helps your business long-term more than it creates challenges.

Thx

D

0
0
dikrek

Re: Not repackaging

We're adding the SnapMirror engine to AltaVault (and more) as well, plus the VMs won't be unclustered any more.

And there is ALWAYS lock-in. You just have to choose what you want to be locked into, and whether that serves your business needs best.

Seems to me you missed some of the tidbits in the videos.

For instance, being able to drag an AltaVault AWS instance (so, a backup and recovery appliance) onto an Azure ONTAP instance (so, a storage OS appliance) in a GUI and do a seamless recovery - the amount of automation is staggering.

No other vendor offers that complete flash-to-disk-to-multi-cloud-and-back + automation + backup story.

Thx

D

1
0
dikrek
Megaphone

Not repackaging

This is not repackaging. There's all kinds of new software doing all this behind the scenes; we are implementing the same replication protocol across the entire product line...

See this

https://youtu.be/HgArpF3W73Y?t=3038

and this

https://youtu.be/UluLv_YXx-o

Thx

D

0
0
dikrek

Document link

Folks, Dimitris from NetApp here (recoverymonkey.org).

Here's the link to the paper:

http://www.netapponcloud.com/hubfs/Data-Fabric/datafabric-wp.pdf

As you can see we are thinking big.

Data management and solving business problems is where the action is.

Doesn't hurt that the widgets themselves are awesome, either :)

Thx

D

3
5

NetApp slims down latest controller, beefs up channel efforts

dikrek
Trollface

Back your arguments with facts, otherwise they're pointless

Mr. MityDK,

Dimitris from NetApp here. Sucks for all flash? We win performance PoCs against competitors all the time. Imagine if we didn't suck! :)

We have also had zero (yes, zero) SSDs wear out since we started shipping flash in arrays many years ago. Imagine how much more reliable we could make them if we didn't suck at flash ;)

Data management is also far beyond anything from any vendor (traditional ONTAP strength).

Flexibility - also beyond anybody else's gear by a long shot.

BTW: If you're working for a competitor, it's gentlemanly etiquette to disclose affiliation.

If you're on the customer side - then ask for a demo and see for yourself just how much it "sucks". You might be surprised to learn that not everything you see on the Internet is true, especially if coming from competitors (ignoring for a moment the irony).

Pricing per raw TB means nothing anyway, since that number is without efficiencies factored in. It includes the fastest controller, software and support costs, too. Ask for a quote and see just how "expensive" it really is :)

Thx

D

6
4

Why the USS NetApp is a doomed ship

dikrek

Nothing special about it??

Seriously, nothing special about it?

http://recoverymonkey.org/2015/06/24/netapp-enterprise-grade-flash/

An enterprise storage product that is mature in serving all major protocols (and FYI, retrofitting enterprise NAS on other systems is insanely hard, which is why nobody's done it).

With no downtime for pretty much any operation, including things that would mean major downtime for other systems.

With the best application integration tools on the planet.

The best heterogeneous fabric management system in the world (OCI).

Amazing automation (WFA).

Great performance.

Insane scalability.

Technology that literally keeps the lights on (part of the control chain of many power distribution systems).

Or deployed in life or death situations. By the most paranoid organizations in the world.

That's the storage foundation behind the largest companies in the world.

That's nothing special?

I'd love to see what you consider special. Must really be something.

Thx

D

3
0
dikrek

Re: Beware of Confirmation Bias

Hi Trevor - what I'm trying to understand is why did you even write the article?

What purpose does it serve?

Maybe I don't understand what you do for a living. Perchance you could explain it.

You mention all this research you do - are you an analyst? Have you attended analyst briefings at NetApp? We do them all the time.

Thx

D

1
0
dikrek

Beware of Confirmation Bias

Hi Trevor, Dimitris from NetApp here.

Some friendly advice: look up the term "Confirmation Bias".

It can affect us all - the trick is sensing when it happens.

It's just storage, not religion.

Can it solve most business problems for a variety of enterprise sizes more successfully or not? That's the big question.

Learning about the whole portfolio and how it interoperates might prove especially illuminating.

Thx

D

6
0

SolidFire pulls off gloves for unholy storage ding-dong. Ding-ding!

dikrek

Re: Inline Compression

ONTAP 8.3.1 compression is totally different (very high performance - check http://recoverymonkey.org/2015/06/24/netapp-enterprise-grade-flash/), plus AFF has a ton of extra optimizations (including about 70% usable-to-raw with ADP, to address another comment).

Tech evolves. Most people leaving a storage company have maybe 6 months before their knowledge becomes obsolete.

Maybe focus on the pertinent question:

Which current technology solves more customer problems reliably?

Thx

D

0
0
dikrek
Happy

The big picture is usually more important

Hi all, Dimitris from NetApp here.

It's really easy to point to features the competition doesn't have. For instance, SolidFire has weak FC and zero NAS capability.

That's stuff that's pretty hard to implement.

In the grand scheme of things, if the inline data efficiencies in ONTAP plus frequent dedupe (say every 5 minutes) are enough, the comparison becomes a mostly religious one.

Ultimately data reduction is just one way to save costs, and there are many places where costs can be saved.

This might help:

http://recoverymonkey.org/2015/06/24/netapp-enterprise-grade-flash/

Thx

D

3
2

NetApp cackles as cheaper FlashRay lurches out of the door

dikrek

Which is why this isn't just about price!

This isn't just pricing.

We have addressed the objective points you're making:

1. With 8.3.1, the architecture the customer sees is only as complex as they need it to be. You should ask to see a demo of the GUI and offer constructive criticism after that happens. It is of similar simplicity to much less featured systems.

2. See #1 - managing this is way easier than before.

3. OPM v2 is the only monitoring tool most customers will need. And it's free to use. The admin GUI now also has plenty of performance info so most customers won't even need OPM (which does much more than competitor monitoring tools).

4. I beg to differ

5. Sales team arrogance? I've heard horror stories about competitor sales team arrogance. I wonder who all those people are.

6. Many customers moved to purpose-built systems because the enterprise systems were not as easy to use, as inexpensive or as fast as some startups. Now that we've changed all this... :)

Thx

D

0
1
dikrek

Re: Plenty of new stuff was announced!

Hyperbole much? :)

A slight downturn in NetApp sales can equal the sales of all startup flash vendors combined.

Think about that for a second. That's how big we are.

Those same companies don't have a share price to watch fall, don't disclose financials, and are burning through VC funds with alacrity. I have news for you: VC funding isn't unlimited.

We have great products and the best flash solution in the marketplace.

#1 Storage Operating System

#1 Storage for Service Providers

#1 Converged Infrastructure

#1 Storage Provider to the Federal Government

#2 In the storage market

You're obviously right, these are the clear hallmarks of a sinking ship ;)

Thx

D

6
1
dikrek
Happy

Plenty of new stuff was announced!

Hi all, Dimitris from NetApp here.

Lots of stuff is new as of June 23rd, check here: http://recoverymonkey.org/2015/06/24/netapp-enterprise-grade-flash/

We understand this upsets competitors and people stuck in old thinking modes, but such is the way the cookie crumbles.

No, we will not apologize for causing angst to competitors ;)

Thx

D

6
3

NetApp's customers resisting Clustered ONTAP transition

dikrek

Re: Here's the bit I don't quite follow ..

You honestly think that's how people buy stuff?

0
0
dikrek

Re: The limitations you mention are either not there or rapidly going away

With 8.3, 7MTT is free for customers to use. You don't NEED migration services, but maybe your account team felt it would help. Point them my way if you want - actually, feel free to contact me. Plus I can't discuss any roadmap items in this type of forum.

EMC had to migrate XtremIO customers for free because they had TWO destructive upgrades in 1 year for a new product. Plus they can do it since there's a tiny number of XtremIO deployed vs ONTAP, VMAX or VNX. Not a lot of resources needed to do it.

They won't migrate the huge numbers of VMAX or VNX customers for free...

But there are some new developments and we will be making migrations a lot less expensive going forward.

8.3 has more performance stats in the GUI than previous releases and future releases will bring even more. In the meantime you can use OPM (free).

I double-checked and the Exchange integration requiring RDMs or physical LUNs has to do with how Microsoft does Exchange VSS integration vs how SQL does it (VDS). It does require Microsoft to make some changes.

At the moment you can't flip a switch and convert a FAS to a MetroCluster - MetroCluster is far more than just sync replication and needs some extra hardware in order to be set up.

About perfstats - that tool goes far deeper than any normal performance monitoring program. For general performance stuff it's not really needed, but I have to admit our support seems to want one for every case since it captures so much stuff. I can talk to you about that once you reach out.

Thx

D

0
0
dikrek
Stop

The limitations you mention are either not there or rapidly going away

Hi all, Dimitris from NetApp here (recoverymonkey.org).

Technology evolves at a rapid clip - and ONTAP more rapidly than most think.

Think about it - we have needed a disruptive upgrade only TWICE since 1992 (TradVols->FlexVols, and 7mode->cDOT). Yet we allow full hardware re-use and don't force customers to buy all new stuff. It's not like a 7-mode system can't be re-initialized and join a cDOT cluster... :)

Other vendors force major migrations and hardware swaps upon customers every 3-5 years, and XtremIO needed TWO destructive upgrades in the past year alone. And most startups are too new - will their architecture stand the test of time well enough to need 2 disruptive upgrades or fewer in over 20 years? Really?

Puts things in perspective a bit.

To address your points:

1. 7MTT is absolutely available for use by customers now, no PS or transition team needed.

2. E-Series is indeed not designed to make heavy use of the fancy capabilities - it's more an easy, fast, reliable I/O engine.

3. Look at the built-in GUI in ONTAP 8.3. Stats are there. We also DO provide performance stats via AutoSupport.

4. Exchange SnapManager needing RDMs: This is a Microsoft-imposed limitation. For SQL for instance we can use SMSQL even if the SQL VM is running on an NFS data store... :)

For the rest of the "limitations" - there are best practices and then there are true limitations. Don't confuse the two please.

ONTAP is still the most flexible storage OS by far. Does it do EVERYTHING? Nothing does. But it deals with more data center problems than any other storage OS extant today. Simple fact.

Thx

D

0
0

Gartner: Dell nowhere to be seen as storage SSD sales go flat

dikrek
Stop

Re: Figures Don't Add up

Hi all, Dimitris from NetApp (http://recoverymonkey.org).

This is an accounting issue. How does one track AFA sales? For Pure it's easy, everything is AFA.

From EMC, XtremIO is always an AFA. Easy to count.

For NetApp, when this report was run only the EF was counted as an AFA; AFF (All-Flash FAS) wasn't counted since, until now, there wasn't a strict all-flash FAS model that doesn't work with HDDs at all. The reality is NetApp sells huge amounts of SSDs... as do many other vendors.

For example, for HDS the story is similar, I bet they sell a shedload of flash yet aren't high on the chart.

Gotta love it.

Thx

D

1
0

NetApp CTO Jay Kidd resigns and retires from the industry

dikrek
Stop

Dream on

Hi all, Dimitris from NetApp.

You can of course choose to believe whatever you see anonymously posted on the Internet.

Making claims is easy. "XtremIO is shelved!" "Pure is going out of business!"

Substantiating the claims is far harder.

Thx

D

0
0

NetApp's all-flash FAS array is easily in the top, er, six SPC-1 machines

dikrek
Happy

Things at NetApp are looking just fine

It's funny how some people make these predictions. NetApp is a huge player. It reminds me of pundits claiming Apple will go out of business any day now based on nonsense. Dream on.

The simple reality is that NetApp now not only has the most functional enterprise arrays out there (that's been the case for a while now) but can also offer ultra high performance with a very difficult benchmark on a system sized realistically, with RAID-6 equivalent protection instead of RAID10, and 2x the space efficiency of the other players in the Top Ten list.

Look at these results in perspective: Sure, there are small players that are also fast. Speed is relatively easy. But nobody offers the combination of the sheer functionality ONTAP does, the richness of data management options, the insane flexibility, the maturity.

I really don't understand all the hate. We have an awesome, enterprise, multi-function, mature system that can offer very high performance. Get over it.

Thx

D

1
0
dikrek
Stop

Re: Really??

No, it's not like you say at all. Flash isn't automatically fast, no matter what. It matters what array you use and what flash media you use.

Look at some of the submissions - their SSD performance is terrible compared to others.

You honestly believe that all platforms are created equal the moment you slap some SSDs inside?

SPC-1 isn't a marketing benchmark. Sure you can game the config to get better speed (something I try to point out in my articles) but the SPC-1 IOPS for a platform are massively lower than the marketing numbers some "leaders in Flash Storage" use... it's a far harder test than marketing benchmarks showing 100% 4KB reads, for instance.

Thx

D

1
1
dikrek

Re: pretty nice results

Hi Nate,

No, we had other systems filled nicely before, and the key one to look at is Application Utilization, which shows the capacity used for the benchmark vs total capacity.

Thx

D

0
0

This post has been deleted by a moderator

Chief architect Beepy ready to take Pure’s flash somewhere new

dikrek

What do you mean "demoted"?

Hi all, Dimitris from NetApp here (recoverymonkey.org)

I'm still stuck on the "demoted" word in the article.

The project is alive and well.

Troglodyte comments starting in 3, 2, 1...

0
0

NetApp veep: 'We've shifted 750,000 all-flash arrays'. Er, really?

dikrek

Re: Allow me to clarify...

Hi Dave,

What tests did you run? We typically crush arrays that do inline dedupe and compression, as long as the test uses realistic data.

There are some important considerations if one uses certain tests with certain arrays.

http://bit.ly/1bKLZjT
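
One of those considerations is the data pattern the load generator writes: all-zero or repeating patterns compress and dedupe almost completely, while random data barely reduces at all, so the same array can look wildly different depending on the tool. A quick, generic illustration (plain zlib, nothing array-specific):

    import os, zlib

    def reduction_ratio(block):
        return len(block) / len(zlib.compress(block))

    zero_blocks = b"\x00" * 65536       # what naive benchmark tools often write
    rand_blocks = os.urandom(65536)     # closer to pre-compressed real-world data

    print("all-zero data: %8.1f : 1" % reduction_ratio(zero_blocks))
    print("random data  : %8.1f : 1" % reduction_ratio(rand_blocks))   # about 1:1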

For what it's worth, the current EF560 is more than 2x the speed of the 540 in many aspects.

Thx

D

0
0
dikrek
Stop

Really?

Nimble, really? You don't even have an all-flash product. And, if anything, one can claim your product is optimized to work with SATA HDDs, not even all types of HDDs, let alone SSDs. See where claims lead to?

You're focusing on ONTAP, as is Nimble's habit, but we have both EF and ONTAP. Both are mature platforms. Neither was originally designed for flash media, since it didn't exist back then. Both are crazy fast with flash today.

Guess what... It's all code. Code can be changed. And it has. There is this type of creature, you see, called a developer... we have many, many of them :)

Witness the super high performance numbers produced by products that all were "originally" designed for spinning disk. NetApp, HDS, HP - all have great performance with SSDs.

And, if anything, one can argue that ONTAP is more friendly to SSDs than most architectures out there. By intention? Not really - let's call it a happy accident. But if you understand ONTAP internals, it is clear that the core of it has nothing to do with spinning rust at all; it's all about data management.

Last I checked, we are in this business to solve customer problems.

And please spare me the "will never happen" arguments. I can make similar ones about your company but I will take the high road. Plus it smells a bit like roadmap-revealing bait, which I won't take.

Thx

D

4
1
dikrek
Happy

Allow me to clarify...

Hi all, Dimitris from NetApp here (http://recoverymonkey.org). So much hate and misinformation. And so many people hiding behind the anonymity the Internet provides... man up and disclose your name and affiliation, otherwise your comments are like tears in the rain.

So...

Mr. "klaxhu": The 750,000 number is actually close to 1 million systems now. I need to have words with the people that provided an old number, but that's not all flash units (the article actually clarifies that at the end). But, as someone rightly pointed, it's a highly mature platform with extra tweaks for flash.

Mr. "Anonymous Coward": Entering the AFA market: we've been in it firmly... and winning, especially when it comes to performance. Maybe even against the company you represent/like.

Mr. "M.B": The EF always was available as a dual controller system.

EF has utterly crushed the competition in multiple bakeoffs in both performance and reliability, and especially consistently low latency. Excellent for a tactical deployment, especially in an environment where low latency is key - where it destroys the competitors that have to waste cycles doing garbage cleanup, compression and dedupe.

All-Flash FAS has all the features, tighter application integration than any competitor, and plugs right into the extremely rich ONTAP ecosystem, with live data migration between all-flash cluster nodes and ones with other media types. A great choice for a more strategic deployment.

NetApp is extremely well positioned for Flash today and in the future.

Thx

D

5
1

Hitachi smashes SPC-1 benchmark, boasts: We HAF ways of crushing 2 million IOPs

dikrek
Stop

Re: Nice and slow!

Hi all, Dimitris from NetApp here.

7 million IOPS means nothing. What kind of IOPS? What latency?

Grid architectures always were able to get big numbers but latency typically takes a hit.
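
The relationship between the two is just Little's Law - outstanding I/Os = IOPS x latency - so a huge IOPS headline can simply mean a huge amount of parallelism at mediocre per-I/O latency. A quick illustration with assumed numbers:

    # Little's Law: outstanding I/Os = IOPS * latency.
    # Two hypothetical systems with the same 7M IOPS headline:
    iops = 7_000_000

    for latency_ms in (0.5, 5.0):
        outstanding = iops * (latency_ms / 1000.0)
        print("%.1f ms latency -> %d I/Os in flight" % (latency_ms, outstanding))
    # Same IOPS number, but 3,500 vs 35,000 I/Os in flight -
    # a very different experience for the application issuing them.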

Perchance read a primer on storage performance:

http://recoverymonkey.org/2012/07/26/an-explanation-of-iops-and-latency/

Storage-agnostic.

Thx

D

6
0

NetApp embiggens E-Series flashbox: Gee, a benchmark... thanks

dikrek
FAIL

Re: The low latency is the star here

Sorry Archaon - the E-Series is as pure a block array as they come. You may want to familiarize yourself with our offerings.

Do visit my article - at 500 microseconds, the EF560 does 196K SPC-1 IOPS at $0.68/op and the 3Par does 129K IOPS at $1.15/op. http://bit.ly/18oWI1R - look at the table towards the end.
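
For anyone wanting to sanity-check those $/op figures: assuming they are simply the tested configuration's price divided by the IOPS delivered at that latency point (see the linked table for the exact numbers), the implied totals work out roughly like this:

    # Implied total price = IOPS at the latency point * $/op (approximate).
    ef560_total = 196_000 * 0.68       # ~ $133k
    threepar_total = 129_000 * 1.15    # ~ $148k
    print("EF560 ~ $%.0fk, 3Par ~ $%.0fk" % (ef560_total / 1000, threepar_total / 1000))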

My point is - it depends what you want. Functionality? Then All-Flash FAS offers more than anyone else at great speeds. Crazy low latency? Then EF has everyone beat for the money.

Thx

D

1
0
dikrek

Re: The low latency is the star here

Look at the latency. The key is: what are the SPC-1 IOPS at each latency point?

1
0
dikrek
Stop

The low latency is the star here

Hi everyone, Dimitris from NetApp here.

The EF560 result was meant to be impressive in 2 ways:

1. Low latency for the money

2. High IOPS at a low latency

Maybe this article will clear it up:

http://recoverymonkey.org/2015/01/27/netapp-posts-top-ten-spc-1-price-performance-results-for-the-new-ef560-all-flash-array/

Thx

D

0
0

NetApp's running with the big dogs: All-flash FlashRay hits the street

dikrek

Re: @still Anonymous Coward

Read my original post on this thread - FlashRay today is not meant for those deployments that need multiple controllers. We have OTHER all-flash appliances for sale that DO offer this TODAY though.

0
0
dikrek

@still Anonymous Coward

You say:

"but they've lagged badly in the area of performance, and the performance options now are brutally expensive, certainly enough that the discerning customer is obliged to at least contemplate other options"

Lagged badly vs what? Based on what metrics? Brutally expensive vs what?

Vs something at least as functional AND reliable as NetApp high-performance options?

One of the world's largest companies chose our all-flash gear for one of the most high profile applications on the planet. There was some other gear (many vendors were tested) that was faster but that other gear also miserably failed the reliability tests. And this application is used to generate huge revenue every second.

So it all depends on what you want to achieve in the end.

Ultra high performance with questionable reliability is child's play to achieve, and can be done cost-effectively.

There are a lot of misconceptions in the industry and a lot of FUD flying around.

http://recoverymonkey.org/2014/09/18/when-competitors-try-too-hard-and-miss-the-point-part-two/

Thx

D

0
0
dikrek
Boffin

Hi folks, Dimitris from NetApp here (http://recoverymonkey.org).

First: Mostly anonymous comments - your feedback is like tears in the rain. Use your real name and disclose affiliation. It's the professional thing to do.

Second: Do read the whole article.

"The end is nigh" my left foot, Mr. Man Mountain... (who works for HP - I had to dig up that little tidbit from one of his past posts). Go take care of your own house instead of spreading sensationalistic FUD.

So here's the deal:

1. All-Flash FAS (AFF) is here TODAY and offers far more maturity, robustness, flexibility and functionality than any other AFA. Nobody can offer this particular combination in the AFA space today.

2. EF is here TODAY, offering great reliability, very short I/O pathlengths leading to low latency, and super high speeds, while remaining extremely cost-effective.

3. FlashRay (and more importantly MarsOS) is not a replacement for either - the vision for the final product far surpasses what the other AFAs are doing. The initial release is still appropriate for several deployments but people needing the extra features today should go with AFF or EF.

MarsOS has a ton of innovation - as time passes more will be revealed. But we looked at all the various architectures out there - and instead of developing from scratch, we could have bought another player if we thought they had something significant to bring to the table.

The capability to interoperate with ONTAP, for instance, is a big deal. We are trying to make having separate silos less painful, yet we recognize that one product cannot possibly do it all (ONTAP tries, and succeeds for 90% of the workloads out there).

We are building serious, future-proof tech to carry us for the long haul in the solid-state arena. Not the same value prop as small vendors whose only goal is to get acquired.

Building enterprise storage is not easy - making something go fast is easy, making it cheap is easy. Add reliability, flexibility, future-proofing etc. and it gets harder and harder.

MarsOS is designed to work with any CPU architecture and is extensible to any solid-state media type - not merely optimized for NAND flash. It's very important not to paint oneself into a corner.

Look at it from a big picture standpoint. I agree it's hard for some vendors since they cannot address the big picture.

Thx

D

6
3

Just how much bang does a FAS8040 box give you for 500,000 bucks

dikrek
Stop

About the SPC-1 benchmark

Hello all, Dimitris from NetApp here (recoverymonkey.org). I posted some of the stuff below on Nate's site but it's also germane here.

FYI: The SPC-1 benchmark "IOPS" are at 60% writes and are NOT a uniform I/O size, nor are they all random. So, for whoever is comparing SPC-1 ops to generic IOPS listed by other vendors - please don't. It's not correct.

Some background on performance: http://recoverymonkey.org/2012/07/26/an-explanation-of-iops-and-latency/

I have plenty of SPC-1 analyses on my site. I’ll post something soon on the new ones…

BTW: The “Application Utilization” is the far more interesting metric. RAID10 systems will always have this under 50% since, by definition, RAID10 loses half the capacity.

The Application Utilization in the recent NetApp 8040 benchmark was 37.11%, similar and even better than many other systems (for example, the HDS VSP had an Application Utilization of 29.25%. The 3Par F400 result had a very good Application Utilization of 47.97%, the 3Par V800 was 40.22% and the 3Par 7400 had an Application Utilization of 35.23%.)
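
For anyone unfamiliar with the metric: Application Utilization is the capacity the benchmark actually uses (the ASU capacity) divided by the total physical capacity of the tested configuration, which is why mirroring caps it below 50% before spares and overhead are even counted. A rough sketch of the ceilings (illustrative layout, not any specific submission):

    # Application Utilization = benchmark (ASU) capacity / total physical capacity.
    raw_tb = 100.0

    raid10_ceiling = (raw_tb / 2) / raw_tb          # mirroring: at most 50%
    parity_ceiling = (raw_tb * 14 / 16) / raw_tb    # e.g. a 14+2 parity group: 87.5%

    print("RAID10 ceiling: %.1f%%" % (100 * raid10_ceiling))
    print("14+2 parity ceiling: %.1f%%" % (100 * parity_ceiling))
    # Real submissions land well below these ceilings once spares,
    # metadata and deliberately unused capacity are factored in.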

The fact that we can perform so fast on a benchmark that’s 60% writes, where every other vendor needs RAID10 to play, says something about NetApp technology.

Thx

D

1
1

NetApp shows off tech specs of FAS array BIZ BEAST

dikrek

NVRAM is NOT write cache

Hello all, Dimitris from NetApp here (recoverymonkey.org).

NVRAM is not a write cache. It's more analogous to a DB redo log.

The actual cache is a RAM + SSD combo and is measured in terabytes.

Plus we have the free to download Flash Accel server-based cache that further augments array cache.

And no, we do write coalescing very differently than most arrays.

Thx

D

1
1

NetApp musters muscular cluster bluster for ONTAP busters

dikrek
Angel

Re: Clarification Re the cache

There are 2 forms of Flash Caching possible within a NetApp FAS system.

1. Flash Cache. Custom boards that slot into the controller. Upon normal failover, the cache contents are preserved and there's no need for re-warming upon fail-back. But since it's within a node, if you lose that node you lose part of your usable cache.

2. Flash Pool. SSD-based (lives in the disk shelves). This is per disk pool and follows disk pools around if a node goes down. Never needs re-warming no matter the type of outage.

Nate, I think #2 is what you're after. Yes we have it.

Thx

D

1
1
dikrek
Thumb Up

Clarification Re the cache

Hi Chris, Dimitris from NetApp here.

To clarify: the size of a SINGLE cache board can be up to 2TB.

You can have several of those in a system.

Max Flash Cache is 16TB usable (before dedupe) on a 6280 HA pair.

Then that times 4 for an 8-controller cluster... :)
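
Spelling out the arithmetic implied by the numbers above (a rough sketch, assuming the quoted maximums):

    # Flash Cache sizing implied by the figures above.
    per_board_tb = 2        # max size of a single Flash Cache board
    per_ha_pair_tb = 16     # max usable on a 6280 HA pair
    ha_pairs = 4            # an 8-controller cluster is 4 HA pairs

    print("boards per HA pair:", per_ha_pair_tb // per_board_tb)    # 8
    print("cluster total: %d TB" % (per_ha_pair_tb * ha_pairs))     # 64 TB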

Thx

D

1
0
