* Posts by dikrek

64 posts • joined 16 Nov 2010

NetApp cackles as cheaper FlashRay lurches out of the door

dikrek

Which is why this isn't just about price!

This isn't just about pricing.

We have addressed the objective points you're making:

1. The architecture the customer sees is only as complex as they need it to be with 8.3.1. You should ask to see a demo of the GUI and offer constructive criticism after that happens. It is similar in simplicity to much less featured systems.

2. See #1 - managing is way easier than before.

3. OPM v2 is the only monitoring tool most customers will need. And it's free to use. The admin GUI now also has plenty of performance info so most customers won't even need OPM (which does much more than competitor monitoring tools).

4. I beg to differ

5. Sales team arrogance? I've heard horror stories about competitor sales team arrogance. I wonder who all those people are.

6. Many customers moved to purpose-built systems because the enterprise systems were not as easy to use, inexpensive and fast as some startups. Now that we've changed all this... :)

Thx

D

0
1
dikrek

Re: Plenty of new stuff was announced!

Hyperbole much? :)

A slight downturn in NetApp sales can equal the combined sales of all the startup flash vendors.

Think about that for a second. That's how big we are.

Those same companies don't have a share price to watch fall, don't disclose financials, and are burning through VC funds with alacrity. I have news for you: VC funding isn't unlimited.

We have great products and the best flash solution in the marketplace.

#1 Storage Operating System

#1 Storage for Service Providers

#1 Converged Infrastructure

#1 Storage Provider to the Federal Government

#2 In the storage market

You're obviously right, these are the clear hallmarks of a sinking ship ;)

Thx

D

6
1
dikrek
Happy

Plenty of new stuff was announced!

Hi all, Dimitris from NetApp here.

Lots of stuff is new as of June 23rd - check here: http://recoverymonkey.org/2015/06/24/netapp-enterprise-grade-flash/

We understand this upsets competitors and people stuck in old thinking modes, but such is the way the cookie crumbles.

No, we will not apologize for causing angst to competitors ;)

Thx

D

6
3

NetApp's customers resisting Clustered ONTAP transition

dikrek

Re: Here's the bit I don't quite follow ..

You honestly think that's how people buy stuff?

0
0
dikrek

Re: The limitations you mention are either not there or rapidly going away

With 8.3, 7MTT is free for customers to use. You don't NEED migration services, but maybe your account team felt they would help. Point them my way if you want - actually, feel free to contact me. Plus I can't discuss any roadmap items in this type of forum.

EMC had to migrate XtremIO customers for free because they had TWO destructive upgrades in 1 year for a new product. Plus they can do it since there's a tiny number of XtremIO systems deployed vs ONTAP, VMAX or VNX. Not a lot of resources are needed to do it.

They won't migrate the huge numbers of VMAX or VNX customers for free...

But there are some new developments and we will be making migrations a lot less expensive going forward.

8.3 has more performance stats in the GUI than previous releases and future releases will bring even more. In the meantime you can use OPM (free).

I double-checked and the Exchange integration requiring RDMs or physical LUNs has to do with how Microsoft does Exchange VSS integration vs how SQL does it (VDS). It does require Microsoft to make some changes.

At the moment you can't flip a switch and convert a FAS to a MetroCluster - MetroCluster is far more than just sync replication and needs some extra hardware in order to be set up.

About perfstats - that tool goes far deeper than any normal performance monitoring program. For general performance work it's not really needed, but I have to admit it seems our support wants one for every case since it captures so much. I can talk to you about that once you reach out.

Thx

D

0
0
dikrek
Stop

The limitations you mention are either not there or rapidly going away

Hi all, Dimitris from NetApp here (recoverymonkey.org).

Technology evolves at a rapid clip - and ONTAP more rapidly than most think.

Think about it - we have needed a disruptive upgrade only TWICE since 1992 (TradVols->FlexVols, and 7mode->cDOT). Yet we allow full hardware re-use and don't force customers to buy all new stuff. It's not like a 7-mode system can't be re-initialized and join a cDOT cluster... :)

Other vendors force major migrations and hardware swaps upon customers every 3-5 years, and XtremIO needed TWO destructive upgrades in the past year alone. And most startups are too new - will their architectures stand the test of time well enough to need two or fewer disruptive upgrades over 20 years? Really?

Puts things in perspective a bit.

To address your points:

1. 7MTT is absolutely available for use by customers now, no PS or transition team needed.

2. E-Series is indeed not designed to make heavy use of the fancy capabilities - it's more an easy, fast, reliable I/O engine.

3. Look at the built-in GUI in ONTAP 8.3. Stats are there. We also DO provide performance stats via AutoSupport.

4. Exchange SnapManager needing RDMs: This is a Microsoft-imposed limitation. For SQL for instance we can use SMSQL even if the SQL VM is running on an NFS data store... :)

For the rest of the "limitations" - there are best practices and then there are true limitations. Please don't confuse the two.

ONTAP is still the most flexible storage OS by far. Does it do EVERYTHING? Nothing does. But it deals with more data center problems than any other storage OS extant today. Simple fact.

Thx

D

0
0

Gartner: Dell nowhere to be seen as storage SSD sales go flat

dikrek
Stop

Re: Figures Don't Add up

Hi all, Dimitris from NetApp (http://recoverymonkey.org).

This is an accounting issue. How does one track AFA sales? For Pure it's easy: everything is AFA.

From EMC, XtremIO is always an AFA. Easy to count.

For NetApp, when this report was run only EF counted as an AFA; AFF (All-Flash FAS) wasn't counted because, until now, there wasn't a strict all-flash FAS model that doesn't work with HDDs at all. The reality is NetApp sells huge amounts of SSD... as do many other vendors.

The story is similar for HDS, for example: I bet they sell a shedload of flash yet aren't high on the chart.

Gotta love it.

Thx

D

1
0

NetApp CTO Jay Kidd resigns and retires from the industry

dikrek
Stop

Dream on

Hi all, Dimitris from NetApp.

You can of course choose to believe whatever you see anonymously posted on the Internet.

Making claims is easy. "XtremIO is shelved!" "Pure is going out of business!"

Substantiating the claims is far harder.

Thx

D

0
0

NetApp's all-flash FAS array is easily in the top, er, six SPC-1 machines

dikrek
Happy

Things at NetApp are looking just fine

It's funny how some people make these predictions. NetApp is a huge player. It reminds me of pundits claiming Apple will go out of business any day now based on nonsense. Dream on.

The simple reality is that NetApp now not only has the most functional enterprise arrays out there (that's been the case for a while now) but can also offer ultra-high performance on a very difficult benchmark, on a system sized realistically, with RAID-6-equivalent protection instead of RAID10 and 2x the space efficiency of the other players in the Top Ten list.

Look at these results in perspective: sure, there are small players that are also fast. Speed is relatively easy. But nobody offers the combination ONTAP does: the sheer functionality, the richness of data management options, the insane flexibility, the maturity.

I really don't understand all the hate. We have an awesome, enterprise, multi-function, mature system that can offer very high performance. Get over it.

Thx

D

1
0
dikrek
Stop

Re: Really??

No, it's not like you say at all. Flash isn't automatically fast no matter what. It matters what array you use and what flash media you use.

Look at some of the submissions - their SSD performance is terrible compared to others.

You honestly believe that all platforms are created equal the moment you slap some SSDs inside?

SPC-1 isn't a marketing benchmark. Sure, you can game the config to get better speed (something I try to point out in my articles), but the SPC-1 IOPS for a platform are massively lower than the marketing numbers some "leaders in Flash Storage" use... it's a far harder test than marketing benchmarks showing 100% 4KB reads, for instance.

Thx

D

1
1
dikrek

Re: pretty nice results

Hi Nate,

No, we had other systems filled nicely before, and the key one to look at is Application Utilization, which shows the capacity used for the benchmark vs total capacity.

Thx

D

0
0

This post has been deleted by a moderator

Chief architect Beepy ready to take Pure’s flash somewhere new

dikrek

What do you mean "demoted"?

Hi all, Dimitris from NetApp here (recoverymonkey.org)

I'm still stuck on the "demoted" word in the article.

The project is alive and well.

Troglodyte comments starting in 3, 2, 1...

0
0

NetApp veep: 'We've shifted 750,000 all-flash arrays'. Er, really?

dikrek

Re: Allow me to clarify...

Hi Dave,

What tests did you run? We typically crush arrays that do inline dedupe and compression, as long as the test uses realistic data.

There are some important considerations if one uses certain tests with certain arrays.

http://bit.ly/1bKLZjT

For what it's worth, the current EF560 is more than 2x the speed of the 540 in many aspects.

Thx

D

0
0
dikrek
Stop

Really?

Nimble, really? You don't even have an all-flash product. And, if anything, one can claim your product is optimized to work with SATA HDDs - not even all types of HDDs, let alone SSDs. See where claims lead?

You're focusing on ONTAP, as is Nimble's habit, but we have both EF and ONTAP. Both mature platforms. Neither originally designed for flash media, since it didn't exist back then. Both crazy fast with flash today.

Guess what... it's all code. Code can be changed. And it has been. There is this type of creature, you see, called a developer... we have many, many of them :)

Witness the super high performance numbers produced by products that all were "originally" designed for spinning disk. NetApp, HDS, HP - all have great performance with SSDs.

And, if anything, one can argue that ONTAP is friendlier to SSDs than most architectures out there. By intention? Not really - let's call it a happy accident. But if you understand ONTAP internals, it is clear that the core of it has nothing to do with spinning rust at all; it's all about data management.

Last I checked, we are in this business to solve customer problems.

And please spare me the "will never happen" arguments. I can make similar ones about your company but I will take the high road. Plus it smells a bit like roadmap-revealing bait, which I won't take.

Thx

D

4
1
dikrek
Happy

Allow me to clarify...

Hi all, Dimitris from NetApp here (http://recoverymonkey.org). So much hate and misinformation. And so many people hiding behind the anonymity the Internet provides... man up and disclose your name and affiliation, otherwise your comments are like tears in the rain.

So...

Mr. "klaxhu": The 750,000 number is actually close to 1 million systems now. I need to have words with the people that provided an old number, but that's not all flash units (the article actually clarifies that at the end). But, as someone rightly pointed, it's a highly mature platform with extra tweaks for flash.

Mr. "Anonymous Coward": Entering the AFA market: we've been in it firmly... and winning, especially when it comes to performance. Maybe even against the company you represent/like.

Mr. "M.B": The EF always was available as a dual controller system.

EF has utterly crushed the competition in multiple bakeoffs in both performance and reliability, and especially consistently low latency. Excellent for a tactical deployment, especially in an environment where low latency is key - where it destroys the competitors that have to waste cycles doing garbage cleanup, compression and dedupe.

All-Flash FAS has all the features, tighter application integration than any competitor, and plugs right into the extremely rich ONTAP ecosystem, with live data migration between all-flash cluster nodes and ones with other media types. A great choice for a more strategic deployment.

NetApp is extremely well positioned for Flash today and in the future.

Thx

D

5
1

Hitachi smashes SPC-1 benchmark, boasts: We HAF ways of crushing 2 million IOPs

dikrek
Stop

Re: Nice and slow!

Hi all, Dimitris from NetApp here.

7 million IOPS means nothing. What kind of IOPS? What latency?

Grid architectures always were able to get big numbers but latency typically takes a hit.

Perchance read a primer on storage performance:

http://recoverymonkey.org/2012/07/26/an-explanation-of-iops-and-latency/

Storage-agnostic.

Thx

D

6
0

NetApp embiggens E-Series flashbox: Gee, a benchmark... thanks

dikrek
FAIL

Re: The low latency is the star here

Sorry Archaon - the E-Series is as pure a block array as they come. You may want to familiarize yourself with our offerings.

Do visit my article - at 500 microseconds, the EF560 does 196K SPC-1 IOPS at $0.68/op and the 3Par does 129K IOPS at $1.15/op. http://bit.ly/18oWI1R - look at the table towards the end.
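To put those $/op figures in perspective, here's a quick back-of-the-envelope sketch, assuming the usual SPC-1 convention that price-performance is simply total tested price divided by SPC-1 IOPS, so it can be reversed; the implied totals are rough illustrations, not official disclosure numbers.

# Rough sketch: SPC-1 price-performance = total tested price / SPC-1 IOPS,
# so reversing it gives the implied total price at a given latency point.
# Figures are the ones quoted above; the totals are illustrative, not official numbers.
def implied_total_price(iops, dollars_per_op):
    return iops * dollars_per_op

print(implied_total_price(196_000, 0.68))  # EF560 at 500 microseconds: roughly $133K
print(implied_total_price(129_000, 1.15))  # 3Par at the same latency point: roughly $148K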

My point is - it depends on what you want. Functionality? Then All-Flash FAS offers more than anyone else at great speeds. Crazy low latency? Then EF has everyone beat for the money.

Thx

D

1
0
dikrek

Re: The low latency is the star here

Look at the latency. The key is: what are the SPC-1 IOPS at each latency point?

1
0
dikrek
Stop

The low latency is the star here

Hi everyone, Dimitris from NetApp here.

The EF560 result was meant to be impressive in 2 ways:

1. Low latency for the money

2. High IOPS at a low latency

Maybe this article will clear it up:

http://recoverymonkey.org/2015/01/27/netapp-posts-top-ten-spc-1-price-performance-results-for-the-new-ef560-all-flash-array/

Thx

D

0
0

NetApp's running with the big dogs: All-flash FlashRay hits the street

dikrek

Re: @still Anonymous Coward

Read my original post on this thread - FlashRay today is not meant for those deployments that need multiple controllers. We have OTHER all-flash appliances for sale that DO offer this TODAY though.

0
0
dikrek

@still Anonymous Coward

You say:

"but they've lagged badly in the area of performance, and the performance options now are brutally expensive, certainly enough that the discerning customer is obliged to at least contemplate other options"

Lagged badly vs what? Based on what metrics? Brutally expensive vs what?

Vs something at least as functional AND reliable as NetApp high-performance options?

One of the world's largest companies chose our all-flash gear for one of the most high-profile applications on the planet. There was some other gear (many vendors were tested) that was faster, but that other gear also miserably failed the reliability tests. And this application is used to generate huge revenue every second.

So it all depends on what you want to achieve in the end.

Ultra high performance with questionable reliability is child's play to achieve, and can be done cost-effectively.

There are a lot of misconceptions in the industry and a lot of FUD flying around.

http://recoverymonkey.org/2014/09/18/when-competitors-try-too-hard-and-miss-the-point-part-two/

Thx

D

0
0
dikrek
Boffin

Hi folks, Dimitris from NetApp here (http://recoverymonkey.org).

First: Mostly anonymous comments - your feedback is like tears in the rain. Use your real name and disclose affiliation. It's the professional thing to do.

Second: Do read the whole article.

"The end is nigh" my left foot, Mr. Man Mountain... (who works for HP - I had to dig up that little tidbit from one of his past posts). Go take care of your own house instead of spreading sensationalistic FUD.

So here's the deal:

1. All-Flash FAS (AFF) is here TODAY and offers far more maturity, robustness, flexibility and functionality than any other AFA. Nobody can offer this particular combination in the AFA space today.

2. EF is here TODAY, offering great reliability, very short I/O pathlengths leading to low latency, and super high speeds, while remaining extremely cost-effective.

3. FlashRay (and more importantly MarsOS) is not a replacement for either - the vision for the final product far surpasses what the other AFAs are doing. The initial release is still appropriate for several deployments but people needing the extra features today should go with AFF or EF.

MarsOS has a ton of innovation - as time passes, more will be revealed. But we looked at all the various architectures out there - and instead of developing from scratch, we could have bought another player if we thought they had something significant to bring to the table.

The capability to inter-operate with ONTAP for instance is a big deal. We are trying to make having separate silos not be too painful, yet recognize that one product cannot possibly do it all (ONTAP tries and succeeds for 90% of the workloads out there).

We are building serious, future-proof tech to carry us for the long haul in the solid state arena. Not the same value prop as small vendors whose only goal is to get acquired.

Building enterprise storage is not easy - making something go fast is easy, making it cheap is easy. Add reliability, flexibility, future-proofing etc. and it gets harder and harder.

MarsOS is designed to work with any CPU architecture and is extensible to any solid state type - not merely optimized for NAND flash. Very important to not paint oneself into a corner.

Look at it from a big picture standpoint. I agree it's hard for some vendors since they cannot address the big picture.

Thx

D

6
3

Just how much bang does a FAS8040 box give you for 500,000 bucks

dikrek
Stop

About the SPC-1 benchmark

Hello all, Dimitris from NetApp here (recoverymonkey.org). I posted some of the stuff below on Nate's site but it's also germane here.

FYI: The SPC-1 benchmark "IOPS" are at 60% writes and are NOT a uniform I/O size, nor are they all random. So, for whoever is comparing SPC-1 ops to generic IOPS listed by other vendors - please don't. It's not correct.

Some background on performance: http://recoverymonkey.org/2012/07/26/an-explanation-of-iops-and-latency/

I have plenty of SPC-1 analyses on my site. I’ll post something soon on the new ones…

BTW: The “Application Utilization” is the far more interesting metric. RAID10 systems will always have this under 50% since, by definition, RAID10 loses half the capacity.

The Application Utilization in the recent NetApp 8040 benchmark was 37.11%, similar to or even better than many other systems (for example, the HDS VSP had an Application Utilization of 29.25%, the 3Par F400 result had a very good Application Utilization of 47.97%, the 3Par V800 was 40.22%, and the 3Par 7400 had an Application Utilization of 35.23%).
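For anyone who wants to sanity-check the metric, here's a minimal sketch assuming the standard SPC-1 definition (Application Utilization = ASU capacity / total physical capacity); the 100TB raw figure is an illustrative placeholder, not a number from any full disclosure report.

# Minimal sketch: Application Utilization = ASU (application) capacity / physical capacity.
# The 100TB raw figure is an illustrative placeholder, not from any SPC-1 disclosure.
def application_utilization(asu_tb, physical_tb):
    return asu_tb / physical_tb

raw_tb = 100.0
print(application_utilization(raw_tb * 0.5, raw_tb))     # RAID10 ceiling: 0.5, since mirroring halves capacity
print(application_utilization(raw_tb * 0.3711, raw_tb))  # 0.3711, i.e. a 37.11% result like the 8040's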

The fact that we can perform so fast on a benchmark that’s 60% writes, where every other vendor needs RAID10 to play, says something about NetApp technology.

Thx

D

1
1

NetApp shows off tech specs of FAS array BIZ BEAST

dikrek

NVRAM is NOT write cache

Hello all, Dimitris from NetApp here (recoverymonkey.org).

NVRAM is not a write cache. It's more analogous to a DB redo log.

The actual cache is a RAM + SSD combo and is measured in TB.

Plus we have the free to download Flash Accel server-based cache that further augments array cache.

And no, we do write coalescing very differently than most arrays.

Thx

D

1
1

NetApp musters muscular cluster bluster for ONTAP busters

dikrek
Angel

Re: Clarification Re the cache

There are 2 forms of Flash Caching possible within a NetApp FAS system.

1. Flash Cache. Custom boards that slot into the controller. Upon normal failover, the cache contents are preserved and there's no need for re-warming upon fail-back. But since it's within a node, if you lose that node you lose part of your usable cache.

2. Flash Pool. SSD-based (lives in the disk shelves). This is per disk pool and follows disk pools around if a node goes down. Never needs re-warming no matter the type of outage.

Nate, I think #2 is what you're after. Yes we have it.

Thx

D

1
1
dikrek
Thumb Up

Clarification Re the cache

Hi Chris, Dimitris from NetApp here.

To clarify: the size of a SINGLE cache board can be up to 2TB.

You can have several of those in a system.

Max Flash Cache is 16TB usable (before dedupe) on a 6280 HA pair.

Then that times 4 for an 8-controller cluster... :)

Thx

D

1
0
dikrek
Gimp

Re: Aggregate up to 400TB - LUN Size?

Howdy, Dimitris from NetApp here (www.recoverymonkey.org).

Max LUN size: 16TB.

Max exported share: 20PB.

Mr. Draco - I did check all your previous posts here on The Register. Highly critical of everyone but one vendor. It's good etiquette to disclose affiliation.

Thx

D

2
0

NetApp snoozing at the wheel of incumbency juggernaut, says chap

dikrek

Re: Interesting that this gets posted during NetApp's fiscal year end

Oh - maybe the analysts should read this:

http://www.barnetttalks.com/2013/03/the-slow-incumbent-myth.html

0
0
dikrek

Interesting that this gets posted during NetApp's fiscal year end

Hello all,

Dimitris from NetApp here (www.recoverymonkey.org)

I did find it interesting that such a "doom and gloom" article is posted during NetApp's fiscal year end.

Sure - we are an incumbent now, having the #1 storage OS in the world: http://searchstorage.techtarget.com/NetApp-Data-ONTAP-Is-Now-the-Industrys-No-1-Storage-OS (this is by revenue - the same holds by sheer capacity, too).

It's similar to the attacks Apple receives - another company that innovated, was "different", and is now bigger than most. And they're not going anywhere despite all the attacks. And they haven't stopped innovating.

Innovation needs to be followed by a maturity period to encourage broad adoption of said innovation.

The various flash arrays out there for example offer innovation but not maturity yet - and these waves of innovation and maturity can take years.

Check out the "Hype Cycle": http://en.wikipedia.org/wiki/Hype_cycle

Thx

D

0
0

How many VMs can you stuff in that box? How to get into the VDI biz

dikrek
Angel

Storage is just one aspect of overall VDI cost

Hi all, Dimitris from NetApp here (www.recoverymonkey.org).

No flogging of wares (in either sense of the word).

Just wanted to mention that storage is just one aspect of overall VDI cost. You have OS licenses, connection broker licenses, hypervisor licenses, servers to be purchased, thin devices, really solid networking, and all manner of other stuff that's usually not free.

Focusing so much on the storage is like saying:

I want to build a blender, and I'm just focusing on the motor.

A blender as a solution is more than the motor. Yes, a reliable motor is really important, but you need a solid housing, the bowl, blades, motor control, and a reliable way to transmit the motor movement to the blades.

D

0
0

NetApp: Flash as a STORAGE tier? You must be joking

dikrek
Coat

About NetApp performance

Hi all, Dimitris from NetApp here.

It would be nice if the various anonymous posters that seem to speak with such authority divulged who they are and where they work (and even nicer if it were true).

NetApp systems run the largest DBs on the planet, including a bunch of write-intensive and latency-sensitive workloads.

ONTAP/WAFL holds its own just fine: http://bit.ly/Mp4uu1

In addition, it's all about maintaining performance without giving up advanced data management features and tight application integration.

Just as dragsters are built to race in a straight line for a short period of time, there are arrays that are super-fast but are not reliable and/or do not have rich data management features.

Flash is just another way to get more speed for some scenarios.

Where it's located can help, too.

I don't care what array you have at the back end, at some point it runs out of controller steam.

What if you could have 2000 application hosts each with their own augmented cache that is aware of what's happening in the back-end storage?

Would those 2000 hosts not have more aggregate performance potential than any array can handle?

D

0
2

CEO: NetApp revenues falling, but 'we aren't in a clampdown'

dikrek
Stop

It's not about file serving, it's about data serving

Hello all, Dimitris from NetApp here.

Indeed we invented unified storage, and it has been that way for 11 years now. There's no extra box needed to do the various protocols and replication - the same OS does it all. Putting a gateway in front of a storage system doesn't necessarily create a coherent solution (invariably the block and file bits end up with wildly different capabilities).

There are actual customer benefits to true unified storage:

http://bit.ly/MTCktG and

http://bit.ly/MfPLyc

Reliability and performance are great.

By default there's protection against lost writes, torn pages and misplaced writes, in addition to standard dual-parity write-accelerated RAID.

No, it's more about a difference in philosophy. NetApp does few acquisitions and prefers to do things in-house; the rest either OEM NetApp gear and/or rely heavily on acquisitions.

D

2
1

NetApp: Steenkin' benchmarks – we're quicker than 3PAR

dikrek
Stop

Explanation of 100% load

Hello all, Dimitris from NetApp here (the person that wrote the article Chris referenced).

It's here: http://bit.ly/Mp4uu0

Anyway, to clarify since there seems to be some confusion:

The 100% load point is a bit of a misnomer since it doesn't really mean the arrays tested were maxed out. Indeed, most of the arrays mentioned could sustain a bigger workload given higher latencies. 3Par just decided to show what the IOPS at that much higher latency level would be.

The SPC-1 load generators are told to run at a specific target IOPS and that is chosen to be the load level, the goal being to balance cost, IOPS and latency.

So yes, it absolutely makes sense to look at the gear necessary to achieve the requisite IOPS vs latency, and the cost to do so.

Databases like their low latency.

And yes, all-SSD arrays will of course provide overall lower latency - usually though one can't afford a reliable, enterprise all-SSD array for a significant amount of storage. You think the prices listed are high? Think again.

What NetApp does with the combination of megacaching and WAFL write acceleration is provide a lot of the benefit of an all-flash architecture at a far more palatable cost (especially given all the other features NetApp has).

D

1
2

NetApp leapfrogs IBM in storage race for second place

dikrek
Megaphone

The NetApp numbers are even better than stated

D from NetApp here...

The IDC numbers don't count the tons of OEM NetApp boxes sold by many companies.

The numbers shown count only the NetApp-branded boxes sold by NetApp.

Once you add all the OEM business the percentage grows quite a bit.

D

0
0

Isilon Maverick flyboy in EMC World flyby

dikrek
Stop

NetApp does have a big SPEC SFS result for scale-out

Hello all, D from NetApp here.

Check analysis here: http://bit.ly/K2FBz1

Unfortunately, most people don't understand how to read the SPEC submissions; I hope my article clarifies it a bit.

D

0
0

Did EMC buy Xtremio to fend off NetApp

dikrek
Megaphone

Re: Isilon slow?

No, the highest published Isilon result is 1,112,705 SPEC SFS 2008 NFS ops.

Not 1.6 million.

Linky:

http://bit.ly/s1IFH6

0
0
dikrek
Stop

Everyone is a WAFL expert all of a sudden... :)

J.T. - I don't think you quite understand how ONTAP works. It doesn't write randomly - it actually takes great care in meticulously selecting where writes will go. When writing to flash, we don't need to be quite as meticulous. But for HDD it helps a lot.

In addition, there are important technologies like read reallocation that will, upon a read, sequentialize data that was written randomly.

At a block level.

Amazing for databases, where frequently people will write data randomly into the DB and then a job needs to read it sequentially (sequential read after random write).

I'm not aware of any other disk array that will do this optimization for the end user and leave the blocks optimized for the future (this has nothing to do with caching and readahead).

Not to mention insane new stuff coming in the next few months.

Unfortunately, way too many people think ONTAP is still where it was 20 years ago. Or maybe fortunately, in the case of competitors, since it's so easy to discredit them... :)

The write allocator has been rewritten multiple times in the last 20 years :)

Not to mention everything else, including the entire kernel.

Very relevant with respect to competitor documentation - I often see stuff from them that was maybe a little valid 10 years ago, especially from the smaller vendors that can't afford the resources necessary to understand other people's gear.

This is IT, folks. I'd argue that if you stop intimately understanding a technology for more than 2 years, your knowledge of it is completely obsolete, to the point of being dangerously so.

Here's some fun reading:

http://bit.ly/IVr0Xy

D

0
2
dikrek
Stop

Apples and Oranges

Hi all, Dimitris from NetApp here.

@Nate:

Indeed, ONTAP running in Cluster-Mode and Isilon aren't really designed in similar ways, as I explain towards the end of the article here: http://bit.ly/uuK8tG

Isilon will be strong for high throughput, yet there are other ways to get much higher throughput: http://bit.ly/zqqJ3G

Ultimately, there is no single system that "does it all" - meaning one that gives you all the protocols, and dedupe, and fancy app integration, and fancy snaps, and fancy replication, and can stripe a volume across umpteen controllers at low latency for random I/O.

If you want to run DBs or VMs at low latency and take advantage of all the cool features mentioned above and be able to get some flexibility with the cluster for migrations and upgrades at zero downtime, you'd be better off using ONTAP Cluster-Mode than Isilon, SONAS, etc.

If you have a workload that needs to live in a single gigantic container bigger than 100TB, with extremely high throughput requirements (not IOPS), and you can't break that workload into chunks smaller than 100TB, then, for now, there are alternative solutions.

Remember that ONTAP Cluster-Mode is designed to be used in general-purpose scenarios.

D

0
2

IDC Storage Tracker: NetApp is losing market share

dikrek

It helps to understand where the numbers are coming from

Hi All, Dimitris from NetApp here.

According to IDC figures, if one looks at the Open Systems Networked Storage market share (where NetApp plays - so let's discount storage sold inside servers), NetApp remains #2 after EMC, even though EMC has a plethora of products that are counted yet are not general-purpose storage appliances.

The NetApp share was 14.5% according to IDC, up from 13.3% a year ago.

That’s not called “losing market share”.

Ant Evans got it right. What the figures don’t show is that IDC doesn’t count the ton of NetApp OEM product resold by the likes of IBM, Oracle, Dell, and more. Instead, those sales show up as IBM, Oracle and Dell sales.

Once that’s taken into account, the NetApp share grows to over 21%, or a very very big bump up indeed.

“Lies, damn lies and statistics”...

EMC is still higher at 31.9%, but the gap is narrowing…

Oh, and according to IDC figures, NetApp is #1 in replication software market share, ahead of EMC: 32.7% vs 31.4%. Close, but if one considers the relative size of the two companies, and how many EMC products are counted in that percentage, the NetApp figure is even more impressive.

Thoughts?

Thx

D

0
1

Fusion-io demos billion IOPS server config

dikrek

I understand how flash works...

However, 1 billion IOPS at 64 bytes will not translate to 1 billion IOPS at 8K, so something like SQL won't run at 1 billion IOPS on that config.

It has nothing to do with whether it's disks or not.
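To put some numbers behind that, here's a quick sketch that (generously) assumes aggregate bandwidth is the only constraint; in reality, per-I/O overheads would drag the 8K number down even further.

# Quick sketch: hold the aggregate bandwidth constant and change the I/O size.
# Assumes bandwidth is the only limit, which is generous - per-I/O overheads make it worse.
io_count = 1_000_000_000          # the advertised 1 billion IOPS
small_io = 64                     # bytes per I/O in the demo
db_io = 8 * 1024                  # a typical 8K database I/O
bandwidth = io_count * small_io   # 64 GB/s of payload
print(bandwidth / db_io)          # ~7.8 million IOPS at 8K - nowhere near a billion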

0
0
dikrek

Did anyone miss that they were doing 64-byte I/O?

Hello all, D from NetApp here.

Bear in mind they were doing 64-byte I/O transfers.

Typical apps, especially DBs, operate at 4K+ (mostly 8K+ lately).

Probably not a billion IOPS in that case then... :)

D

0
0

iPhone 4S is for failures who work in coffee shops - Samsung

dikrek
Stop

the best device is the one you like using

I was a Blackberry user for years, then moved to the iPhone.

Plenty of things annoy me about the iPhone but overall I like it far more than the 'berry.

I tried other stuff, including Android, and it was missing something FOR ME.

Either the build was not good, or the screen not nice, or the device too big, or the phone not clear enough - I couldn't find an Android device I enjoyed using OVERALL.

It's not about being a conformist - on the contrary, I love supporting the underdog, as long as the underdog has something special that can enhance my day-to-day life.

I think Microsoft may have a chance against the iPhone with their new-paradigm UI in Windows Phone 7.5 Mango; it's a fresh approach.

I'll let fanbois be fanbois and go back to my 4S.

I upgraded from a 3GS (that I used the hell out of and it still looks like new) and I really like the new phone.

No battery issues - I easily get 2x the life of my 3GS.

The screen is incredible.

The graphics speed is way better than any other phone out there (for the Android fanbois: why don't you check out some benchmarks and some actual games?).

The upgrade process was as seamless as possible, all automated.

It's not just the device - it's everything supporting the device.

The only thing I'd change?

I still don't like the fact that it's essentially 2 pieces of glass with a metal spacer in between.

Unless you have a case it will break with a high degree of certainty if it falls on something hard.

D

4
4

NetApp accused of short-stroking its new hardness

dikrek
Stop

Isilon short-stroked more

D from NetApp here...

Please everyone - you can go ahead to the spec.org site and read both submissions. Tons of detail without he-said-she-said theatricals.

NetApp: http://bit.ly/utDOQR

Isilon: http://bit.ly/s1IFH6

Isilon used about 1/7th of all available space (counting over 800TB of space).

NetApp used about 1/3rd of all available space (counting over 500TB of space).

All clearly stated in the official submissions.

Do the math.
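Here's that math as a quick sketch, using the 128TB-of-864TB Isilon figure from the SPEC disclosure (quoted elsewhere in these comments) and the rough "about 1/3rd of over 500TB" NetApp figure stated above; treat the NetApp numbers as approximate.

# "Do the math": fraction of available capacity actually exercised in each submission.
# The Isilon figure is from the SPEC disclosure; the NetApp one is the rough ratio stated above.
isilon_used_tb, isilon_total_tb = 128.0, 864.0
netapp_total_tb = 500.0               # "over 500TB", approximate
netapp_used_tb = netapp_total_tb / 3  # "about 1/3rd"
print(f"Isilon: {isilon_used_tb / isilon_total_tb:.0%}")  # ~15%, roughly 1/7th
print(f"NetApp: {netapp_used_tb / netapp_total_tb:.0%}")  # ~33%, roughly 1/3rd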

Anyway, the point of the NetApp architecture is primarily that it's a general-purpose unified storage system, with the ability to have up to 24 nodes clustered together.

It's not a niche architecture like the scale-out NAS vendors'.

As a result, there are a couple of things the niche scale-out NAS architectures can do that NetApp can't, and about 100 things NetApp can do that the niche scale-out NAS vendors can't.

Deciding on what you need for your business depends on what features you need.

Read here for detailed analysis: http://bit.ly/uuK8tG

Thx

D

2
0

NetApp punches Isilon right in the scaled-out clusters

dikrek
Stop

No, we clustered 12x 6240 HA systems, not 24!

The 6080 in the submissions is a 2-controller system.

The 24 nodes come in 12 pairs.

So 12 systems to get 10x the performance is pretty darn good scalability for a scale-out architecture that has the systems connected over a back-end cluster network, which itself imposes some drop in performance.

Is this more clear?

D

2
0
dikrek

Regarding the number of mount points

Looking at the Isilon submission:

"Uniform Access Rule Compliance

Each load-generating client hosted 64 processes. The assignment of processes to network interfaces was done such that they were evenly divided across all network paths to the storage controllers. The filesystem data was striped evenly across all disks and storage nodes."

How are the 64 processes per client being generated in your opinion? Single mount point per client?

0
0
dikrek

sure it's scale-out

Just not the exact same way Isilon does it. Doesn't necessarily tackle the same problems, either.

What do you mean we can't do "real quotas, replication or snapshots"? Those are things more deeply ingrained into the NetApp DNA than on any other platform :)

3
0
dikrek
Happy

It helps to understand the details

Hi Nate,

Short-stroking?? Isilon only used about 128TB of the 864TB available! Go to my post for more details. In general, exported vs usable means nothing performance-wise for NetApp due to the way WAFL works.

Isilon doesn't do typical RAID either - it's per file - so your observations are not quite correct. More education is needed to understand both architectures; they are as different from each other as they could possibly be... and don't solve all the same problems.

Anyway - not sure what your affiliation is but if you're on the customer side of things you should be happy we are all trying to make faster, more reliable and more functional boxes for y'all! :)

D

0
0
dikrek
Thumb Up

It's important to realize it's really one cluster

Hi all, D from NetApp.

It's crucial to realize that the NetApp submission is one legit cluster with data access and movement capability all over the cluster and a single management interface, and NOT NAS gateways on top of several unrelated block arrays.

See here for a full explanation, towards the end: http://bit.ly/uuK8tG

D

3
1

NetApp scores video benchmark wins

dikrek

I'm so misunderstood :)

We agree in general. The more "stuff" an array does, the more overhead there can be for some operations.

The simpler an array is, the easier it is to do some operations.

It's all a matter of tradeoffs, as the vendors that hitherto had "simple" arrays now realize how difficult it is to maintain high performance and add all the "stuff". Notice all the performance caveats from a certain large storage vendor when someone wants to implement autotiered storage pools with thin provisioning.

My statement remains, let me clarify:

You can get high sequential speeds out of FAS but it will cost a lot more to get them than doing it with Engenio.

And indeed that's one of the reasons we bought Engenio. A lot of environments don't need the extra "stuff" FAS has to offer, and just need a simpler box that is less expensive and can read/write sequentially very very quickly.

And, as a counterpoint, I still maintain that, given the amount of "stuff" FAS can do, the performance is unmatched.

It all depends on what you're trying to do.

0
0
