* Posts by dikrek

41 posts • joined 16 Nov 2010

Just how much bang does a FAS8040 box give you for 500,000 bucks

dikrek

About the SPC-1 benchmark

Hello all, Dimitris from NetApp here (recoverymonkey.org). I posted some of the stuff below on Nate's site but it's also germane here.

FYI: The SPC-1 benchmark "IOPS" are at 60% writes and are NOT a uniform I/O size, nor are they all random. So, for whoever is comparing SPC-1 ops to generic IOPS listed by other vendors - please don't. It's not correct.

Some background on performance: http://recoverymonkey.org/2012/07/26/an-explanation-of-iops-and-latency/
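To illustrate (a toy model with made-up numbers, NOT the actual SPC-1 specification), an "IOPS" figure is meaningless without the workload mix behind it:

# Toy model, NOT the real SPC-1 mix. Same array, different workloads,
# very different achievable "IOPS". All numbers are illustrative only.
def achievable_iops(read_fraction, io_size_kb, read_cost_us=100, write_cost_us=250):
    avg_cost_us = read_fraction * read_cost_us + (1 - read_fraction) * write_cost_us
    avg_cost_us *= io_size_kb / 4.0   # bigger I/Os cost proportionally more here
    return 1_000_000 / avg_cost_us    # ops per second for one outstanding stream

print(achievable_iops(1.0, 4))    # 100% 4K reads: ~10,000 "IOPS"
print(achievable_iops(0.4, 16))   # 60% writes at larger sizes: ~1,300 "IOPS"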

I have plenty of SPC-1 analyses on my site. I’ll post something soon on the new ones…

BTW: The “Application Utilization” is the far more interesting metric. RAID10 systems will always have this under 50% since, by definition, RAID10 loses half the capacity.

The Application Utilization in the recent NetApp 8040 benchmark was 37.11%, similar to, and in many cases better than, other systems (for example, the HDS VSP had an Application Utilization of 29.25%; the 3Par F400 result had a very good Application Utilization of 47.97%; the 3Par V800 was at 40.22%; and the 3Par 7400 had an Application Utilization of 35.23%).
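For anyone who wants to sanity-check the metric (assuming the SPC-1 definition of Application Utilization as ASU capacity divided by total priced physical capacity; the capacities below are hypothetical):

# Application Utilization = application (ASU) capacity / physical capacity
def app_utilization(asu_tb, physical_tb):
    return 100.0 * asu_tb / physical_tb

print(app_utilization(50, 135))  # ~37% - the ballpark of the 8040 result
print(app_utilization(50, 100))  # a zero-overhead RAID10 config tops out at 50%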

The fact that we can perform so fast on a benchmark that’s 60% writes, where every other vendor needs RAID10 to play, says something about NetApp technology.

Thx

D

NetApp shows off tech specs of FAS array BIZ BEAST

dikrek

NVRAM is NOT write cache

Hello all, Dimitris from NetApp here (recoverymonkey.org).

NVRAM is not a write cache. It's more analogous to a DB redo log.

The actual cache is a RAM + SSD combo and is measured in terabytes.

Plus we have the free to download Flash Accel server-based cache that further augments array cache.

And no, we do write coalescing very differently from how most arrays do it.
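Conceptually, the write path looks something like this (a greatly simplified sketch, nothing like the actual ONTAP code):

# NVRAM as a journal, not a cache (greatly simplified sketch)
class Controller:
    def __init__(self):
        self.nvram_log = []   # battery-backed intent log (the "redo log")
        self.ram_cache = {}   # the actual write buffer lives in RAM

    def write(self, block, data):
        self.nvram_log.append((block, data))  # logged, so it survives a crash
        self.ram_cache[block] = data          # buffered for coalescing
        return "ACK"                          # ack as soon as it's logged

    def flush(self, disk):
        # write coalesced, carefully-placed batches to disk, then free the log
        for block, data in sorted(self.ram_cache.items()):
            disk[block] = data
        self.ram_cache.clear()
        self.nvram_log.clear()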

Thx

D

NetApp musters muscular cluster bluster for ONTAP busters

dikrek

Re: Clarification Re the cache

There are 2 forms of Flash Caching possible within a NetApp FAS system.

1. Flash Cache. Custom boards that slot into the controller. Upon normal failover, the cache contents are preserved and there's no need for re-warming upon fail-back. But since it's within a node, if you lose that node you lose part of your usable cache.

2. Flash Pool. SSD-based (lives in the disk shelves). This is per disk pool and follows disk pools around if a node goes down. Never needs re-warming no matter the type of outage.

Nate, I think #2 is what you're after. Yes we have it.

Thx

D

dikrek

Clarification Re the cache

Hi Chris, Dimitris from NetApp here.

To clarify: the size of a SINGLE cache board can be up to 2TB.

You can have several of those in a system.

Max Flash Cache is 16TB usable (before dedupe) on a 6280 HA pair.

Then that times 4 for an 8-controller cluster... :)
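The arithmetic, for anyone counting:

# the Flash Cache math above
per_board_tb = 2
per_ha_pair_tb = 16            # 6280 HA pair maximum (before dedupe)
ha_pairs = 4                   # 8 controllers = 4 HA pairs
print(per_ha_pair_tb * ha_pairs)   # 64TB of Flash Cache across the cluster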

Thx

D

dikrek

Re: Aggregate up to 400TB - LUN Size?

Howdy, Dimitris from NetApp here (www.recoverymonkey.org).

Max LUN size: 16TB.

Max exported share: 20PB.

Mr. Draco - I did check all your previous posts here on The Register. Highly critical of everyone but one vendor. It's good etiquette to disclose affiliation.

Thx

D

NetApp snoozing at the wheel of incumbency juggernaut, says chap

dikrek

Re: Interesting that this gets posted during NetApp's fiscal year end

Oh - maybe the analysts should read this:

http://www.barnetttalks.com/2013/03/the-slow-incumbent-myth.html

dikrek

Interesting that this gets posted during NetApp's fiscal year end

Hello all,

Dimitris from NetApp here (www.recoverymonkey.org)

I did find it interesting that such a "doom and gloom" article is posted during NetApp's fiscal year end.

Sure - we are an incumbent now, having the #1 storage OS in the world: http://searchstorage.techtarget.com/NetApp-Data-ONTAP-Is-Now-the-Industrys-No-1-Storage-OS (this is by revenue - the same holds by sheer capacity, too).

It's similar to the attacks Apple receives: another company that innovated, was "different", and is now bigger than most. They're not going anywhere despite all the attacks, and they haven't stopped innovating.

Innovation needs to be followed by a maturity period to encourage broad adoption of said innovation.

The various flash arrays out there, for example, offer innovation but not yet maturity - and these waves of innovation and maturity can take years.

Check out the "Hype Cycle": http://en.wikipedia.org/wiki/Hype_cycle

Thx

D

How many VMs can you stuff in that box? How to get into the VDI biz

dikrek

Storage is just one aspect of overall VDI cost

Hi all, Dimitris from NetApp here (www.recoverymonkey.org).

No flogging of wares (in either form of the word's meaning).

Just wanted to mention that storage is just one aspect of overall VDI cost. You have OS licenses, connection broker licenses, hypervisor licenses, servers to be purchased, thin devices, really solid networking, and all manner of other stuff that's usually not free.

Focusing so much on the storage is like saying:

I want to build a blender, and I'm just focusing on the motor.

A blender as a solution is more than the motor. Yes, a reliable motor is really important, but you need a solid housing, the bowl, blades, motor control, and a reliable way to transmit the motor movement to the blades.

D

NetApp: Flash as a STORAGE tier? You must be joking

dikrek

About NetApp performance

Hi all, Dimitris from NetApp here.

It would be nice if the various anonymous posters who seem to speak with such authority divulged who they are and where they work (and even nicer if it were true).

NetApp systems run the largest DBs on the planet, including a bunch of write-intensive and latency-sensitive workloads.

ONTAP/WAFL holds its own just fine: http://bit.ly/Mp4uu1

In addition, it's all about maintaining performance without giving up advanced data management features and tight application integration.

Like dragsters built to race in a straight line for a short period of time, there are arrays that are super-fast but not reliable, and/or lack rich data management features.

Flash is just another way to get more speed for some scenarios.

Where that flash is located can help, too.

I don't care what array you have at the back end, at some point it runs out of controller steam.

What if you could have 2000 application hosts each with their own augmented cache that is aware of what's happening in the back-end storage?

Would those 2000 hosts not have more aggregate performance potential than any array can handle?
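Back-of-the-envelope, with purely hypothetical numbers:

# 2000 hosts, each serving even a modest number of IOPS from a local,
# array-aware cache, dwarf any single array's ceiling (made-up figures)
hosts = 2000
cached_iops_per_host = 50_000      # hypothetical server-side flash cache
array_max_iops = 1_000_000         # hypothetical high-end array ceiling
print(hosts * cached_iops_per_host / array_max_iops)   # 100x the array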

D

CEO: NetApp revenues falling, but 'we aren't in a clampdown'

dikrek

It's not about file serving, it's about data serving

Hello all, Dimitris from NetApp here.

Indeed we invented unified storage and it has been thus for 11 years now. There's no extra box needed to do various protocols and replication - the same OS does it all. Putting a gateway in front of a storage system doesn't necessarily create a coherent solution (invariably the block and file bits end up having way too different capabilities).

There are actual customer benefits to true unified storage:

http://bit.ly/MTCktG and

http://bit.ly/MfPLyc

Reliability and performance are great.

By default there's protection against lost writes, torn pages and misplaced writes, in addition to standard dual-parity write-accelerated RAID.
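For readers wondering what protection against lost, torn and misplaced writes looks like in practice, here's the general idea (a generic sketch of per-block checksums plus location/version metadata - not NetApp's exact on-disk format):

import hashlib

def make_block(lba, generation, data):
    # store a checksum AND where/when the block was supposed to be written
    return {"data": data, "lba": lba, "gen": generation,
            "csum": hashlib.sha256(data).hexdigest()}

def verify_read(block, expected_lba, expected_gen):
    if hashlib.sha256(block["data"]).hexdigest() != block["csum"]:
        raise IOError("torn page / corrupt block")
    if block["lba"] != expected_lba:
        raise IOError("misplaced write: block landed at the wrong address")
    if block["gen"] != expected_gen:
        raise IOError("lost write: disk returned stale but valid data")
    return block["data"]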

No, it's more about a difference in philosophy. NetApp does few acquisitions and prefers to build in-house; the rest either OEM NetApp gear and/or rely heavily on acquisitions.

D

NetApp: Steenkin' benchmarks – we're quicker than 3PAR

dikrek

Explanation of 100% load

Hello all, Dimitris from NetApp here (the person that wrote the article Chris referenced).

It's here: http://bit.ly/Mp4uu0

Anyway, to clarify since there seems to be some confusion:

The 100% load point is a bit of a misnomer, since it doesn't really mean the arrays tested were maxed out. Indeed, most of the arrays mentioned could sustain a bigger workload at higher latencies; 3Par just decided to show what the IOPS at that much higher latency level would be.

The SPC-1 load generators are told to run at a specific target IOPS, and that target is chosen to be the load level, the goal being to balance cost, IOPS and latency.

So yes, it absolutely makes sense to look at the gear necessary to achieve the requisite IOPS vs latency, and the cost to do so.
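In other words, the reported load level is a chosen target, not a ceiling. A simplified sketch of picking such a target from measured points (hypothetical numbers; the real process is more involved):

# pick the highest target IOPS whose measured latency is still acceptable
measured = [(10_000, 1.2), (30_000, 2.0), (50_000, 3.5), (70_000, 13.0)]
latency_budget_ms = 5.0
target = max(iops for iops, lat in measured if lat <= latency_budget_ms)
print(target)  # 50000 - this becomes the reported "100% load" point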

Databases like their low latency.

And yes, all-SSD arrays will of course provide overall lower latency - usually though one can't afford a reliable, enterprise all-SSD array for a significant amount of storage. You think the prices listed are high? Think again.

What NetApp does with the combination of megacaching and WAFL write acceleration is provide a lot of the benefit of an all-flash architecture at a far more palatable cost (especially given all the other features NetApp has).

D

NetApp leapfrogs IBM in storage race for second place

dikrek

The NetApp numbers are even better than stated

D from NetApp here...

The IDC numbers don't count the tons of OEM NetApp boxes sold by many companies.

The numbers shown count only the NetApp-branded boxes sold by NetApp.

Once you add all the OEM business the percentage grows quite a bit.

D

Isilon Maverick flyboy in EMC World flyby

dikrek

NetApp does have a big SPEC SFS result for scale out

Hello all, D from NetApp here.

Check analysis here: http://bit.ly/K2FBz1

Unfortunately most people don't understand how to read the SPEC submissions, I hope my article clarifies it a bit.

D

Did EMC buy Xtremio to fend off NetApp

dikrek

Re: Isilon slow?

No, the highest published Isilon result is 1,112,705 SPEC SFS2008 NFS ops.

Not 1.6 million.

Linky:

http://bit.ly/s1IFH6

dikrek

Everyone is a WAFL expert all of a sudden... :)

J.T. - I don't think you quite understand how ONTAP works. It doesn't write randomly - it actually takes great care in meticulously selecting where writes will go. When writing to flash, we don't need to be quite as meticulous. But for HDD it helps a lot.

In addition, there are important technologies like read reallocate, which will sequentialize, upon read, data that was written randomly.

At a block level.

Amazing for databases - frequently people will write randomly into the DB, and then a job needs to read the data back sequentially (sequential read after random write).

I'm not aware of any other disk array that will do this optimization for the end user, and leave the blocks optimized for the future (this has nothing to do with caching and readahead).
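A toy illustration of the idea (nothing like the real WAFL implementation):

# read reallocate, conceptually: a logically-sequential read that hits
# physically-scattered blocks triggers a contiguous rewrite, so the
# NEXT sequential read of the same data needs no seeks
layout = {0: 901, 1: 17, 2: 544, 3: 203}   # logical block -> physical block

def reallocate_on_sequential_read(layout, free_extent_start):
    for i, lbn in enumerate(sorted(layout)):
        layout[lbn] = free_extent_start + i
    return layout

print(reallocate_on_sequential_read(layout, 2000))
# {0: 2000, 1: 2001, 2: 2002, 3: 2003} - future reads are now sequential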

Not to mention insane new stuff coming in the next few months.

Unfortunately, way too many people think ONTAP is still where it was 20 years ago. Or maybe fortunately, in the case of competitors, since it's so easy to discredit them... :)

The write allocator has been rewritten multiple times in the last 20 years :)

Not to mention everything else, including the entire kernel.

Very relevant with respect to competitor documentation - I often see stuff from them that was maybe a little valid 10 years ago, especially from the smaller vendors that can't afford the resources necessary to understand other people's gear.

This is IT, folks. I'd argue that if you stop intimately understanding a technology for more than 2 years, your knowledge of it is completely obsolete, to the point of being dangerously so.

Here's some fun reading:

http://bit.ly/IVr0Xy

D

dikrek

Apples and Oranges

Hi all, Dimitris from NetApp here.

@Nate:

Indeed, ONTAP running in Cluster-Mode and Isilon aren't really designed in similar ways, as I'm explaining towards the end of the article here: http://bit.ly/uuK8tG

Isilon will be strong for high throughput, yet there are other ways to get much higher throughput: http://bit.ly/zqqJ3G

Ultimately, there is no single system that "does it all" - meaning one that gives you all the protocols, and dedupe, and fancy app integration, and fancy snaps, and fancy replication, and can stripe a volume across umpteen controllers at low latency for random I/O.

If you want to run DBs or VMs at low latency and take advantage of all the cool features mentioned above and be able to get some flexibility with the cluster for migrations and upgrades at zero downtime, you'd be better off using ONTAP Cluster-Mode than Isilon, SONAS, etc.

If you have a workload that needs to be in a single gigantic container bigger than 100TB, with extremely high throughput requirements (not IOPS), and can't break that workload into chunks smaller than 100TB, then, for now, there are alternative solutions.

Remember that ONTAP Cluster-Mode is designed to be used in general-purpose scenarios.

D

IDC Storage Tracker: NetApp is losing market share

dikrek

It helps to understand where the numbers are coming from

Hi All, Dimitris from NetApp here.

According to IDC figures, if one looks at the Open Systems Networked Storage marketshare (where NetApp plays, please let's discount storage sold inside servers), NetApp remains #2 after EMC, even though EMC has a plethora of products that are counted yet are not general-purpose storage appliances.

The NetApp share was 14.5% according to IDC, up from 13.3% a year ago.

That’s not called “losing market share”.

Ant Evans got it right. What the figures don’t show is that IDC doesn’t count the ton of NetApp OEM product resold by the likes of IBM, Oracle, Dell, and more. Instead, those sales show up as IBM, Oracle and Dell sales.

Once that’s taken into account, the NetApp share grows to over 21%, or a very very big bump up indeed.

“Lies, damn lies and statistics”…

EMC is still higher at 31.9%, but the gap is narrowing…

Oh, and according to IDC figures, NetApp is #1 in the replication software marketshare, ahead of EMC. 32.7% vs 31.4%. Close, but if one considers the relative size of the 2 companies, and how many EMC products are counted in that percentage, the NetApp figure is even more impressive.

Thoughts?

Thx

D

Fusion-io demos billion IOPS server config

dikrek

I understand how flash works...

However, 1 billion IOPS at 64 bytes will not translate to 1 billion IOPS at 8K, so something like SQL won't run at 1 billion IOPS on that config.

It has nothing to do with whether it's disks or not.

dikrek

Did anyone miss that they were doing 64-byte I/O?

Hello all, D from NetApp here.

Bear in mind that they were doing 64-byte I/O transfers.

Typical apps, especially DBs, operate at 4K+ (mostly 8K+ lately).

Probably not a billion IOPS in that case then... :)
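The back-of-the-envelope math, generously assuming the config is purely bandwidth-limited:

# 1 billion 64-byte ops/s saturates a certain bandwidth; the same
# bandwidth at 8K per op supports 128x fewer ops
small_iops, small_size, db_io_size = 1_000_000_000, 64, 8192
bandwidth_bytes = small_iops * small_size     # 64 GB/s
print(bandwidth_bytes / db_io_size)           # ~7.8 million 8K IOPS - not a billion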

D

iPhone 4S is for failures who work in coffee shops - Samsung

dikrek

the best device is the one you like using

I was a Blackberry user for years, then moved to the iPhone.

Plenty of things annoy me about the iPhone but overall I like it far more than the 'berry.

I tried other stuff, including Android, and it was missing something FOR ME.

Either the build wasn't good, or the screen wasn't nice, or the phone was too big, or calls weren't clear enough - I couldn't find an Android device I enjoyed using OVERALL.

Not about being a conformist, on the contrary, I love supporting the underdog, as long as the underdog has something special that can enhance my day by day life.

I think Microsoft may have a chance against the iPhone with the new UI paradigm in Windows Phone 7.5 (Mango); it's a fresh approach.

I'll let fanbois be fanbois and go back to my 4S.

I upgraded from a 3GS (that I used the hell out of and it still looks like new) and I really like the new phone.

No battery issues, I get easily 2x the life of my 3GS.

The screen is incredible.

The graphics speed is way better than any other phone out there (for the Android fanbois: why don't you check out some benchmarks and some actual games?).

The upgrade process was as seamless as possible, all automated.

It's not just the device - it's everything supporting the device.

The only thing I'd change?

I still don't like the fact that it's essentially 2 pieces of glass with a metal spacer in between.

Unless you have a case it will break with a high degree of certainty if it falls on something hard.

D

NetApp accused of short-stroking its new hardness

dikrek

Isilon short-stroked more

D from NetApp here...

Please everyone - you can go ahead to the spec.org site and read both submissions. Tons of detail without he-said-she-said theatricals.

NetApp: http://bit.ly/utDOQR

Isilon: http://bit.ly/s1IFH6

Isilon used about 1/7th of all available space (out of over 800TB total).

NetApp used about 1/3rd of all available space (out of over 500TB total).

All clearly stated in the official submissions.

Do the math.
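Or, spelled out with the round fractions above:

# rough space exercised by each submission
print(864 / 7)    # Isilon: ~123TB used out of 864TB (~14%)
print(500 / 3)    # NetApp: ~167TB used out of 500+TB (~33%)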

Anyway, the point of the NetApp architecture is primarily that it's a general-purpose unified storage system, with the ability to have up to 24 nodes clustered together.

It's not a niche architecture like the scale-out NAS vendors'.

As a result, there are a couple of things the niche scale-out NAS architectures can do that NetApp can't, and about 100 things NetApp can do that the niche scale-out NAS vendors can't.

Deciding on what you need for your business depends on what features you need.

Read here for detailed analysis: http://bit.ly/uuK8tG

Thx

D

NetApp punches Isilon right in the scaled-out clusters

dikrek

No, we clustered 12x 6240 HA systems, not 24!

The 6080 in the submissions is a 2-controller system.

The 24 nodes come in 12 pairs.

So 12 systems getting 10x the performance is pretty darn good scalability for a scale-out architecture whose systems are connected over a back-end cluster network, which itself imposes some drop in performance.

Is this clearer?

D

dikrek

Regarding the number of mount points

Looking at the Isilon submission:

"Uniform Access Rule Compliance

Each load-generating client hosted 64 processes. The assignment of processes to network interfaces was done such that they were evenly divided across all network paths to the storage controllers. The filesystem data was striped evenly across all disks and storage nodes."

How are the 64 processes per client being generated in your opinion? Single mount point per client?

dikrek

sure it's scale-out

Just not the exact same way Isilon does it. Doesn't necessarily tackle the same problems, either.

What do you mean we can't do "real quotas, replication or snapshots"? Those are things more deeply ingrained into the NetApp DNA than on any other platform :)

dikrek

It helps to understand the details

Hi Nate,

Short-stroking?? Isilon only used about 128TB of the 864TB available! Go to my post for more details. In general, exported vs usable means nothing performance-wise for NetApp due to the way WAFL works.

Isilon doesn't do typical RAID either - protection is per file - so your observations are not quite correct. More education is needed to understand both architectures; they are as different from each other as they could possibly be... and don't solve all the same problems.

Anyway - not sure what your affiliation is but if you're on the customer side of things you should be happy we are all trying to make faster, more reliable and more functional boxes for y'all! :)

D

dikrek

It's important to realize it's really one cluster

Hi all, D from NetApp.

It's crucial to realize that the NetApp submission is one legit cluster, with data access and movement capability all over the cluster and a single management interface - NOT NAS gateways on top of several unrelated block arrays.

See here for a full explanation, towards the end: http://bit.ly/uuK8tG

D

NetApp scores video benchmark wins

dikrek

I'm so misunderstood :)

We agree in general. The more "stuff" an array does, the more overhead there can be for some operations.

The simpler an array is, the easier it is to do some operations.

It's all a matter of tradeoffs, as the vendors that hitherto had "simple" arrays now realize how difficult it is to maintain high performance and add all the "stuff". Notice all the performance caveats from a certain large storage vendor when someone wants to implement autotiered storage pools with thin provisioning.

My statement remains, let me clarify:

You can get high sequential speeds out of FAS but it will cost a lot more to get them than doing it with Engenio.

And indeed that's one of the reasons we bought Engenio. A lot of environments don't need the extra "stuff" FAS has to offer, and just need a simpler box that is less expensive and can read/write sequentially very very quickly.

And, as a counterpoint, I still maintain that, given the amount of "stuff" FAS can do, the performance is unmatched.

It all depends on what you're trying to do.

dikrek

NetApp E-series is simply optimized for sequential I/O

Hey Chris, D from NetApp here...

Regarding your comment: "El Reg's understanding is that FAS ONTAP arrays do not have the sheer data access speed needed by such heavy and scale-out filer workloads."

Far from it - the ONTAP arrays have plenty of speed, but the Engenio-based E5460, for the money, pretty much smokes anything out there for sequential I/O. Note the "for the money".

I think there are faster boxes for sequential I/O than the E5460 but not at that density and cost, which is what makes it compelling.

IBM's unified V7000 will hook up with just about anything

dikrek

architecture does matter

Management is not the same as architecture.

Let me illustrate a point:

With gateway-based NAS boxes (of which VNX is one), the gateway is effectively a server that is given LUNs by the block controllers and keeps those LUNs in perpetuity (since striped filesystems are laid on said LUNs, you can't reduce their size, and you can't thin the LUNs themselves).

So, let's say you get a VNX and allocate out of the storage pool 50TB to NAS and 50TB to block.

You can't reduce the 50TB you gave the NAS unless you are willing to destroy all of it and re-provision.

I've had customers in exactly that scenario, they'd purchased a ton of disk, allocated it to the NAS heads, realized they didn't need all that capacity, but weren't able to recover the disk.

In a TRULY unified system, the box (a single box) does it all, and doesn't care whether it's NAS or block, so you can very fluidly allocate stuff.

D

dikrek

It seems there is some confusion about what "unified" means

Dimitris from NetApp here...

At NetApp, "unified" means that the exact same controller, UNDER A SINGLE STORAGE OS, does ALL the following:

- RAID

- FC

- iSCSI

- CIFS

- NFS

- Replication

- Compression

- Deduplication

- Mega-caching

...and more.

Clearly we've had tremendous success with this architecture, to the point that all the other vendors that plonk a NAS gateway on top of a separate block storage device, plus add replication appliances, are now also calling that approach "unified".

So "unified" seems to mean "in the same rack" for some folks.

D

HP boasts of 3PAR benchmark boost

dikrek

Please learn how to interpret $/IOP for SPC-1

D from NetApp here.

3Par's $6.50/IOPS is after the 50% discount; other vendors state list prices. So please look at the full disclosure on the SPC page to understand the pricing, and calculate $/IOPS based on list price to keep it apples to apples.

Even though list pricing means less than nothing these days...

Good result overall, even if almost 2,000 disks were needed. It would be interesting to see it with RAID6, to provide a level of protection and efficiency similar to NetApp's :)
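To normalize (using the figure above, and assuming list is simply double the 50%-discounted price):

# convert a discounted $/IOPS figure back to list price
discounted = 6.50               # $/IOPS after the 50% discount
list_price = discounted / (1 - 0.50)
print(list_price)               # $13/IOPS at list - THAT is the apples-to-apples number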

Oracle to NetApp: 'I'm a faster, cheaper storage lover'

dikrek

Understanding how to read the SPC-1 submissions is important...

Hi all, Dimitris from NetApp here (www.recoverymonkey.org).

A few facts:

1. Like most other manufacturers, Oracle used RAID10 for the submission, and 2.5 times the number of drives NetApp used (and tons more cache and SSD etc etc but I digress).

2. The Oracle prices included a discount.

3. NetApp always uses RAID-DP (protection equivalent to RAID6, meaning better than RAID10). Good space efficiency and best protection.

4. Apples-to-apples would be if Oracle used RAIDZ2 (similar protection to RAID6 and RAID-DP).

5. In the write-heavy SPC-1 benchmark (over 60% writes, for the person that asked before), RAID6 performs a LOT slower than RAID10. Which explains why most vendors choose not to show those results.

6. The NetApp submission is a midrange controller - we have 3 (three) boxes progressively faster than the one in the SPC-1 submission... :)

But I admit, the headline is attention-grabbing.

Thx

D

NetApp gloats over storming fourth quarter

dikrek

NetApp keeps delivering

Hi all, D from NetApp here (recoverymonkey.org).

Thanks for the article.

And to Mr. Anonymous, who thinks it's a tough act to follow:

Competitors have been saying that about our earnings for years; "They'll never do that again next year!" is an oft-repeated phrase.

However, NetApp keeps growing a lot year over year.

We must be doing something right...

D

Pillar pillages SPC-1 benchmark

dikrek

You can't compare RAID6/RAID-DP and RAID10

When comparing NetApp numbers to any other vendor, you need to be aware of the fact that nearly all other vendors benchmark with RAID10, yet NetApp sticks to RAID-DP (mathematically the same protection as RAID6).

RAID-DP/RAID6 have better protection than RAID10.

So, when comparing, kindly ask the other vendors to show numbers with RAID6, otherwise you are comparing RAID10 (extreme space inefficiency, good performance, good protection) to RAID-DP (good space efficiency, good performance, best protection).
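The space efficiency gap is easy to quantify (a hypothetical 16-disk group, ignoring spares and formatting overhead):

# usable fraction of raw capacity for a 16-disk group
disks = 16
raid10_usable = (disks / 2) / disks        # mirroring: 50%
raid_dp_usable = (disks - 2) / disks       # two parity disks: 87.5%
print(raid10_usable, raid_dp_usable)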

D

dikrek

The NetApp result is all about efficiency

Mr. IO-IO...

I encourage you to read the full disclosure from each vendor, so you can better understand how things are tested for SPC-1.

NetApp tries to get the most IOPS with the LEAST number of disks.

So, the FAS3270 got 68K IOPS with effectively RAID6 and only 120 disks.

Pillar got about the same IOPS with over 300 disks and RAID10.

The old NetApp 3170 got 60K IOPS with 224 disks and more latency.

So, the 3270 got more IOPS with about half the disks.
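Per disk, the numbers above work out to:

# IOPS per disk for the three results above
print(68_000 / 120)   # FAS3270, RAID-DP:            ~567 IOPS/disk
print(68_000 / 300)   # Pillar, RAID10, 300+ disks:  under ~227 IOPS/disk
print(60_000 / 224)   # older FAS3170:               ~268 IOPS/disk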

I kinda call that improvement :)

D

dikrek

NetApp does have a recent result

Hi all, D from NetApp here (www.recoverymonkey.org).

The NetApp SPC-1E result is the same as SPC-1, just with extra calculations for energy efficiency - otherwise it's the exact same benchmark.

So here's the link:

http://www.storageperformance.org/benchmark_results_files/SPC-1E/NetApp/AE00004_NetApp_FAS3270A/ae00004_NetApp_FAS3270A_SPC1E_executive-summary.pdf

or a bit.ly shortened one:

http://bit.ly/beR5z3

So NetApp got 68K IOPS with only 120 disks, with the disks 84% full, using RAID-DP.

Far better space efficiency than any other vendor in the benchmark (do the math).

D

NetApp becomes Quantum reseller

dikrek

Do you even know what cluster mode does?

Mr. Anonymous Coward...

Dimitris from NetApp here.

NetApp hosts the biggest workloads around (NAS or SAN), and scales just fine. StorNext is NOT scale-out NAS... instead, it's a block-based cluster FS for very specific applications. Not a general-purpose solution.

FYI, Isilon and other scale-out solutions are not for all workloads - scale-out storage doesn't necessarily mean scale-out random-write small-block I/O, for example. But it is usually good for reads, especially of large files, and makes single large namespaces easy to achieve.

You can't even begin to compare how many customers NetApp and Isilon have respectively. NetApp is one of the top storage manufacturers and makes general-purpose storage, Isilon has a few niche customers and makes scale-out NAS.

D

BlueArc puts replication on a diet

dikrek

Yesterday's technology today!

D from NetApp...

NetApp has had thin replication - sending only the actual changed blocks - since it was first launched many years ago.

It is indeed a very useful feature, and good for BlueArc that they finally have it.

D

EMC races to catch up with NetApp

dikrek

EMC also does block emulation

In EMC documentation it is CLEARLY STATED that the new pools (mandatory to use the new features) rely on a filesystem on top of RAID groups.

Which is exactly how NetApp also works.

There's no "native block" - all modern arrays virtualize most things to the extent that the statement makes zero sense. See here: http://blogs.netapp.com/efficiency/2010/10/more-questions-than-answers-emulated-luns.html

D

dikrek

There are more things missing from EMC's portfolio

Dedupe is not the only gap.

Zero-impact snapshots and clones, ultra-granular snaps and caching and low-impact RAID6 are other hugely important technologies that differentiate NetApp from the rest.

Oh - and truly unified hardware.

D

NFS smackdown: NetApp knocks EMC out

dikrek

Small correction - 3270 was not using Flash Cache :)

Hi, Dimitris from NetApp here.

The 3270 wasn't using Flash Cache at all, we just wanted to have a result without it.

The 3210 and 6240 both were, and it helps with latency a lot as you can see.

What is also important: All the NetApp configs were a fraction of the price of the EMC ones and provided many times the amount of space.

More details here http://bit.ly/awIYXz and here http://bit.ly/bJZpRD

D
