Feeds

* Posts by SPGoetze

11 posts • joined 20 Oct 2009

NetApp gives its FAS range a 4 MILLION IOPS dose of spit'n'polish

SPGoetze

Re: Software

Well, *Data ONTAP* offers only RAID-DP (and RAID 4, and RAID 0 on V-Series, and all of them mirrored, if you'd like). The ~2% performance penalty (vs. buffered RAID 4) should cost you less than one additional disk's worth of performance...

If you want performance with less Data Management Overhead, compare with the NetApp E-Series. It offers a variety of RAID schemes, plus 'Dynamic Disk Pools' (RAID-6 8+2 disk slices) which dramatically reduce rebuild time and impact.
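
To make the capacity trade-off concrete, here's a minimal sketch comparing parity overhead across the schemes mentioned; the group sizes are illustrative defaults I'm assuming, not mandated values:

```python
# Parity overhead for a few RAID layouts (group sizes are assumptions).

def parity_overhead(data_disks: int, parity_disks: int) -> float:
    """Fraction of raw capacity consumed by parity."""
    return parity_disks / (data_disks + parity_disks)

layouts = {
    "RAID 4 (13+1)":          (13, 1),
    "RAID-DP (14+2)":         (14, 2),
    "DDP slice (RAID-6 8+2)": (8, 2),
}

for name, (d, p) in layouts.items():
    print(f"{name}: {parity_overhead(d, p):.1%} of raw capacity is parity")
```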

Or get a hybrid ONTAP config (FlashCache / FlashPool). I haven't seen a NetApp config that needed extra spindles just for performance in a long time...

SPGoetze

Re: Software

Hmmm, seeing that RAID-DP was introduced with ONTAP 6.5 a good 10 years ago and only became the default in ONTAP 7.3 (2009), I can't quite follow your reasoning.

I DO follow the reasoning that being protected while rebuilding a disk (something RAID 4/5/10 doesn't provide) at a <2% performance penalty (which you probably won't notice) is something you should do by default.

Scalability, the way I understand it, was provided with the advent of 64-Bit Aggregates in ONTAP 8.x, especially 8.1+.

SPGoetze

Re: 4 million IOPS probably not too far off the mark...

I'm fairly sure they arrived at the 4 million IOPS number by taking the old 24-node 6240 SPEC SFS2008 result (https://www.spec.org/sfs2008/results/res2011q4/sfs2008-20111003-00198.html) and factoring in the increase in controller performance.

The old result had 3 shelves of SAS disks per controller, so no unrealistically expensive RAID 10 SSD config was required, as with other vendors' results. Also, 23 of 24 accesses were 'indirect', meaning they had to go through the 'cluster interconnect'. pNFS would have improved the results quite a bit, I'm sure.

The old series of benchmarks (4-node, 8-node, ... 20-node, 24-node) also showed linear scaling, so unless you'd saturate the cluster interconnect - which you can calculate easily - the 4 million IOPS number should be realistic for a fairly small (per-controller) configuration. Real-life configs (e.g. SuperMUC https://www.lrz.de/services/compute/supermuc/systemdescription/ ) will probably always use fewer nodes and more spindles per node.
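
In that spirit, here's a rough back-of-envelope sketch of the interconnect check. Every number below (average op size, two 10GbE cluster ports per node) is an illustrative assumption, not a NetApp spec:

```python
# Back-of-envelope: does 4M IOPS saturate a 24-node cluster interconnect?
# All inputs are illustrative assumptions, not NetApp specifications.

nodes          = 24
total_iops     = 4_000_000     # the headline number
indirect_frac  = 23 / 24       # evenly striped access: 23 of 24 ops land remote
avg_op_bytes   = 8 * 1024      # assumed average NFS payload per op
links_per_node = 2             # assumed 2x 10GbE cluster ports per node
link_gbps      = 10

remote_bytes_per_s = total_iops * indirect_frac * avg_op_bytes
per_node_gbps      = remote_bytes_per_s * 8 / nodes / 1e9

print(f"~{per_node_gbps:.1f} Gbit/s of interconnect traffic per node, "
      f"vs {links_per_node * link_gbps} Gbit/s of cluster ports")
# -> ~10.5 Gbit/s vs 20 Gbit/s: headroom left, so linear scaling is plausible
```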

Storage rage: Like getting a nice steak and being told to only eat 80% of it

SPGoetze

Re: Very odd article...

The analogy with the storage unit fits well:

- If you're not constantly changing what's in there, you'll be fine with a unit that barely fits your stuff.

- But if you're constantly bringing in new stuff and trying to swap it for stuff that's all the way in the back, you'll wish you had paid for a bigger unit...

Is Object storage really appropriate for 100+ PB stores?

SPGoetze

800 PB...

The biggest StorageGRID installation I've heard about is 800 PB on 3 continents. I forgot how many sites, but it was quite a few... so apparently some people DO put more than 100 PB in one namespace.

Chatter foreshadows major NetApp ONTAP refresh

SPGoetze

Sloppy research...

"ONTAP 8.1 could unify the separate 7G and 8.01 strands of ONTAP, and position NetApp to provide more scale-out features to cope with larger data sets."

Now really, who would want to unify those two??

8.0.x comes in two flavours: 7-Mode (ex-7G) and Cluster-Mode (ex-GX). Same code-base, different environment variables to tell the system in which mode to start.

More scale-out? Cluster-Mode already gives you up to 28 PB (on 24 nodes) in a single namespace. I teach NetApp courses, and two weeks ago I talked to a student who will be setting up a new 5 PB system in the coming weeks...

(8.0.2RC1 already supports 3TB drives, which will proportionally increase the above numbers...)
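
The proportional scaling is simple arithmetic; assuming the 28 PB limit is based on 2TB drives (my assumption, not stated anywhere):

```python
# Namespace limit scaling with drive size (old/new sizes are assumptions).
limit_pb = 28
old_tb, new_tb = 2, 3
print(f"{limit_pb} PB -> {limit_pb * new_tb / old_tb:.0f} PB")   # 28 PB -> 42 PB
```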

"We might expect more use to be made of flash memory devices, something around data storage efficiency, and clustering improvements."

More flash? The 6280A already supports 8TB of PCIe flash. With 8.0.2 (RC already publicly available) this will increase to 16TB... SSD shelves have been supported for about a year already.

Storage Efficiency? Compression is available as of 8.0.1 (7-Mode). You can even combine it with deduplication where it makes sense.

No, 8.1 is simply the convergence of 8.0.x 7-Mode and Cluster-Mode with added benefits.

NFS smackdown: NetApp knocks EMC out

SPGoetze

Didn't the benchmarks prove otherwise??

"Your cache becomes a bottleneck, because it isn't large enough to handle the competing storage access demands.

Better to build the whole array out of Flash, and (if necessary) use on-controller RAM for cache, methinks, than to use a flash cache with mechanicals hanging off the tail-end. "

The FlashCache on the NetApp system was only a fraction of the dataset used in the benchmark (~4.5% of the dataset, ~1.2% of the exported capacity), yet it delivered 172% more IOPS at roughly half the latency (198% faster). The system also had 488% of the exported capacity, at a fraction of the cost of the all-SSD system...
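
Those percentage comparisons are easier to follow as formulas. A sketch with placeholder inputs (NOT the real SPEC SFS2008 submission figures) shows how such ratios are derived:

```python
# Deriving "X% more IOPS" / "X% faster" / "X% of capacity" ratios.
# Inputs are placeholders, not the actual SPEC SFS2008 results.

hybrid  = {"iops": 190_000, "ms": 1.2, "tb": 85}   # hypothetical FlashCache system
all_ssd = {"iops":  70_000, "ms": 2.4, "tb": 17}   # hypothetical all-SSD system

more_iops   = hybrid["iops"] / all_ssd["iops"] - 1   # "% more IOPS"
faster      = all_ssd["ms"] / hybrid["ms"] - 1       # "% faster" (latency ratio)
capacity_of = hybrid["tb"] / all_ssd["tb"]           # "% of exported capacity"

print(f"{more_iops:.0%} more IOPS, {faster:.0%} faster, "
      f"{capacity_of:.0%} of the exported capacity")
# -> 171% more IOPS, 100% faster, 500% of the exported capacity
```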

As opposed to the 'large sequential' workload, where predictive cache algorithms can really shine, I see only one use case for SSDs:

Unpredictably random read workloads, where the whole dataset fits into the SSD-provided space. I know of one such application; NetApps with SSD shelves were deployed there after extensive testing of multiple vendors' solutions.

EMC buys Isilon for $2.25bn

SPGoetze

Cluster-Mode

"that their systems are different in kind from mainstream clustered filers such as NetApp's FAS series. NetApp would disagree of course."

I guess NetApp would agree, actually. And sell you an ONTAP 8 Cluster-Mode system: 24 nodes, 28 PB capacity. If you need more, there's always NetApp's StorageGRID...

NetApp adds SSDs and 2.5-inch drives

SPGoetze

Automatic sub-volume-level movement??? Nonsensical on NetApp!

"The hints are strong that Data Motion is going to get automated and, hopefully, there will be sub-volume level movement to provide more efficient control of hot data placement, as NetApp's competitors do."

Now that makes no sense whatsoever on a NetApp. If only part of a volume is 'hot', the second read access would already find it in (Flash-)Cache. Writes are always acknowledged to clients as soon as they're safe in NVMEM, so no speeding up is necessary there. They also land in cache automatically, so the next read would find them there.
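
The read side of that argument boils down to a plain read-through cache: only the first access misses. A generic sketch, nothing NetApp-specific:

```python
# Minimal read-through cache: the first read misses and populates the
# cache; every subsequent read of the same block is a hit.

class ReadCache:
    def __init__(self):
        self.cache = {}

    def read(self, block_id, backend):
        if block_id in self.cache:          # second and later accesses
            return self.cache[block_id], "hit"
        data = backend(block_id)            # first access: go to disk
        self.cache[block_id] = data
        return data, "miss"

disk = lambda b: f"data-{b}"
c = ReadCache()
for _ in range(3):
    print(c.read(7, disk))   # ('data-7', 'miss'), then ('data-7', 'hit') twice
```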

98% of use cases are transparently accelerated by Flash-Cache. You can put up to 8TB in a NetApp HA pair...

The other 2% (I know of one case) can actually benefit from SSDs. But then it's a *whole* dataset that needs to be fast *all* the time, from the *first* access (because there will probably be no second any time soon), and you wouldn't want to risk having to warm up the cache after a controller failover. That dataset would go to SSD as a whole, not 'sub-volume'...

I can see lots of use cases for automatic DataMotion, but (sub-volume) 'FAST' isn't one of them.

Disclosure: I'm an NCI, a technical instructor teaching people (among other things) how to use NetApp controllers. I like their technology, but I'm not an employee...

3PAR zeroes in on wasted space

SPGoetze

Deduplicate Zeros & Most Mature?

I'd agree with 'most mature *hardware-based* thin provisioning'.

But if you compare the features described in the article with what NetApp offers in their boxes, it's only a fraction. For example:

- specific space guarantees for volumes, LUNs, files

- volume autogrow

- snapshot (operational backup) autodelete

- deduplicating ANY 4K blocks of data (not just zeroes)

With all these safety measures built in, it's perfectly safe to use Thin Provisioning, provided you do a little monitoring, too.
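
As for the "little monitoring", here's a hedged sketch of what such a check might look like; the thresholds and values are made up, and in practice they would come from your own tooling:

```python
# Thin-provisioning watchdog sketch: warn before the autogrow/autodelete
# safety nets have to kick in. Thresholds are illustrative assumptions.

def check_aggregate(name: str, used_tb: float, size_tb: float,
                    warn_at: float = 0.80, act_at: float = 0.90) -> str:
    usage = used_tb / size_tb
    if usage >= act_at:
        return f"{name}: {usage:.0%} used - add capacity or free space NOW"
    if usage >= warn_at:
        return f"{name}: {usage:.0%} used - plan more capacity"
    return f"{name}: {usage:.0%} used - OK"

print(check_aggregate("aggr1", used_tb=34, size_tb=40))   # 85% -> plan ahead
```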

I agree with Barry: software is a lot more flexible, and these days pretty fast...

Scale-out SVC on the way from IBM?

SPGoetze

FCoE has been an approved standard since June 3, 2009

You said: "There are standardisation efforts with FCoE and data centre Ethernet afoot."

'On Wednesday June 3, 2009, the FC-BB-5 working group of T11 completed the development of the draft standard and unanimously approved it as the final standard. '

See also: http://www.fibrechannel.org/component/content/article/3-news/158-fcoe-standard-alert-fcoe-as-a-standard-is-official-as-stated-by-the-t11-technical-committee
