12 posts • joined 20 Oct 2009
I wonder why NetApp is often considered a 'latecomer'...
The PAM card (16GB DRAM based Read Cache in a PCI slot) debuted in 2008 (ONTAP 7.3)
Flash Cache (called "PAM II" back then), which was NAND Flash, came in 2009. (ONTAP 7.3.2)
SSDs as disks were supported since ONTAP 8.0.1 in 2010. I've had students in my class who were happily running "All-Flash FAS" back then already. (SSDs for the root aggregate were supported later that year)
Not much had to change for Data ONTAP to support SSDs efficiently, since WAFL had been treating disks much like SSDs since 1992: delayed, coalesced writes; spreading writes across the whole system (the proverbial "Write Anywhere File Layout", WAFL); delayed garbage collection; background file-layout optimization ('reallocate'); and so on. Only now are other storage systems starting to do the same things, as a side effect of adapting to SSDs.
It's just that Marketing was late to the show and coined the term AFF (All-Flash FAS) fairly recently.
Same story with the EF-Series, by the way. Customers had been ordering E-Series systems with only SSDs already for a while, before the marketeers came up with the "EF" moniker.
By now NetApp has shipped more than 111PB of Flash.
Doesn't look like a Latecomer/Newcomer to me.
To me FlashRay is simply the new 'future-proof' OS base, for when spinning disk fades away.
It doesn't seem to me that NetApp is under much pressure to ship something 'flashy', since the available offerings already cover a very broad base: the EF-Series for raw performance with storage management left to the application, and AFF for integration into existing NetApp FAS environments with rich on-box management features. I see the limited FlashRay release more as a preview of things to come for selected stakeholders. I actually prefer the 'it's ready when it's ready' approach...
(disclosure: I'm a NetApp Certified Instructor, but I work for an independent Technical Training Company, not for NetApp)
Well, *Data ONTAP* offers only RAID-DP (and RAID 4, and RAID 0 (V-Series), and all of them mirrored, if you'd like). The 2% performance penalty (vs. buffered RAID-4) should be less than one additional disk...
If you want performance with less Data Management Overhead, compare with the NetApp E-Series. It offers a variety of RAID schemes, plus 'Dynamic Disk Pools' (RAID-6 8+2 disk slices) which dramatically reduce rebuild time and impact.
Or get a hybrid ONTAP config (Flash Cache / Flash Pool). I haven't seen a NetApp config in a long time that needed extra spindles just for performance...
Hmmm, seeing that RAID-DP was introduced with ONTAP 6.5 a good 10 years ago and only became the default in ONTAP 7.3 (2009), I can't quite follow your reasoning.
I DO follow the reasoning that being protected while rebuilding a disk (something RAID 4/5/10 doesn't provide), at a performance penalty of less than 2% (which you probably won't notice), is something you should have on by default.
Scalability, the way I understand it, was provided with the advent of 64-Bit Aggregates in ONTAP 8.x, especially 8.1+.
Re: 4 million IOPS probably not too far off the mark...
I'm fairly sure they arrived at the 4 million IOPS number by taking the old 24-node FAS6240 SpecNFS results (https://www.spec.org/sfs2008/results/res2011q4/sfs2008-20111003-00198.html) and factoring in the increase in controller performance.
The old result used 3 shelves of SAS disks per controller, so no unrealistically expensive RAID 10 SSD config was required, unlike with other vendors' results. Also, 23 out of 24 accesses were 'indirect', meaning they had to go through the cluster interconnect. pNFS would have improved the results quite a bit, I'm sure.
The old series of benchmarks (4-node, 8-node, ... 20-node, 24-node) also showed linear scaling, so unless you saturate the cluster interconnect - which you can calculate easily - the 4 million IOPS number should be realistic for a fairly modestly sized (per-controller) configuration. Real-life configs (e.g. SuperMUC https://www.lrz.de/services/compute/supermuc/systemdescription/ ) will probably always use fewer nodes and more spindles per node.
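That extrapolation is simple enough to sketch. The per-node throughput and the controller speed-up factor below are assumed, illustrative values, not figures taken from the actual SPEC submission:

```python
# Back-of-the-envelope check of the linear-scaling extrapolation.
# Per-node ops/s and the controller speed-up are ASSUMED values,
# not numbers from the published SPEC result.

def projected_iops(per_node_ops, nodes, controller_speedup):
    """Linear scaling across nodes, times a newer-controller speed-up."""
    return per_node_ops * nodes * controller_speedup

# Assumption: ~63,000 ops/s per node in the old 24-node result,
# and a ~2.6x faster current controller generation.
print(projected_iops(63_000, 24, 2.6))  # ~3.9 million
```

If the cluster interconnect doesn't saturate first, linear scaling makes this kind of projection a one-line multiplication.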
Re: Very odd article...
The analogy with the storage unit fits well:
- If you're not constantly changing what's in there, you'll be fine with a unit your stuff barely fits into.
- But if you're constantly bringing in new things and trying to swap them for stuff that's all the way in the back, you'll wish you had paid for a bigger unit...
The biggest StorageGrid installation I've heard about is 800 PB on 3 continents. I forgot how many sites, but it was quite a few... so apparently some people DO put more than 100 PB in one namespace.
"ONTAP 8.1 could unify the separate 7G and 8.01 strands of ONTAP, and position NetApp to provide more scale-out features to cope with larger data sets."
Now really, who would want to unify those two??
8.0.x comes in two flavours: 7-Mode (ex-7G) and Cluster-Mode (ex-GX). Same code-base, different environment variables to tell the system in which mode to start.
More scale-out? Cluster-Mode already gives you up to 28 PB (on 24 nodes) in a single namespace. I teach NetApp courses, and two weeks ago I talked to a student who will be setting up a new 5 PB system in the coming weeks...
(8.0.2RC1 already supports 3TB drives which will proportionally increase the above numbers...)
"We might expect more use to be made of flash memory devices, something around data storage efficiency, and clustering improvements."
More flash? The 6280A already supports 8 TB of PCI Flash. With 8.0.2 (the RC is already publicly available) this will increase to 16 TB... SSD shelves have been supported for about a year already.
Storage Efficiency? Compression is available as of 8.0.1 (7-Mode). You can even combine it with deduplication where it makes sense.
No, 8.1 is simply the convergence of 8.0.x 7-Mode and Cluster-Mode with added benefits.
Didn't the benchmarks prove otherwise??
"Your cache becomes a bottleneck, because it isn't large enough to handle the competing storage access demands.
Better to build the whole array out of Flash, and (if necessary) use on-controller RAM for cache, methinks, than to use a flash cache with mechanicals hanging off the tail-end. "
The Flash Cache on the NetApp system was only a fraction of the dataset used in the benchmark (~4.5% of the dataset, ~1.2% of the exported capacity), yet it delivered 172% more IOPS at roughly half the latency. The system also had 488% of the exported capacity, at a fraction of the cost of the all-SSD system...
As opposed to the 'large sequential' workload, where predictive cache algorithms can really shine, I see only one use case for SSDs:
Unpredictably random read workloads where the whole dataset fits into the SSD-provided space. I know of one such application; NetApp systems with SSD shelves were deployed there after extensive testing of multiple vendors' solutions.
"that their systems are different in kind from mainstream clustered filers such as NetApp's FAS series. NetApp would disagree of course."
I guess NetApp would agree, actually. And sell you an ONTAP8 Cluster-Mode System. 24 nodes, 28PB capacity. If you need more, there's always NetApp's StorageGRID...
Automatic sub-volume-level movement??? Nonsensical on NetApp!
"The hints are strong that Data Motion is going to get automated and, hopefully, there will be sub-volume level movement to provide more efficient control of hot data placement, as NetApp's competitors do."
Now that makes no sense whatsoever on a NetApp. If only part of a volume is 'hot', the second read access to it would already be served from (Flash) Cache. Writes are always acknowledged to clients as soon as they're safe in NVMEM, so no speed-up is needed there; they also land in cache automatically, so the next read would already find them there.
98% of use cases are transparently accelerated by Flash-Cache. You can put up to 8TB in a NetApp HA pair...
The other 2% (I know one case) can actually benefit from SSDs. But then it's a *whole* dataset that needs to be fast *all* the time, at the *first* access (because there will probably be no second any time soon) and you wouldn't want the risk of having to warm up the cache after a controller failover. That dataset would go to SSD as a whole, not 'sub-volume'...
I can see lots of use cases for automatic DataMotion, but (sub-volume) 'FAST' isn't one of them.
Disclosure: I'm a NCI, a technical Instructor, teaching people (among other things) how to use NetApp controllers. I like their technology, but I'm not an employee...
Deduplicate Zeros & Most Mature ?
I'd agree with 'most mature *hardware-based* thin provisioning'.
But if you compare the features described in the article with what NetApp offers in their boxes, it's only a fraction. For example:
- specific space guarantees for volumes, LUNs, files
- volume autogrow
- snapshot (operational backup) autodelete
- deduplicating ANY 4K blocks of data (not just zeroes)
With all these safety measures built in, it's perfectly safe to use thin provisioning, provided you do a little monitoring too.
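That "little monitoring" boils down to watching two numbers: how overcommitted you are, and how full the physical storage actually is. A minimal sketch, with entirely hypothetical names and numbers (this is not a NetApp API):

```python
# Minimal sketch of thin-provisioning monitoring: track overcommitment
# and physical usage, and raise a flag before the aggregate fills up.
# All function names, thresholds and figures here are HYPOTHETICAL.

def thin_prov_status(physical_tb, used_tb, provisioned_tb, alert_pct=80):
    """Return overcommit ratio, physical usage, and an alert flag."""
    used_pct = used_tb / physical_tb * 100
    return {
        "overcommit_ratio": round(provisioned_tb / physical_tb, 2),
        "used_pct": round(used_pct, 1),
        "alert": used_pct >= alert_pct,
    }

# 100 TB physical, 62 TB actually written, 250 TB promised to clients:
print(thin_prov_status(100, 62, 250))
# {'overcommit_ratio': 2.5, 'used_pct': 62.0, 'alert': False}
```

Combine a check like this with volume autogrow and snapshot autodelete, and running overcommitted stops being scary.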
I agree with Barry, Software is a lot more flexible, and these days pretty fast...
FCoE is an approved standard since June 3, 2009
You said: There are standardisation efforts with FCoE and data centre Ethernet afoot.
'On Wednesday June 3, 2009, the FC-BB-5 working group of T11 completed the development of the draft standard and unanimously approved it as the final standard. '
See also: http://www.fibrechannel.org/component/content/article/3-news/158-fcoe-standard-alert-fcoe-as-a-standard-is-official-as-stated-by-the-t11-technical-committee