<quote from expert>
...with more modern architectures that often target NetApp deployments (ONTAP is more than 15 years old).
Oh dear, so it is. Almost as old as Linux, which appeared in 1991. All those "modern architecture" systems based on a 1991 Linux. Or BSD based systems; BSD is (gasp!) even older!
Yes, I understand that analysts have to say something, especially when they're not sure about the business or the technology or the economic outlook. But it would really help if their opinion was based on a more meaningful analysis. ONTAP 8 is a modern clustered scale-out system quite unlike anything else on the market. Mr Blair, please try again.
(Standard NetApp employee disclosure)
Re: Benchmark shpenshmark
(It's "cited", not "sited"; using the wrong word doesn't help your argument.)
We do have a flash strategy and a lot more varieties of flash (FlashCache -- big caches made of flash, FlashPools -- mixed pools of SSD and disk, EF540 -- all flash array) than Hitachi has; you just haven't been paying attention.
What we don't have is a recent benchmark with flash SSD, which is a good point.
Wait a doggone minute there...
One correction: NetApp hasn't "given up" on tuning with flash. NetApp was one of the first to employ flash-based caches in 2009 with FlashCache; that's 4 years ago. We still ship and use both caches and SSDs for performance in the latest ONTAP 8, along with the all-flash EF540 and the soon-to-be-delivered FlashRay.
It's interesting that you think that the startups you mention all want to be like NetApp when they grow up -- or perhaps that's what they told you? -- for it's certainly what they want you to believe.
The truth is they don't want to be like NetApp. No way. Not in a month of Sundays. That would mean putting in some hard work for the long term, and none of them are in it for the long term.
NetApp has a whole body of experience and innovation in this space, built over decades of investment and learning, that these new guys on the block just don't have. And won't have, and don't want either, since their business model (as you note) is to get bought and make the principals some money.
Of course the storage industry is looking at this. Chris, I told you so at SNW in October last year, but you wanted to grill me about the "bump in the wire" cache vendors... Ah well.
For posters here, don't get carried away by memory storage schemes, thinking that it's a solved problem, or even an easy problem to solve. It's not.
For anyone that's interested, here's the background and what's being done; http://snia.org/forums/sssi/nvmp, and in particular, this presentation; https://intel.activeevents.com/sf13/connect/fileDownload/session/461EB56CC073EA43BDFCEC22AE2D3C88/SF13_CLDS009_100.pdf
Next year's tech? Probably not. But it will come; byte-addressed, persistent and cheap memory is just too attractive given what we have now. More information can be had by contacting the NVM group at SNIA.
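To make "byte-addressed" concrete, here's a rough Python sketch of the programming model the NVM work points towards: map a persistent region into the address space, update it with ordinary loads and stores, and flush what you've touched. The path and size are made up for illustration; today you'd typically be mapping a file on a DAX-capable filesystem rather than real persistent memory.

import mmap, os, struct

PMEM_PATH = "/mnt/pmem/counter"   # hypothetical persistent region exposed as a file
SIZE = 4096

fd = os.open(PMEM_PATH, os.O_RDWR | os.O_CREAT, 0o600)
os.ftruncate(fd, SIZE)
pm = mmap.mmap(fd, SIZE)          # byte-addressable: no read()/write() calls needed

# Load-modify-store an 8-byte counter directly in the mapped region.
count, = struct.unpack_from("<Q", pm, 0)
struct.pack_into("<Q", pm, 0, count + 1)

pm.flush(0, SIZE)                 # push the update towards the persistence domain
pm.close()
os.close(fd)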
Re: NetApp is 24 NODES vs HDS 4 NODES
I'm not surprised that the HDS box did as well as it did, given that both NetApp benchmarks were submitted in September 2011 (2 years ago) and didn't have the benefit of SSDs. The 6240 is no longer sold.
Re: AAAARGGHHH! They're NOT IOPS!
What do you mean by "file I/Os per second, instead of disk block"?
NFS operations are not "file I/Os" since a large chunk of them -- 72% as I pointed out earlier -- are not operations on the file at all. This sort of sloppy thinking leads to no more than the death of another kitten.
PS; I'm sure all the marketing suits at HDS are really delighted that the rebranding from BlueArc to HUS that they slaved over all these years ago has completely escaped you. Mind you, that's excusable. No kittens died for that mistake.
AAAARGGHHH! They're NOT IOPS!
A little furry kitten dies in the benchmark labs every time someone quotes IOPS on a SPEC SFS.
SPEC SFS does not measure IOPS. SANs do IOPS, and the benchmark for them is SPC-1. NFS systems do NFS operations, and SPEC SFS measures them.
Only 28% of the operations in SPEC SFS are a READ or a WRITE operation (that is, accessing the data). The other 72% of the operations are on metadata (mainly directory information).
There's a big difference. Please, think of the little kitties.
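For a back-of-envelope feel for the distinction, here's a small Python sketch. The 28%/72% split is the published SPEC SFS op mix quoted above; the reported figure below is a made-up example, not any vendor's result.

# SPEC SFS reports NFS operations/sec, not disk or data IOPS.
DATA_FRACTION = 0.28      # READ + WRITE share of the SPEC SFS op mix
META_FRACTION = 0.72      # GETATTR, LOOKUP, ACCESS, directory ops and so on

reported_ops_per_sec = 1_000_000          # illustrative SPEC SFS result

print(f"{reported_ops_per_sec * DATA_FRACTION:,.0f} ops/s actually touch file data")
print(f"{reported_ops_per_sec * META_FRACTION:,.0f} ops/s are metadata operations")
# Neither number is an IOPS figure in the SAN/SPC-1 sense: one NFS READ may
# become zero disk I/Os (cache hit) or several (read plus readahead).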
Can't make it, which is a shame. Knieriemen owes me at least one drink, I'm sure.
I liked the capacity gag above. How about
"I only came here to get HAMRed"
"There's a flasher or two in here tonight"
"No thanks, I'm the nominated drive."
"Run boys, it's a raid..."
<quote>If you just want to dump some data somewhere though, any old cheap storage will do.</quote>
Uh, no. Reality sucks, Mr pPPPP. But if you just want to lose your data, any old cheap storage will do.
Correction; Linux O_DIRECT is not Windows Direct I/O
<quote>and the Parallel NFS (pNFS) client is still in tech preview; the latter includes support for Microsoft's Direct I/O, which allows for data to be read from disk to application buffers without stopping at file buffers</quote>
I've spoken to the Linux developer of the NFS client that's part of the standard distributions, including RHEL6. The support is for Linux O_DIRECT in the pNFS code line, and has nothing to do with Microsoft's Direct I/O. That is a specific Windows OS feature for Windows device drivers.
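For the avoidance of doubt, Linux O_DIRECT is just an open(2) flag that bypasses the page cache; there's nothing Windows about it. A minimal Python sketch (the path is invented, and O_DIRECT needs block-aligned buffers, hence the anonymous mmap):

import os, mmap

BLOCK = 4096                                   # O_DIRECT wants block-aligned, block-sized buffers
fd = os.open("/mnt/nfs/bigfile.dat", os.O_RDONLY | os.O_DIRECT)
try:
    buf = mmap.mmap(-1, BLOCK)                 # anonymous mmap gives a page-aligned buffer
    nread = os.readv(fd, [buf])                # data lands in our buffer, bypassing the page cache
    print(f"read {nread} bytes directly into an application buffer")
finally:
    os.close(fd)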
Alex McDonald, CTO Office, NetApp.
Re: Re: As author of this article...
Buy a decent NFS box...
As author of this article...
There's a shedload of work taking place in Windows, and there's a fully fledged open source client available right now; download it from here http://www.citi.umich.edu/projects/nfsv4/windows/readme.html
Re: Server-Side Copy
Unlike FXP it's secure. The client has to present its security credentials to both the source and the target, and the source has to be told that it will be contacted by that specific target.
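From the client side it's the same application call either way; on Linux, copy_file_range() can be offloaded to the server when the mount supports it (NFSv4.2), otherwise the data falls back to a round trip through the client. A rough Python 3.8+ sketch with purely illustrative paths:

import os

SRC = "/mnt/nfs/source.iso"      # hypothetical files on an NFSv4.2 mount
DST = "/mnt/nfs/copy.iso"

src = os.open(SRC, os.O_RDONLY)
dst = os.open(DST, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
try:
    remaining = os.fstat(src).st_size
    while remaining:
        # The kernel may offload this range to the NFS server, so the bytes
        # need never travel through the client at all.
        copied = os.copy_file_range(src, dst, remaining)
        if copied == 0:
            break
        remaining -= copied
finally:
    os.close(src)
    os.close(dst)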
What's tiering got to do with it?
Terje Mathisen (a well known programming optimization guru) once said: "All programming is an exercise in caching."
The same applies here. All data management is an exercise in caching.
This isn't about tiering; it's about caching. Unless, of course, you've got a product that does only tiering, in which case it makes a lot of sense to confuse the two.
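If the difference isn't obvious: a cache keeps a copy of hot data close by while the authoritative copy stays where it is; tiering migrates the only copy. A toy Python sketch of the caching half, nothing to do with any particular product:

from collections import OrderedDict

class ReadCache:
    """Tiny LRU read cache; the backing store always keeps the real data."""
    def __init__(self, backing_store, capacity=1024):
        self.backing = backing_store          # e.g. dict of block number -> bytes
        self.capacity = capacity
        self.lru = OrderedDict()

    def read(self, block):
        if block in self.lru:                 # hit: served from the fast copy
            self.lru.move_to_end(block)
            return self.lru[block]
        data = self.backing[block]            # miss: go to the slow tier
        self.lru[block] = data                # keep a *copy*; the original stays put
        if len(self.lru) > self.capacity:
            self.lru.popitem(last=False)      # eviction costs nothing but the copy
        return data

# Tiering, by contrast, would be roughly: fast[block] = slow.pop(block) --
# the only copy moves, and has to be managed (and moved back) carefully.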
To clarify my comment;
<quote>He pointed out that SolidFire deduplication is global, working across all volumes, whereas "NetApp ASIS only dedupes on a per-volume basis in an array and not across volumes in an array." Alex McDonald from NetApp's office of the CTO confirmed this but said NetApp can have many, many LUNS in a volume.</quote>
Since you can have lots of things (LUNs or filesystems) inside a NetApp volume, and a volume is a virtual construct that can be very large indeed -- several tens of TB -- global dedupe really doesn't buy you much saved space if you're already deduping across that much data.
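A toy illustration of why dedupe scope matters less as the container grows (block size, hashing and the volume contents are all made up; this isn't how ASIS or SolidFire actually implement it):

import hashlib

def dedupe(blocks, seen=None):
    """Store only blocks whose fingerprint hasn't been seen within this scope."""
    seen = {} if seen is None else seen       # the scope of 'seen' is the dedupe domain
    stored = []
    for block in blocks:
        fp = hashlib.sha256(block).hexdigest()
        if fp not in seen:
            seen[fp] = True
            stored.append(block)
    return stored, seen

volume_a_blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096]   # toy 4KB blocks
volume_b_blocks = [b"B" * 4096, b"C" * 4096]

# Per-volume dedupe: one fingerprint table per volume.
a_unique, _ = dedupe(volume_a_blocks)
b_unique, _ = dedupe(volume_b_blocks)

# "Global" dedupe: one fingerprint table shared across everything.
shared = {}
dedupe(volume_a_blocks, shared)
dedupe(volume_b_blocks, shared)
# The bigger a single volume already is, the less extra the shared table finds.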
Good on SolidFire though to recognize that we're the storage system of choice. ;-)
Alex McDonald of NetApp here.
Chris, my apologies; I promised you some reasoned arguments and background information as to why EMC/Isilon appear to be misunderstanding the specSFS benchmarks. Since you've published, I'm replying here.
Twomey of EMC makes one valid point; "Scale-out means different things to Isilon, [IBM] SONAS, [HP] IBRIX and NetApp." But this isn't about definitions or about what we each mean by scale-out or scale-up or scale-anything; it's about scale -- full stop -- and a benchmark which is tightly defined (and where we spanked EMC). The rest of his arguments are, as usual, diversionary nonsense. What's eating Twomey is the fact that NetApp's submission was smaller, cheaper and faster.
But I am surprised at Peglar, the Americas CTO (Chief Technology Officer) of Isilon, because he betrays a serious misunderstanding of the benchmark; I'd have expected him to be better informed. Here's what he should know.
The specSFS benchmark creates 120MB of dataset for every requested NFS operation per second. You can't control how much space the benchmark is going to use -- in fact, the usual complaint is how big the SFS dataset size is. We (NetApp) chose a volume size of 12TB for each volume, giving 288TB. The main number to look at for the benchmark is the fileset size created, which was 176,176GB (176TB) for the 24 node test. We could have created much bigger volumes and could have exported the capacity of the entire system at 777TB. That would have made no difference to the results, since the fileset size created would *still* have been 176TB.
Isilon exported all the usable capacity: 864TB. The benchmark dataset size for them was 128,889GB (129TB).
So, on inspection, it took Isilon 3,360 10K rpm disk drives (plus 42TB of flash SSDs) to service 129TB of data. NetApp took 1,728 15k rpm disk drives (plus 12TB of flash cache) to service 176TB of data.
Now who's short stroking?
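To put numbers on that, a quick back-of-envelope in Python. The 120MB-per-requested-op/s figure is the benchmark's own scaling rule, the 1.5M op/s load is purely illustrative, and the drive and fileset figures are the ones quoted above.

MB_PER_REQUESTED_OP = 120    # SPEC SFS creates ~120MB of fileset per requested NFS op/s

def fileset_tb(requested_ops_per_sec):
    # Note that exported capacity appears nowhere in this formula.
    return requested_ops_per_sec * MB_PER_REQUESTED_OP / (1024 * 1024)

print(f"1,500,000 requested op/s -> ~{fileset_tb(1_500_000):.0f}TB of fileset")  # same answer whether 288TB or 777TB is exported

# Drives used per TB of fileset actually serviced:
print(f"Isilon: {3360 / 129:.1f} drives per TB")   # ~26 drives/TB (10K rpm, plus 42TB of SSD)
print(f"NetApp: {1728 / 176:.1f} drives per TB")   # ~10 drives/TB (15K rpm, plus 12TB of Flash Cache)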
There are two uninformed arguments we hear about benchmarks all the time, and I thought Peglar would have understood them and why they aren't relevant.
Argument 1: If one doesn't touch every byte of the exported capacity then the system is being gamed, so as to short stroke the disks and gain an unfair advantage.
Response 1: There will never be any real world workload that touches *every single byte* of all available capacity. That is not the way systems are, or ever will be, used. Benchmarks model a realistic workload and measure systems under that load, not bizarre edge cases.
Argument 2: Creating LUNs that are smaller than the maximum capacity amounts to short stroking and an unfair advantage.
Response 2: Modern filesystems no longer couple the data layout to the exported capacity. Thus, there is no performance advantage related to LUN size or to the exported capacity. As long as the same amount of data is accessed across systems, the performance comparison is valid; or, as in the NetApp submission, where a *lot* more data is being accessed, the benchmark demonstrates it's a much better performer. If you are seeing a difference in performance that is coupled to exported capacity, you might want to consider a NetApp system, which doesn't have such an antiquated data layout mechanism.
Summary: The total exported capacity is the combined capacity of the volumes created. It does not have any bearing on the performance obtained.
The argument Peglar makes would seem to indicate that Isilon may have one of those old, steam-driven data layouts. But, of course, an Isilon system doesn't, so why he's making the points he does is beyond me. There are only a couple of reasons that EMC/Isilon could present an invalid premise for an argument; (1) they don't understand the subject material, and lack experience in debating these issues, or (2) they fully understand the subject material and believe that the person they are trying to convince does not.
I'll let you guess as to which I think is the case.
I'm assuming heavy irony here...
...since you managed to slip a "sh1t" into something the children might read. Either that, or you're an inconsistent daft laddie.
HP can't fix this; they sealed their fate years ago...
when they forgot to make any continuing investments in storage in the mid 2000s.
Then Dell-the-server-company bought EqualLogic, and HP-the-server-company woke up with a start & bought LeftHand (now the P4000), lurched sideways and bought Ibrix (X9000), and thought that they'd fixed the storage problem that was the EVA.
Wrong choice. The P4000 is small beer, a sort of one-shot technology that was OK when they bought it, but it's not going anywhere, doesn't scale well and isn't that popular as it's iSCSI only (the EVA is FC). The X9000 NAS boxen were never going to set the world alight; Ibrix and their dismal sales record proved that already. That's what you get for buying companies on the cheap.
So HP dig deep into their trouser pockets and buy 3Par, and discover the fit isn't that splendid; the EVA's natural market was at the channel low-end where bundles of storage, servers and networking make sense. 3Par doesn't fit in that sales model; it's a different beast than the EVA.
Data growth and the management of the increasing amounts people want to store have illuminated the EVA in the worst possible way; it sucks as a platform for tomorrow, and it's barely adequate for today. 3Par might be the route forward for today, but it's asking a lot of EVA customers to do a rip-and-replace to make the switch, especially as they've already invested and need a return on that money. (Interestingly, 3Par is equally crippled for the long term; no dedupe, no compression, no unified storage, and it's unlikely to get these essentials any time soon. Thin provisioning is the big claim to fame, along with performance in a straight line. That's it.)
No HP storage virtualisation appliance to ease that pain either, since the SVSP has been unceremoniously dumped.
Rock and hard place. Channel partners must be wondering what HP will do next, since they're facing some seriously stiff competition, particularly from NetApp, with whom they're equal second according to IDC.
Disclaimer; NetApp employee, and delighted to be so.
As did I...
because the EMC spin on this one is a big issue. Everyone makes mistakes, but does everyone try and rewrite history while they're at it?