Enterprise storage sitrep: The external array party is over

Despite the storage market growing 13.7 per cent annually in the fourth quarter of 2017, the external array segment grew less than 2 per cent and is forecast to decline, according to IDC's worldwide enterprise storage systems tracker. NetApp's position is also weakening as HPE overtakes it in external storage revenues and IBM …

  1. CheesyTheClown

    Centralized all flash is just an impressively bad idea.

    Ok... so, you have SSD media capable of 2GB/sec read and write. That 2GB/sec is 16Gb/sec raw, and once line coding and clock recovery take their share you need roughly 20Gb/sec on the wire per device. This means that over a single 100Gb/sec network connection, you can achieve maximum performance from only 5 storage devices.

    Ok, so you're using NVMe over a PCIe switched fabric. Even then, you're probably still maxing out at 20 storage devices - and we're not even considering processing overhead. So, let's assume you can read and write an aggregate of 40GB per second across your array. You would need a storage controller able to handle compression, deduplication, error correction, possibly erasure coding etc... at 40GB per second. Can it be done? In ASICs, sure!!! No worries! Of course ASICs can't be upgraded, so unless you're 10000% sure that the array manufacturer employs nothing but absolutely perfect engineers who never make mistakes, you'll need something software based.
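    The back-of-the-envelope arithmetic above can be sketched in a few lines. All figures (2GB/sec per SSD, ~20Gb/sec on the wire after encoding overhead, a 100Gb/sec link, ~20 devices on a PCIe fabric) are the comment's own assumptions, not measurements:

    ```python
    # Sketch of the bandwidth arithmetic in the comment above.
    # All figures are the commenter's assumptions, not measurements.

    def devices_per_link(link_gbits: float, per_device_gbits: float) -> int:
        """How many SSDs can run flat out over one network link."""
        return int(link_gbits // per_device_gbits)

    SSD_GBYTES_PER_SEC = 2     # per-device sequential read/write (GB/s)
    WIRE_GBITS_PER_SEC = 20    # 16Gb/s raw plus line-coding/clock-recovery overhead
    LINK_GBITS_PER_SEC = 100   # one 100Gb/s network connection

    print(devices_per_link(LINK_GBITS_PER_SEC, WIRE_GBITS_PER_SEC))  # 5 devices

    # Aggregate the controller must compress/dedupe if ~20 devices sit
    # behind a PCIe switched fabric, as the comment assumes:
    print(20 * SSD_GBYTES_PER_SEC)  # 40 (GB/s)
    ```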

    No... Cisco, HP, Huawei, EMC, NetApp... none of these guys can deliver anything that makes good use of all-flash anything. Not only that, but even with awesome tech like XPoint, the latency added by processing centralized storage makes the whole exercise a waste of time.

    Buy local consumer-grade M.2 storage (or XPoint if you're truly wasteful) and put four of them in each machine. Then add some spinning disk to each server for capacity. Then run a proper share-nothing storage system. If you're really bad at your job, you can use systems that manage virtual disks and block-based storage. This is how "Storage Experts" do it.

    Or if you're really good at your job, build your storage infrastructure around your systems and you can actually make do with far less overhead.

    Quit throwing away all your money on things like storage arrays and start designing your storage properly. This means databases, object storage, etc... it's 2018. Any project you start now that you put even the slightest effort into will run until at least 2020. Might as well do it right... or you can throw good money after bad and be "a storage and VM expert"

    1. MonkeysOnTheCar

      Re: Centralized all flash is just an impressively bad idea.

      Wow - I bet you get invited to parties.

      1. ManMountain1

        Re: Centralized all flash is just an impressively bad idea.

        I was literally about to say the same thing!!

    2. Nate Amsden

      Re: Centralized all flash is just an impressively bad idea.

      That's making a huge assumption - that many apps really need such high levels of throughput. SSDs have historically been about IOPS, not data transfer rate. For sequential workloads, even 7200 RPM disks have been fine for quite some time.

      And even though SSDs are about IOPS, put a dozen or more in a chassis and you have a ton of IOPS capacity.
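      A sketch of that aggregation point - note the per-device figure here (~100,000 random IOPS for a single SATA/SAS SSD) is an illustrative assumption, not a number from the comment:

      ```python
      # Aggregate IOPS from a chassis full of SSDs. The per-device figure
      # is an assumed, illustrative value, not taken from the comment.

      PER_SSD_IOPS = 100_000   # assumed 4K random read IOPS per device
      DRIVES = 12              # "a dozen or more in a chassis"

      print(f"{PER_SSD_IOPS * DRIVES:,} IOPS")  # 1,200,000 IOPS before controller overhead
      ```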

      NVMe is a good example here - for many workloads it simply won't have a noticeable impact on performance (over SSDs).

      You say that is how storage experts do it; most everyone else has to buy that expertise in the form of pre-built solutions, whether traditional enterprise storage and/or storage from a service provider.

      It's very dangerous for someone to go out and try to design their own storage system - storage is very complex, both hardware and software. Worse, it holds state: data problems can be automatically replicated to other systems before anyone has time to react. And applications and operating systems are generally very unforgiving of variable storage availability and latency.

    3. Anonymous Coward
      Anonymous Coward

      Re: Centralized all flash is just an impressively bad idea.

      My 300 VMs all chug along at about 20,000 IOPS, 700MB/s - they require general purpose storage. All that waffle you just spouted means nothing to me.
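      For what it's worth, the figures quoted average out to very modest per-VM demands - this sketch uses only the numbers in the comment (300 VMs, ~20,000 IOPS, ~700MB/s aggregate):

      ```python
      # Per-VM averages implied by the quoted aggregate figures.

      VMS = 300
      TOTAL_IOPS = 20_000
      TOTAL_MB_PER_SEC = 700

      print(round(TOTAL_IOPS / VMS))           # ~67 IOPS per VM
      print(round(TOTAL_MB_PER_SEC / VMS, 1))  # ~2.3 MB/s per VM
      ```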

      1. dikrek

        Re: Centralized all flash is just an impressively bad idea.

        Hi all, Dimitris from HPE Nimble (http://recoverymonkey.org).

        Cheesy often responds like that in multiple posts; it's interesting to see his past posting history.

        Based on the commentary I suspect he either works for or resells Microsoft. Which is perfectly fine, just be up front with it.

        Anyway, is it possible to make grid-based storage go fast? Sure. Fast servers, fast drives in those servers, huge RDMA links between those servers, and it will be fast. Maybe not space-efficient, but definitely fast.

        Speed isn't difficult to achieve.

        Reliable storage, with useful enterprise features plus speed, now that's the hard part.

        Then there's troubleshooting. How easy is it to troubleshoot a cobbled-together (yet fast) grid system?

        Especially if something really weird is going on.

        How easy is it to get automated firmware updates for all the components in the entire grid solution?

        Do those updates take into account the rest of the environment the system is connected to, and how it is configured and used?

        I could go on.

        Doing storage right is very hard.

        I've seen weird (and dangerous) bugs in drive firmware that, unless one is deeply immersed in storage and has all the right tooling and automation, would be next to impossible for someone selling SDS to figure out.

        D out.

        1. Anonymous Coward
          Anonymous Coward

          Re: Centralized all flash is just an impressively bad idea.

          "I could go on"

          Please don't.

  2. PaulHavs

    AFA just External - bit modernised

    Paul Havs from HPE here....

    Ummmm, I fail to see why the industry, including El Reg, doesn't just recognise that the AFA industry is simply an evolution of the (yes, traditional) external array industry.

    For sure, the external array industry is seeing flat to small growth; however, virtually ALL of that fleet will be replaced with AFA over the next 5 years - that's a fantastic opportunity for all, customers and vendors alike.

    The reason is that structured data growth is minimal while unstructured growth is massive. External shared array storage is best suited to structured data - that's what it is designed for.

    1. ManMountain1

      Re: AFA just External - bit modernised

      Agreed. External storage is external storage - as a concept. Who cares if it's SATA, SAS, SSD, NVMe, whatever. Diesel, petrol or electric engine, a car is still a car.

  3. SeymourHolz

    "Surely this is the beginning of the end for NetApp..."

    You've been pitching that line for at least ten years now, Reg...
