SPEC SFS 2014 benchmark smashed by storage newbie

NVMe-over-Fabrics fanboy startup E8 has whupped other suppliers' behinds with a SPEC SFS2014 filer benchmark. The result is in the software build section of the benchmark, which measures the number of software builds completed and the overall response time (ORT). The previous record holder was NetApp, with 520 builds and a 1.04-second overall response …

  1. jonathan keith

    Eh?

    I wish I knew what any of that meant.

    1. Julie Herd

      Re: Eh?

      Disclosure: I'm the Director of Technical Marketing at E8 Storage

      The short answer: We enable IBM Spectrum Scale clusters to do more, faster, with far less hardware $$.

      Which means that we can accelerate file-based workloads like real-time 4K video editing, genomics, etc., in addition to the real-time data analytics and low-latency transactions we've been tackling since we launched our products in 2016.

    2. FuzzyWuzzys
      Happy

      Re: Eh?

      Short version....

      "We took a shitload of superfast NVMe solid state drives, racked 'em up onto a super fast backplane and as expected they run like shit of the proverbial ( and no doubt cost a king's ransom too! ). If other vendors think their old hat idea of hanging SSDs off traditional slow SATA interfaces is still worth it, they're in for a surprise."

      1. Roo
        Windows

        Re: Eh?

        Looks handy to me.

        I dreamt of this kind of throughput when waiting for a compile to complete off a Fujitsu Eagle back in the day (shared with 30 other people). Kinda fun to see it happen even if it's not quite the way I predicted... The TaihuLight boxes are hooked up with PCI-Express 3.0, so presumably they have a way to integrate NVMe drives directly into their fabric. Could be a fun OCCAM platform. :)

        The PCI-Express 3.0 fabrics remind me of some of the ideas floated for IEEE1355 back in the day, but much quicker and more ubiquitous. It's fun to see (some) things get a lot better despite everything else falling apart. :)

      2. CheesyTheClown

        Re: Eh?

        I actually switched recently to USB2.0 from NVMe and FC. It turns out NVMe was horrifyingly slow, with a maximum throughput of 1.98GB/sec on a PCIe x2 interconnect. Also, dedup became a massive bottleneck as the number of devices increased.
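        That 1.98GB/sec ceiling is, not coincidentally, about all a two-lane link can carry. A quick back-of-the-envelope check (a sketch only; it assumes a PCIe 3.0 link, since "PCIe x2" above doesn't say which generation):

          # PCIe 3.0 x2 usable bandwidth, counting only line-coding overhead
          gt_per_lane = 8e9          # 8 GT/s per PCIe 3.0 lane
          lanes = 2
          encoding = 128 / 130       # Gen3 128b/130b line coding
          bytes_per_sec = gt_per_lane * lanes * encoding / 8
          print(f"{bytes_per_sec / 1e9:.2f} GB/s")   # -> 1.97 GB/s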

        A box like this is ok for things like 8k video editing and it's lovely for low latency. I would highly recommend a system like this for movie production when people are working in 8k 4:4:4 uncompressed. But no one would ever be stupid enough to edit in 8K uncompressed. You'd want to use a lossless codec with scalable and possibly temporal compression, such as H.265 or J2K encapsulated in MXF. This means that the machine you're editing on can make UUID requests for frames at a given resolution and color depth that can actually be edited in real time without hardware DSPs. Then the project can be sent to render, and real time no longer matters.

        You would never ever ever want to use a file server as a frame server as it doesn’t understand video. You want a frame server that actually speaks video. It saves MILLIONS!!!
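        To make the frame-server idea concrete, here's a minimal sketch of what a per-frame request might look like. The endpoint, parameters and response shape are all hypothetical, not any real product's API:

          # Hypothetical frame-server client: ask for one decoded frame, by ID,
          # at an edit-friendly resolution and bit depth, instead of pulling
          # raw files over a generic file protocol.
          import urllib.request

          def fetch_proxy_frame(base_url, frame_uuid, width, height, bit_depth=10):
              url = (f"{base_url}/frames/{frame_uuid}"
                     f"?w={width}&h={height}&depth={bit_depth}")
              with urllib.request.urlopen(url) as resp:
                  return resp.read()   # server-side scaled frame bytes

          # e.g. fetch_proxy_frame("http://frameserver.local", some_uuid, 1920, 1080)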

        In any case, when managing business data, images, etc., using map/reduce with Banana Pi nodes and 256GB commodity SSD drives, I have cut my transaction processing times by a lot compared with massive storage systems like this. The reason is that I now have massively scalable storage. All my requests are intelligently routed, and hot data is processed from memory. With 2GB per node and 128 nodes, that means about 96GB of hot data. The hottest data is replicated across all nodes and all locations. For all other data, there's a minimum of 3 copies maintained at all times on at least two sites. We can also process stripes for large blobs, and we have several 12TB spinning-disk nodes per site.
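        For illustration, the placement rule just described (three copies, at least two sites, hash-routed) can be sketched in a few lines. The node names, counts and hashing scheme here are illustrative assumptions, not the actual setup:

          # Sketch: rendezvous-hash each key across two sites' nodes, take the
          # top three, and force at least two sites to hold a copy.
          import hashlib

          SITES = {"site-a": [f"a{i:02d}" for i in range(64)],
                   "site-b": [f"b{i:02d}" for i in range(64)]}

          def _score(key, site, node):
              return int(hashlib.sha256(f"{key}|{site}|{node}".encode()).hexdigest(), 16)

          def replicas(key, copies=3):
              ranked = sorted(((s, n) for s, nodes in SITES.items() for n in nodes),
                              key=lambda sn: _score(key, *sn), reverse=True)
              chosen = ranked[:copies]
              if len({s for s, _ in chosen}) < 2:      # all picks on one site?
                  chosen[-1] = next(sn for sn in ranked[copies:]
                                    if sn[0] != chosen[0][0])
              return [f"{s}/{n}" for s, n in chosen]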

        So, we have performance that would make this system weep at how slow it is. But if you need high-performance, unstructured, random-access data, this system is almost certainly king. It's great for legacy technologies like virtual machines and raw video editing. It can also be useful for storage of scientific data sets for supercomputers; a great example would be storing data sets from the LHC.

        If you're a business running anything other than high-frequency trading systems and a storage system like this sounds attractive, you're not managing your data; you're just over-provisioning in the hope that if you spend enough millions, you can avoid actually managing your data.

      3. StargateSg7

        Re: Eh?

        We just go to Samsung and buy the NAND chips ourselves and stuff them onto custom motherboards we print ourselves, which have BGA (Ball Grid Array) chip inserts built in, kinda like the individual plug-in RAM sockets machines had in the old days (1980s). If a NAND chip goes bad we can EASILY replace it just by plugging in a new one!

        Using the cheaper 64 gigabyte chips, we put 1024 chips per board and a custom fibre-optic dense wavelength-division multiplexing communications chip on it, for 64+ Terabytes of high-speed drive space for our 4k/8k/16k-resolution video-oriented GPU-based supercomputing applications. We dip 16 such boards in a glass-lined rackbox filled with dielectric circulating cooling fluid for the ultimate in NAND chip service-life-extending protection! At 16 boards of 64 terabytes each, that's over a petabyte per rack (and we have 16 of them!) with SUPERFAST aggregated speed of over a Terabyte Per Second per board, or 16 Terabytes per second per rack, AND a parallel group throughput of 256 Terabytes per second when the 16 racks are RAIDed together.
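        Those headline numbers do at least multiply out, as a quick sanity check shows (claimed figures only, taken from the paragraph above):

          # Sanity-check the claimed capacity/throughput arithmetic
          chip_gb, chips_per_board = 64, 1024
          boards_per_rack, racks = 16, 16

          board_tb = chip_gb * chips_per_board / 1024   # 64 TB per board
          rack_pb  = board_tb * boards_per_rack / 1024  # 1 PB per rack
          rack_tbps  = 1 * boards_per_rack              # 16 TB/s (1 TB/s per board)
          total_tbps = rack_tbps * racks                # 256 TB/s across 16 racks
          print(board_tb, rack_pb, rack_tbps, total_tbps)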

        Of course it gets kinda STUPID EXPENSIVE ....BUT.... the performance is definitely there!

        and by using the cooling fluid we have VASTLY extended the service life of the NAND chip writes and reads before the cells begin deteriorating, which means we can use the cheaper NAND chips, because it SEEMS the heat of constant reads/writes DOES in fact impact overall SSD FLASH drive memory cell wear, and by COOLING the chips you nearly double or triple the wear life of the cheaper chips into enterprise-class SSD drive territory!

        That saves us a BIG BUNDLE of money by being able to use the cheaper chips!

  2. Anonymous Coward
    Anonymous Coward

    "If we chart the results ordered by ORT then we get this:"

    Looks to me like it is ordered by builds?
