Supermicro crams 36 Samsung 'ruler' SSDs into dense superserver

Supermicro has a supernaturally dense thin server with up to half a petabyte of flash using unannounced Samsung SSDs. The SSG-1029P-NMR36L is your average unremarkable 1U rackmount job in the company's SuperServer range, except that it has up to 36 front-load, hot-swap, NVMe Samsung PM983 SSDs. It is a dual-socket server with …

  1. Frederic Bloggs
    Coat

    Breaking News

    Production units may run ~30% slower than designed....

    1. Korev Silver badge
      Coat

      Re: Breaking News

      I expect the customers will go into meltdown....

    2. Ian Michael Gumby Silver badge
      Thumb Up

      @Freddy bogs Re: Breaking News

      Considering that you don't know which 2 CPUs you will put in this unit, you can't say that.

      They have 'reasonably' priced 18-core chips, each running 36 threads, for a total of 72 threads to work with.

      Even if you lose 30% performance in the CPU... you don't necessarily lose 30% of your performance overall. It depends on what you're doing.

      The interesting thing ... what's the cost? Sure the drives aren't on the market yet, but you build out a rack of these guys... (That would be 38-40 depending on heat/cooling/power and ToR switch capacity... )

      Now you've got a lot of fast storage. Of course in real life, you'd mix and match these across a row of racks and fill out the rest with other gear ... Very nice...

  2. Anonymous Coward
    Anonymous Coward

    Use Case?

    Why would I need half a petabyte of SSD storage dedicated to a single server?

    1. Korev Silver badge
      Boffin

      Re: Use Case?

      A server for holding metadata for a storage system like Lustre (in reality you’d use more than one for redundancy).

      1. Bronek Kozicki Silver badge

        Re: Use Case?

        I was thinking about that. For Lustre you would normally rely on a failover cluster of two MDSes with a single shared high-performance disk system (a SAN, e.g. Fibre Channel) used for the MDT, i.e. the actual data storage. There is no room for such an arrangement if you have one server with all the disks inside, directly attached to the CPUs' PCIe buses. Unless the servers in the cluster were virtual, running inside that one machine - but that is not much added resiliency, is it?

        Another possible scenario is a ZFS volume, shared as an iSCSI target for a cluster (i.e. two more machines for the MDSes, without such outrageous storage). However, you then lose a large part of the potential performance gains from NVMe and flash, so perhaps not so good either.

        On top of that, it would have to be a very, very large filesystem to need 200TB of MDT (i.e. metadata only). Still, I would be very happy to play with such storage, for an experimental Lustre setup, just to see how fast it is :)
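        To put a number on "very, very large", here's a back-of-envelope sketch, assuming roughly 4KB of MDT space per inode - a commonly quoted rule of thumb, not a spec value; real usage depends on striping and the MDT backend:

        ```python
        # Rough capacity check: how many files could a 200TB Lustre MDT describe?
        # Assumes ~4KB of MDT space per inode (rule-of-thumb assumption).
        TB = 10**12
        mdt_capacity = 200 * TB
        bytes_per_inode = 4 * 1024  # assumption, not a guaranteed figure
        max_files = mdt_capacity // bytes_per_inode
        print(f"~{max_files / 10**9:.0f} billion files")  # roughly 49 billion
        ```

        In other words, you would need tens of billions of files before a 200TB MDT started to fill up.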

        1. TechnicalBen Silver badge

          Re: Use Case? Simplicity?

          I have zero understanding of the current server or "cloud" storage systems in use. But would this allow some things that are in multiples due to size/space/connections to be reduced into smaller setups?

          For redundancy, you still only need 3-5 or so of a system at most. Diminishing returns set in at any scale over that (but of course you can go up to striping 7 or 9, or redundant failsafes etc).

          So would this kind of system allow those with massive cloud/streaming/etc services to reduce their server room of over X machines down to under X machines?

          Less power and less maintenance (no moving parts, smaller setup, easier cooling?) is still overhead that many would wish to reduce! But no idea if this setup provides any of those benefits? (I also see that it's mainly metadata storage... but again, would this allow some services to consolidate their possibly bloated system designs?)

          1. rmason Silver badge

            Re: Use Case? Simplicity?

            THIS THIS THIS.

            We don't have mental amounts of storage, but it's pretty high for a small-ish company (150 employees).

            We have two SANs serving as storage for various ESXi hosts. One is circa 24TB and the other circa 30TB, both with SSD cache.

            That's 8U gone in our rack, in a server room that's not really big enough for a dev firm.

            When they drop from the initial (no doubt mental) price, these will be absolutely incredible for us.

            Would work exactly as the current ones do. Drop 10GbE NICs in, connect to our storage network, bam - everything can access/use them as we wish. We are absolutely at capacity in terms of rack space. This would free up half a rack for me.

            1. Justin Clift

              Re: Use Case? Simplicity?

              > Would work exactly as the current ones do. Drop 10g nics in, connect to our storage network, ...

              You'd probably want faster than 10GbE for these. :)
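              A rough sketch of why, assuming each drive can sustain around 3 GB/s of sequential reads (an illustrative figure in line with similar PCIe 3.0 x4 SSDs, not a quoted PM983 spec):

              ```python
              # Aggregate flash read bandwidth of the box vs a 10GbE pipe.
              drives = 36
              per_drive = 3.0                 # GB/s per drive - illustrative assumption
              aggregate = drives * per_drive  # GB/s across all drives
              nic_10gbe = 10 / 8              # 10Gb/s link = 1.25 GB/s
              nic_100gbe = 100 / 8            # 100Gb/s link = 12.5 GB/s
              print(f"~{aggregate:.0f} GB/s of flash vs {nic_10gbe:.2f} GB/s per 10GbE port")
              ```

              Even with several 100GbE ports, the network - not the flash - would be the bottleneck.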

    2. Anonymous Coward
      Anonymous Coward

      Re: Use Case?

      I guess you would not need 'half a petabyte of SSD storage dedicated to a single server', along with probably about 7.6 billion other people. The difference between you and them is that they have not singled themselves out to tell the world about their lack of requirement for 'half a petabyte of SSD storage dedicated to a single server'.

      However there are some people who do want 'half a petabyte of SSD storage dedicated to a single server'.

    3. KSM-AZ

      Re: Use Case?

      Why would *anyone* need more than 640KB of memory in a PC? Just silly if you ask me.

    4. Tom Samplonius

      Re: Use Case?

      "Why would I need half a petabyte of SSD storage dedicated to a single server?"

      Obviously you aren't a hyperscaler. You know, those guys who now buy most of the computer components manufactured globally?

    5. Ian Michael Gumby Silver badge
      Boffin

      @AC ... Re: Use Case?

      Oh lots of good reasons.

      Consider that today's enterprises have a mixture of fast and slow data, and you need fast access to all of it if you want anything approaching real-time analytics.

      There's a ton of applications that could take advantage of this model. At a minimum... you would need three of them as part of a Hadoop cluster to handle fast ingest, or fast lookups from HBase / MapR-DB, or even as part of an Elasticsearch index. (In memory, with fast application-level swap to disk.)

      If you want to get into specialized apps, try dealing with live maps where you need to take in map attribute data, recompile the maps and push them out (e.g. variable speed limit roads, accidents, and traffic).

    6. StargateSg7 Bronze badge

      Re: Use Case?

      People who work with 4K and 8K video can EASILY get into multi-petabyte+ territory, so even this system won't be enough to satisfy the storage-space monsters!

      One of our render servers has TWO PETABYTES attached just for its own use. Add our offline digital tape-based storage, and we are now at over TEN EXABYTES! YES! That is over 10,000 petabytes, or 10,000,000 terabytes!

      Uncompressed 4K and 8K video is multi-gigabytes per second, so it is not uncommon to NEED so much storage space that the storage system costs start becoming INSANE!

      This type of Samsung system is IDEAL for online/local use in 4K/8K video editing and rendering systems. I can tell you, however, that even this Samsung system will be TOO SMALL for larger video jobs, ESPECIALLY if you are editing and rendering the UNCOMPRESSED 4K and 8K video streams needed to keep the premium image quality of high-end video.
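      For what it's worth, the raw numbers bear that out. A quick sketch, assuming 10-bit 4:4:4 at 60fps (one plausible high-end configuration; real pipelines vary widely):

      ```python
      # Uncompressed video data rates for 4K and 8K streams.
      def raw_rate_gb_s(width, height, fps=60, bits_per_pixel=30):
          """Raw stream rate in GB/s; 30 bits/pixel = 10-bit 4:4:4."""
          return width * height * fps * bits_per_pixel / 8 / 10**9

      print(f"4K: ~{raw_rate_gb_s(3840, 2160):.1f} GB/s")  # ~1.9 GB/s
      print(f"8K: ~{raw_rate_gb_s(7680, 4320):.1f} GB/s")  # ~7.5 GB/s
      ```

      At ~7.5 GB/s, one hour of uncompressed 8K is roughly 27TB - a few projects like that and half a petabyte disappears fast.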

  3. DougS Silver badge

    Hate to think about what this will cost

    AFAIK Samsung hasn't even released pricing on those SSDs...

    1. Anonymous Coward
      Anonymous Coward

      Re: Hate to think about what this will cost

      Any company that buys them won't even ask how much, just how fast.

    2. Tom Samplonius

      Re: Hate to think about what this will cost

      "AFAIK Samsung hasn't even released pricing on those SSDs..."

      With Microsoft, Amazon, Google and Apple buying most of the computer component production these days for their respective clouds, pricing for distribution to the peasants may never be done at all.

    3. Sandtitz Silver badge

      Re: Hate to think about what this will cost

      An almost identical ADATA box is claimed to cost $250K. Probably from the same factory this Supermicro comes from...

    4. Ian Michael Gumby Silver badge
      Boffin

      @DougS Re: Hate to think about what this will cost

      If you have to ask, you probably couldn't afford it. ;-)

      Yeah, I would want 3-5 of these boxes. I would imagine you could buy a small house for the price of one of these servers if maxed out. Probably $180K or more just on the drives alone.

  4. Ian Joyner

    Unannounced?

    Beware of unannouncements.

  5. defiler Silver badge

    How much for 3?

    ...because I'd be embarrassed about asking for only one, and I can divide.

  6. Hans 1 Silver badge
    Windows

    Give us 3.5" SSDs with mega capacity already; pretty sure you could cram 20TB into one of those ... nail the coffin shut on spinning rust ....

    1. Oneman2Many

      The problem with the 3.5" form factor is that you limit the number of interfaces, and thus the overall transfer rate.

      1. Bronek Kozicki Silver badge

        Another question: how much bandwidth do you want for this 20TB of data? With small form factor storage directly attached to the PCIe bus (the M.2-style format discussed here, or its older brother, 2.5" U.2 NVMe) you get some balance between capacity and bandwidth. On the other hand, a single SAS connector is not really that much, and there is no 3.5" form factor that attaches directly to the PCIe bus.
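        To illustrate, a back-of-envelope comparison using nominal link rates - SAS-3 at 12Gb/s with 8b/10b encoding, PCIe 3.0 at 8GT/s per lane with 128b/130b; real-world throughput is lower still:

        ```python
        # Time to read a hypothetical 20TB drive flat out over one link.
        TB = 10**12
        capacity = 20 * TB                    # bytes
        sas3 = 12e9 * (8 / 10) / 8            # ~1.2 GB/s usable per SAS-3 link
        pcie3_x4 = 4 * 8e9 * (128 / 130) / 8  # ~3.9 GB/s usable for PCIe 3.0 x4

        def drain_hours(rate_bytes_per_s):
            return capacity / rate_bytes_per_s / 3600

        print(f"SAS-3: ~{drain_hours(sas3):.1f}h, "
              f"PCIe 3.0 x4: ~{drain_hours(pcie3_x4):.1f}h")
        ```

        Over a single SAS-3 link the hypothetical 20TB drive takes more than three times as long to drain as over PCIe 3.0 x4, which is the imbalance between capacity and bandwidth in a nutshell.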

    2. Ian Michael Gumby Silver badge
      Boffin

      @Hans 1

      You do realize that with the smaller form factor... 36 of these drives will get you to half-petabyte scale.

      Spinning rust is the new tape. It will take 10-15 years before you start to see it disappear and even then it will still exist.

  7. Lord_Beavis
    Paris Hilton

    And the serious question again is...

    how much 4K pr0n will it hold?

    1. Ian Michael Gumby Silver badge
      Alien

      @Lord_Beavis... Re: And the serious question again is...

      You're behind the curve.

      We're talking 8K pr0n, and even then... lots of it. The only limitation would be your ability to manage all of those streams.... (Assuming you want to serve up the pr0n for $$$.)

      If this was for your own personal use... I hope that you're on the NHS or have a really good health insurance policy. (CTS, laser removal for the hair on your palms, vision checked and glasses, plus a seeing eye dog...)

  8. doubled1

    I/O

    Interesting box, but I did not see what kind of I/O this machine will support to the network, and without dual servers in the box there is no way to implement redundancy at the OS level - i.e. if your OS panics or blue-screens, it would seem you would lose access to all of the storage. It looks like it's targeted as a box you can run as a backup target, or as part of a larger distributed system, possibly as a node in a hyperconverged solution. Just a few thoughts.
