Production units may run ~30% slower than designed....
Supermicro has a supernaturally dense thin server with up to half a petabyte of flash using unannounced Samsung SSDs. The SSG-1029P-NMR36L is your average unremarkable 1U rackmount job in the company's SuperServer range, except that it has up to 36 front-load, hot-swap Samsung PM983 NVMe SSDs. It is a dual-socket server with …
Considering that you don't know which two CPUs will go into this unit, you can't say that.
They have 'reasonably' priced 18-core chips, each with 36 threads, for a total of 72 threads to work with across the two sockets.
Even if you lose 30% of performance in the CPU... you don't necessarily lose 30% of your performance overall. It depends on what you're doing.
The interesting thing... what's the cost? Sure, the drives aren't on the market yet, but build out a rack of these guys... (That would be 38-40 of them, depending on heat/cooling/power and ToR switch capacity...)
Now you've got a lot of fast storage. Of course in real life, you'd mix and match these across a row of racks and fill out the rest with other gear ... Very nice...
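The rack build-out above is easy to put in numbers. A minimal sketch, assuming 38 usable 1U slots per rack (the poster's lower figure) and roughly 16 TB per drive — Samsung hasn't published specs, so the per-drive capacity is my assumption:

```python
# Back-of-the-envelope maths for a rack of SSG-1029P-NMR36L boxes.
# Assumptions (mine, not from the article): 16 TB per NF1 drive,
# 38 servers per rack after ToR switch / power / cooling overhead.

DRIVES_PER_SERVER = 36
TB_PER_DRIVE = 16          # assumed; unannounced drives
SERVERS_PER_RACK = 38      # 42U rack minus switch and headroom

tb_per_server = DRIVES_PER_SERVER * TB_PER_DRIVE
tb_per_rack = tb_per_server * SERVERS_PER_RACK

print(f"per server: {tb_per_server} TB (~{tb_per_server / 1000:.2f} PB)")
print(f"per rack:   {tb_per_rack} TB (~{tb_per_rack / 1000:.1f} PB)")
```

At those assumed capacities, one server lands just over half a petabyte and a full rack in the low tens of petabytes.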
I was thinking about it. For Lustre you would normally rely on a failover cluster of two MDSes with a single shared high-performance disk system (a SAN, e.g. Fibre Channel) used for the MDT, i.e. the actual metadata storage. There is no room for such an arrangement when one server has all the disks inside, directly attached to the CPUs' PCIe buses. Unless the servers in the cluster were virtual, running inside that one machine - but that is not much added resiliency, is it?
Another possible scenario is a ZFS volume shared as an iSCSI target for a cluster (i.e. two more machines as MDSes, without such outrageous storage). However, you then lose a large part of the potential performance gains from NVMe flash, so perhaps not so good either.
On top of that, it would have to be a very, very large filesystem to need 200TB of MDT (i.e. metadata only). Still, I would be very happy to play with such storage in an experimental Lustre setup, just to see how fast it is :)
I have zero understanding of the current server or "cloud" storage systems in use. But would this allow some things that are in multiples due to size/space/connections to be reduced into smaller setups?
For redundancy, you still only need 3-5 or so of a system at most. Diminishing returns set in at any scale beyond that (but of course you can go up to striping across 7 or 9, redundant failsafes, etc.).
So would this kind of system allow those with massive cloud/streaming/etc services to reduce their server room of over X machines down to under X machines?
Less power and less maintenance (no moving parts, smaller setup, easier cooling?) are overheads that many would wish to reduce! But I have no idea if this setup provides any of those benefits? (I also see that it's mainly metadata storage... but again, would this allow some services to consolidate their possibly bloated system designs?)
THIS THIS THIS.
We don't have mental amounts of storage, but it's pretty high for a small-ish company (150 employees).
We have two SANs running as storage for various ESXi hosts. One is circa 24TB and the other circa 30TB, both with SSD cache.
That's 8U of our rack gone, in a server room that's not really big enough for a dev firm.
When they drop from the initial (no doubt mental) price, these will be absolutely incredible for us.
Would work exactly as the current ones do. Drop 10G NICs in, connect to our storage network, bam - everything can access/use them as we wish. We are absolutely at capacity in terms of rack space. This would free up half a rack for me.
I guess you would not need 'half a petabyte of SSD storage dedicated to a single server', along with probably about 7.6 billion other people. The difference between you and them is that they have not singled themselves out to tell the world about their lack of requirement for 'half a petabyte of SSD storage dedicated to a single server'.
However there are some people who do want 'half a petabyte of SSD storage dedicated to a single server'.
Oh lots of good reasons.
Consider that today's enterprises have a mixture of fast and slow data, and you need fast access to both if you want to do anything like real-time analytics.
There's a ton of applications that could take advantage of this model. At a minimum... you would need three of them as part of a Hadoop cluster to handle fast ingest, or fast lookups from HBase / MapR-DB, or even as part of an Elasticsearch index. (In memory, with fast application-level swap to disk.)
If you want to get into specialised apps, try dealing with active maps where you need to take in map attribute data, recompile the maps and push them out (e.g. variable speed limits, accidents and traffic).
People who work with 4K and 8K video can EASILY get into multi-petabyte territory,
so even this system won't be enough to satisfy the storage-space monsters!
On one of our render servers, there are TWO PETABYTES attached just for its own use. Add our offline digital tape-based storage, and we are now at over TEN EXABYTES! YES! That is over 10,000 PETABYTES, or 10,000,000 terabytes!
Uncompressed 4K and 8K video is multi-gigabytes per second, so it is not uncommon to NEED so much storage space that the storage system costs start becoming INSANE!
This type of Samsung system is IDEAL for online/local use in 4K/8K video editing and rendering systems. I can tell you, however, that even this Samsung system will be TOO SMALL for larger video jobs, ESPECIALLY if you are editing and rendering the UNCOMPRESSED 4K and 8K video streams needed to keep the premium image quality of high-end video.
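"Multi-gigabytes per second" for uncompressed video is easy to check. A minimal sketch, assuming 24-bit colour (3 bytes per pixel) and 60 fps — real pipelines often use 10- or 12-bit colour and other frame rates, so these are illustrative numbers only:

```python
# Uncompressed video data rates under assumed parameters:
# 3 bytes per pixel (24-bit RGB), 60 frames per second.

def gbytes_per_second(width, height, fps=60, bytes_per_pixel=3):
    """Raw data rate of an uncompressed stream, in GB/s (decimal)."""
    return width * height * bytes_per_pixel * fps / 1e9

print(f"4K: {gbytes_per_second(3840, 2160):.2f} GB/s")
print(f"8K: {gbytes_per_second(7680, 4320):.2f} GB/s")
```

Roughly 1.5 GB/s for 4K and 6 GB/s for 8K, per stream — and an hour of one 8K stream is already over 20 TB.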
"AFAIK Samsung hasn't even released pricing on those SSDs..."
With Microsoft, Amazon, Google and Apple buying most of the computer component production these days for their respective clouds, pricing for distribution to the peasants may never be done at all.
Another question: how much bandwidth do you want for this 20TB of data? With small-form-factor storage directly attached to the PCIe bus (the M.2 discussed here, or its older brother, 2.5" U.2 NVMe) you get some balance between capacity and bandwidth. On the other hand, a single SAS connector doesn't carry all that much, and there is no 3.5" form factor directly attached to the PCIe bus.
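The capacity/bandwidth trade-off can be sketched as the time to stream that 20 TB through a single device link. The link rates below are nominal per-device figures I'm assuming for illustration (PCIe 3.0 x4 NVMe ~3.9 GB/s, SAS-3 ~1.2 GB/s, SATA ~0.6 GB/s), not measured numbers:

```python
# How long one device link takes to read 20 TB, at assumed nominal rates.

def hours_to_read(tb, gb_per_s):
    """Hours to move `tb` terabytes at `gb_per_s` gigabytes/second."""
    return tb * 1000 / gb_per_s / 3600

links = {
    "NVMe (PCIe 3.0 x4)": 3.9,   # GB/s, assumed nominal
    "SAS-3 (12 Gb/s)": 1.2,
    "SATA (6 Gb/s)": 0.6,
}

for name, rate in links.items():
    print(f"{name}: ~{hours_to_read(20, rate):.1f} h to read 20 TB")
```

Under these assumptions a single NVMe link drains 20 TB in under two hours, versus most of a working day for SATA — which is the point about big 3.5" drives behind one slow connector.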
You're behind the curve.
We're talking 8K pr0n, and even then... lots of it. The only limitation would be your ability to manage all of those streams... (assuming you want to serve up the pr0n for $$$).
If this was for your own personal use... I hope that you're on the NHS or have a really good health insurance policy. (CTS, laser removal for the hair on your palms, vision checked and glasses , plus a seeing eye dog...)
Interesting box, but I did not see what kind of network I/O this machine will support, and without dual servers in the box there is no way to implement redundancy at the OS level - i.e. if your OS panics or blue-screens, it would seem you would lose access to all of the storage. It looks like it's targeted as a box you can run as a backup target, or as part of a larger distributed system, possibly as a node in a hyperconverged solution. Just a few thoughts.
Biting the hand that feeds IT © 1998–2019