DSSD says Violin's right: SSD format is WRONG for flash memory

The SSD format is wrong for flash memory storage arrays. That is the message from DSSD, EMC's rack-scale, shared flash array development. The Register has seen pictures of DSSD's flash modules, and they are not disk-bay form factor SSDs. Rather, they resemble Violin Memory's VIMMs – Violin In-Line Memory Modules. Pure …

  1. Crazy Operations Guy

    PCIe storage makes me nervous

    Given that there is no controller between the servers and the storage in this architecture, I get very nervous. You now have a very, very high-speed bus connecting all of your systems with absolutely no security involved. A single compromised server means they all are compromised. Plus, given the way PCIe is built, any single compromise means that the compromised server has access to all of the RAM of every other connected machine (this is why I hate external interfaces with DMA).

    This platform is only as secure as the weakest system connected to it, assuming you could trust those machines in the first place.

    Security concerns aside, shaving milliseconds off of storage access is pointless in nearly every case. The biggest slow-downs in applications are inefficient code, improperly used caches, and terrible architecture. I've seen so many installations that were slow because, long ago, someone decided to do multi-tenancy on a database server, and rather than split things off they just kept throwing hardware at the problem. Eventually all of your web applications are hitting the same database server cluster, where each node now has 4x 16-core/32-thread processors, 1-2 TB of RAM and disks like the ones mentioned in the article, all to run hundreds of unrelated web applications. It would have been so much more cost-effective to run 1-2 VMs per application to host each database.

  2. rainjay

    Re: PCIe storage makes me nervous

    I thought they mentioned having multiple controller cards per array instead of having controllers on each flash card. How would this compare to an in-memory database system? Speed looks a bit slower, but data integrity is a lot better. As for security, are there OSes that run security checks on PCIe hardware, or is anything on that bus considered trusted?

  3. Anonymous Coward
    Anonymous Coward

    Re: PCIe storage makes me nervous

    I agreed with everything you said, until you ruined it with this:

    "Security concerns aside, shaving milliseconds off of storage access is pointless in nearly every case."

  4. jason 7

    Re: PCIe storage makes me nervous

    Well, compared to a major security snafu, it kind of is. Security is now the no. 1 issue, but I think some tech guys still treat performance as priority no. 1, 2, 3 and 4.

    No one has really suffered via a "storage system could be a millisecond faster" article in the tech press.

    I would prefer to know my system was as watertight as it could be over eking out another 0.5% of performance. Yeah, sure, that 0.5% could mean a lot of money, but having your customers' data spaffed all over the web will cost far more.

  5. Calleb III

    Re: PCIe storage makes me nervous

    Agree with the security concerns, but you are way off on the second part. There are plenty of instances where milliseconds count, and more often than not buying better hardware is cheaper than rewriting the code. It's like having an issue with the engine of your car: the mechanic quotes you £50k for repairs when you can buy a new car for £40k, so which will you choose?

    As for the shared database servers: there is nothing wrong with them if done right, and the major concern here is the cost of the DB engine licence, where in most cases there is a significant saving in running multiple instances on the same iron. Besides, your solution of throwing VMs (and the underlying hardware) at the problem instead of just hardware is not ideal. There are uses for both shared and dedicated instances; there is no cookie-cutter solution.

  6. Grikath Silver badge

    Re: PCIe storage makes me nervous

    "This platform is only as secure as the weakest system connected to it, assuming you could trust those machines in the first place."

    As is any platform, or setup. Especially since the biggest security risks are users, followed by bugs/"features"/etc. in the software you're running on the servers.

    Love the paranoia, but with that attitude you can't trust anything or anyone, and won't ever get anything... y'know... done.

  7. Lusty Silver badge

    Re: PCIe storage makes me nervous

    "It's like having an issue with the engine of your car: the mechanic quotes you £50k for repairs when you can buy a new car for £40k, so which will you choose?"

    But to stretch the analogy, the new car is the same Vauxhall Nova model you had before, from the same year, and is still rusty and crap but some wideboy has fitted it with NOS so it goes faster than the old one.

  8. Anonymous Coward
    Thumb Down

    Yes our proprietary format is best....

    ....for us.

  9. Richard D

    SanDisk has also gone down a similar path with its InfiniFlash product

  10. Axel Koester

    Finally seeing the light!

    One by one, enterprise flash vendors are finally rejecting SSDs for their next-generation all-flash-array designs. Good idea: who needs disk emulation in a disk-less device?

    Disclaimer: I work for IBM as a storage technologist, and I was already admiring Texas Memory Systems' RamSan before it became part of IBM. Today's IBM FlashSystems have been designed this way since 2012. Nice reverence!

    Here are my top 5 reasons why SSDs are not ideal:

    1. As a developer, you don't want third-party firmware to steal time cycles, hide vital information from the chip level, or do other unexpected things. You'll revert to SSDs only if you don't have the time to develop good endurance-enhancing algorithms of your own.

    2. Wear-levelling algorithms work better the more chips they cover. Custom modules carry many chips, and the algorithms can cross module boundaries. In contrast, space inside an SSD is very limited, and so is the number of NAND flash pages available for local endurance optimization. SSDs will wear out quicker than necessary.

    3. Another pain is the RAID controller required to protect against spontaneous failures: you don't gain stability or lower latency by adding internal interfaces, especially with third-party elements. Without SSDs, one can *combine* endurance optimization with hot-swap module protection in RAID-5 style, a very efficient code stack that fits in lightweight FPGA gate logic.

    4. SSDs not only wear out quicker, they will always be phased out before their reserve capacity pool is depleted, because disk slots don't support variable capacity by definition. Basically you're throwing away 101% healthy chip capacity. Custom chip modules don't have that restriction; they can "help each other out". This results in a significantly longer mixed lifetime.

    5. Enterprise storage should always strive to become more reliable, faster, cheaper, and lighter on power consumption. We achieve this by removing any piece of hardware or code in the data path that can be consolidated into something smaller. As a designer, once you're talking to the raw chips, you also gain the freedom to select the chip manufacturer of *your* choice, not whichever one the commodity SSD manufacturer currently finds cheapest.
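Point 2 above (wear levelling improving with pool size) can be illustrated with a toy simulation. This is only a sketch, assuming an idealised "write to the least-worn block" policy, not any vendor's actual algorithm:

```python
import heapq

def max_wear(num_blocks, total_writes):
    """Greedy wear levelling: every write lands on the least-worn block.
    Returns the peak erase count after all writes are placed."""
    wear = [0] * num_blocks
    heapq.heapify(wear)
    for _ in range(total_writes):
        # Pop the least-worn block, wear it once, push it back.
        heapq.heappush(wear, heapq.heappop(wear) + 1)
    return max(wear)

# Same workload, two pool sizes: one SSD's worth of chips vs a whole shelf.
writes = 100_000
print(max_wear(64, writes))    # small pool: peak wear 1563 erases per block
print(max_wear(1024, writes))  # 16x bigger pool: peak wear only 98 erases
```

With perfect levelling, peak wear is roughly total_writes / num_blocks, so spreading the same workload over more chips directly lowers the worst-case erase count, which is the quantity that kills NAND.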

    I'm also reading:

    "The biggest slow-down in applications is inefficient code, improperly used cache, and terrible architecture".

    Agreed. But we also find that the cheapest countermeasure (and sometimes the only one) is to throw faster hardware at the problem: 99 out of 100 FlashSystem clients seek the best latency as a quick solution for slow applications, while less than 1% are already fully utilizing the maximum throughput.

    Be careful: we're talking about 99 µs and 10 GB/s for a single 2U FlashSystem 900 here. And there are up to eight such 57 TB drawers in a clustered FlashSystem V9000 configuration. The latter also supports Real-Time Compression modules, packing up to five times more Oracle DB data into the same space... which equals 2.2 PB of high-performance databases in a single rack.

    We're expecting that number to double with the new chip generation becoming broadly available. Rest in peace, disk arrays.

  11. Anonymous Coward
    Anonymous Coward

    Oh dear

    You mean you go to all that trouble to optimize the flash module and then go and stick traditional FC switching and SVC in front of it :-)

  12. tom 99

    Re: Oh dear

    Exactly. And once you want a snapshot/flash copy, you still need to do copy-on-first-write through the back-end FC, transferring blocks back and forth... all those flash optimizations are wasted with such an architecture. Not to mention the lack of dedupe.

  13. Axel Koester

    Re: Oh dear

    Invalid argument, most likely influenced by 'FUD'... IBM FlashSystem response time including SVC is still a very quick 200 µs; add Real-time Compression for Oracle, still 200 µs; with SVC, 5:1 Real-time Compression and active snapshots, it's... 200 µs. So much for transferring FC blocks back and forth. When the non-SSD back end is really fast, you can do a lot with less.

    How much will I get from an XtremIO solution at equal TCO? (3x slower response time, 0.7 times the capacity? Therefore database workloads are not recommended, only VDI, where dedupe has a chance and higher latency is not harmful? SAP benchmarks, anyone?)

    Note that the FlashSystem V9000 does not need a separate SVC anymore (only the predecessor did). There's also a FlashSystem 900 CAPI adapter option that feeds data right into the CPU cache line - C in CAPI stands for 'Coherent'. The same principle as NVMe, but CPU synchronous, kind of "Nearline RAM". Great for big key-value stores... google "IBM Data Engine for NoSQL".

    Cheers!

  14. Anonymous Coward
    Anonymous Coward

    Re: Oh dear

    Sorry, but just because you now hide the SVC behind a bezel doesn't mean it isn't there, and yes, it is still a separate box requiring a multi-stage firmware update to cover both SVC and RamSan.

  15. Trevor_Pott Gold badge

    Proprietary flash modules

    Just think about how much over market price they can now charge for storage! It'll make the disk array days look tame.

  16. Dave Nicholson - EMC

    Re: Proprietary flash modules

    What do they teach in Economics Class in Canada? French?

    Either DSSD offers greater value than a bunch of NAND-in-a-CAN, or it doesn't. The market will determine this and pay accordingly.

  17. Trevor_Pott Gold badge

    Re: Proprietary flash modules

    Bullshit fantasy drivel.

    Proprietary players will subsidize loss leaders to drive competitors out of the market. Once they have a hold on the juiciest segment of the market they feel they are likely to get, they'll turn the knobs and squeeze. Lock-in will mean that customers can't go anywhere and proprietary components (likely combined with the storage equivalent of HP chipping their ink cartridges) will mean that the costs per GB of proprietary flash will be astronomical compared to standardized flash.

    Which is exactly the same shit that those very same storage vendors pulled with spinning disks. Which led to the current storage wars and the explosion of startups offering new ways to do storage and eating into the margins (and market share) of the spinning-rust titans.

    Of course, because the dominant players have already been through this before, they will be a lot more proactive about killing off potential competitors than they were in the past. (This is already beginning to be seen.) The margins on disk-based arrays have plummeted, but there is no way that the big fat storage daddies are going to let this happen to flash.

    Nosiree.

    That blinkered, Americanized - almost Randian - view of economics, which relies on faith and carefully ignores abuse of market dominance, is a lie. As big a lie as "trickle-down economics", which is another in the pack of scurrilous economic bullshit fed to the mentally incompetent to keep them pliant.

    No. Proprietary flash modules give proven market dominance abusers a means to abuse their dominant position in the market. And the instant that they've managed to leverage their dominant position in the disk array market to achieve a dominant position in the flash array market, they'll start to squeeze.

    That's the Oracle school of economics, and it's fucked right the hell up. It's also the only playbook that tech megacorporations work from.
