FLASH better than DISK for archiving, say academics. Are they stark, raving mad?

An academic paper claims flash could be better than disk for archiving. So just how did this unlikely result come about? The SSRC* paper, An Economic Perspective of Disk vs. Flash Media in Archival Storage, was published earlier this year at the 22nd IEEE International Symposium on Modelling, Analysis, and Simulation of …

  1. Steve Knox
    Endurance?


    Isn't the endurance issue with flash caused by degradation from multiple overwrites?

    If so, then a write once, read rarely, overwrite rarely* use pattern should be fine at all but the most sensitive endurance level.

    Or is there degradation over time (or reads) as well?

    *Which, for those who like to stretch acronyms to the limit, could be called Write Once, Read Rarely, overwrite rarelY. I'll just go and chastise myself for thinking that up, thank you very much.

    1. Peter Gathercole Silver badge

      Re: Endurance?

      Flash memory degrades over time due to the migration of electrons as a result of entropy. At the 2013 Flash Memory Summit, a Facebook representative suggested that the "JEDEC JESD218A endurance specification states that if flash power off temperature is at 25 degrees C then retention is 101 weeks". Flash memory retains data best if the controller is powered up once in a while to scan and correct any bit errors that creep in.

      I've always been dubious of flash memory retaining the data for any extended time, and I would be incredibly sceptical about any claim that says that current flash memory technologies could be used to reliably keep data for decades, even if "Flash drive controllers, currently mostly optimised for performance, can be optimised for endurance instead".
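The "power up once in a while to scan and correct" idea amounts to a scrub pass over an error-correcting code. Below is a toy sketch in Python using Hamming(7,4) purely because it fits in a few lines; real flash controllers use much stronger BCH or LDPC codes, and the data values here are invented.

```python
# Toy scrub pass: re-read a stored codeword, correct a single leaked bit
# with Hamming(7,4) ECC, and rewrite it. Illustrative only; real NAND
# controllers use BCH/LDPC over whole pages.

def hamming74_encode(nibble):
    """Encode 4 data bits [d1,d2,d3,d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(word):
    """Return (corrected codeword, 1-indexed flipped-bit position or 0)."""
    p1, p2, d1, p3, d2, d3, d4 = word
    s1 = p1 ^ d1 ^ d2 ^ d4
    s2 = p2 ^ d1 ^ d3 ^ d4
    s3 = p3 ^ d2 ^ d3 ^ d4
    syndrome = s1 + 2 * s2 + 4 * s3
    fixed = list(word)
    if syndrome:
        fixed[syndrome - 1] ^= 1
    return fixed, syndrome

# One scrub cycle: a cell leaks charge (bit flip), the scan repairs it.
codeword = hamming74_encode([1, 0, 1, 1])
decayed = list(codeword)
decayed[5] ^= 1                     # simulated charge leakage in one cell
repaired, pos = hamming74_correct(decayed)
assert repaired == codeword and pos == 6
```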

    2. fnj

      Re: Endurance?

      To answer the first question: no, and the second question: yes.

      Endurance measures the amount of data you can write and overwrite. Retention measures the calendar lifetime of data which has been written. An SSD which is unplugged and sitting on the shelf is a time bomb, due to charge leakage. The leakage is strongly temperature dependent, but it will inexorably occur even at room temperature or below.
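That temperature dependence is commonly modelled with an Arrhenius relation, which is also how JEDEC-style retention specs trade storage temperature against retention time. A rough sketch, assuming the frequently quoted 1.1 eV activation energy for NAND retention (a generic figure, not a measurement of any particular device):

```python
import math

# Arrhenius acceleration factor: how much faster charge leaks at a hot
# storage temperature versus a reference temperature. The 1.1 eV
# activation energy is a commonly quoted value for NAND retention,
# not a per-device datasheet number.
K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(t_ref_c, t_stress_c, ea_ev=1.1):
    t_ref = t_ref_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_ref - 1.0 / t_stress))

# Retention of ~101 weeks at 25 C shrinks sharply in a hot loft at 55 C:
af = acceleration_factor(25, 55)
print(f"55 C leaks roughly {af:.0f}x faster than 25 C")
print(f"101 weeks at 25 C is roughly {101 / af:.1f} weeks at 55 C")
```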

      It would be foolish to count on data retention for more than 1-3 years on an unplugged SSD. It doesn't have to be that poor, but there is a definite tradeoff between bit density and data retention. There are microcontrollers whose flash memory is absolutely guaranteed to retain data for 10-20 years. The use case there is very different. They are basically programmed once and then expected to operate without attention for a long time.

      Not that even 10-20 years is particularly impressive. Good quality enterprise tape can match or exceed that.

      In comparison, fresh disk drives which are written once or a few times and then unplugged will generally have their data perfectly intact and readable 5 or even 10 years later.

      So in summary, though "stark raving mad" is a bit harsh, I would say "highly ignorant" is appropriate.

      1. Anonymous Coward

        Re: Endurance?

        "In comparison, fresh disk drives which are written once or a few times and then unplugged will generally have their data perfectly intact and readable 5 or even 10 years later."

        I have had stored disk drives that were dead when they were needed again after a couple of years or so.

        Possibly the weak components are "wet" electrolytic capacitors which degrade if left dormant. Presumably that would be, or already is, addressed with improved versions?

        1. BlartVersenwaldIII

          Re: Endurance?

          A big problem with drives that are powered off for a long time is the bearings (either of the spindle or the actuator) sticking. It becomes far worse when a drive that's seen some heavy use is kept offline for a while (since then you're talking about an already slightly worn bearing) but it can happen with new drives too.

          Anecdotally, I dusted off a 120GB OCZ Agility drive the other day that must have been powered off for at least 3 years. The data seemed to be 100% intact...

          1. Mark 65

            Re: Endurance?

            Surely if it is archive you want a sort of drive where the data is just burned in and read only preferably with no, or minimal, moving parts?

            1. John Tserkezis

              Re: Endurance?

              "Surely if it is archive you want a sort of drive where the data is just burned in and read only preferably with no, or minimal, moving parts?"

              It depends entirely on the technology you're using. Regular writeable CDs and DVDs won't last too long, unless you use the gold-based archival CDs (claimed to last 100 years).

              Once you get to flash or spinning rust drives, they're a lot more complex, and a lot more can go wrong.

              The current consensus is that tape will last the longest, even if it has some downsides (data access time can be a bit rude if you store off-site). But the article is based entirely on the premise that they want to access that stored data quickly too.

              So their "best" suggestion of flash is suited to that usage case. If other factors are more important to you, such as raw data retention life and cost, then as the saying goes, "your mileage may vary".

        2. razorfishsl

          Re: Endurance?

          Yep… modern drives use dry tantalum caps; there is a bigger threat from 'stiction' or rubber degradation on the drive seals.

        3. John Tserkezis

          Re: Endurance?

          "Possibly the weak components are "wet" electrolytic capacitors which degrade if left dormant. Presumably that would be, or already is, addressed with improved versions?"

          Yes, newer electrolytics do age better (well, a little better at least), but most if not all drives now use tantalum types, which would last longer than the media they're supporting anyway.

    3. razorfishsl

      Re: Endurance?

      I spent a year researching die-level NAND flash for a dissertation. What I found out completely shocked me, so much so that I never use SSDs for critical data storage or for booting my OS.

      ANY sort of access to the NAND flash array causes degradation to the data on the device (randomly!). Reads, writes, and even drift over time (as the level amplifiers start to drift, new and old data charge levels drift further apart).

      One thing they don't mention is that you can lose a complete chip of data if the read amps go out of spec.

      MLC is potentially the kiss of death for your data: it stores four different charge levels in one cell, rather than a simple binary level.

      So rather than storing '1 or 0', a cell can store '00', '01', '10' or '11', which means your differential read/write amps have to be SPOT on to clearly distinguish the four levels. If the amps go out of spec… so does the data.
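A toy model of that read process makes the failure mode concrete. The threshold voltages and Gray-coded state map below are invented for illustration, not taken from any datasheet:

```python
# Toy MLC read: the cell's analogue charge level is compared against
# three thresholds to recover 2 bits. Voltages and the Gray-coded
# mapping are illustrative only.

THRESHOLDS = [1.0, 2.0, 3.0]            # volts, made-up values
GRAY_STATES = ["11", "10", "00", "01"]  # an erased cell reads as '11'

def read_cell(voltage):
    """Count how many thresholds the cell voltage exceeds, map to 2 bits."""
    level = sum(voltage > t for t in THRESHOLDS)
    return GRAY_STATES[level]

# A freshly programmed '00' cell sits safely in the middle of its window...
assert read_cell(2.5) == "00"
# ...but a little charge leakage (or amp drift, equivalently) drops it
# across a threshold and it silently reads back as the neighbouring state.
assert read_cell(1.9) == "10"
```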

      It is VERY hard to lose a complete disk drive of data (relatively speaking), and there is usually a warning; this is not the case with NAND flash.

      Worse, some 'scumbag' companies are deliberately selling defective product into the market under a 'special' brand name; unfortunately their product and details are covered under NDA.

      All I can say is that I was completely shocked at what crap is ending up on the market. Some of the product is only good for 50-100 writes (mainly crap from China, BUT the dies are from 'reputable' known companies, so the internal die IDs read as 'quality' product when queried electronically, [you can ask a Nand chip 'who made you'])
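The "who made you" query is the NAND READ ID command (opcode 0x90 in the ONFI command set): the chip returns a byte sequence whose first byte is the JEDEC manufacturer code. A sketch of decoding that first byte; the table lists a few well-known codes, and the sample response bytes are invented:

```python
# Decode the first bytes of a NAND READ ID (0x90) response. The
# manufacturer table is a small subset of well-known JEDEC codes;
# the example response below is hypothetical.

JEDEC_MAKERS = {
    0x2C: "Micron",
    0x89: "Intel",
    0x98: "Toshiba",
    0xAD: "SK Hynix",
    0xEC: "Samsung",
}

def identify(id_bytes):
    """Return (manufacturer name, device code) from a READ ID response."""
    maker = JEDEC_MAKERS.get(id_bytes[0], "unknown")
    device_code = id_bytes[1]
    return maker, device_code

# Hypothetical five-byte response from a READ ID cycle:
maker, dev = identify(bytes([0xEC, 0xD3, 0x51, 0x95, 0x58]))
assert maker == "Samsung"
```

Which is exactly the commenter's point: the die ID can read as a 'quality' part even when the die itself failed the original maker's binning, because the ID only tells you who fabbed the silicon, not what grade it is.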

      1. John Tserkezis

        Re: Endurance?

        "[you can ask a Nand chip 'who made you']"

        Remember, there are fakes around. Writeable CDs and DVDs are plagued by this problem: you read the manufacturer ID as someone reputable, but in fact it was made in someone's backyard.

        I've seen fake transistors (low-spec parts replacing the proper silicon), fake ICs (that actually did nothing), and even tops of ICs sanded off with new "regular label" printing on top. They worked, but you had no idea what black magic (or running hamsters) was inside, and especially not why it was "rebranded" in the first place.

  2. NickHolland

    more than one way to fail ...

    They seem to feel there is only one or two ways flash fails -- excessive writes on a cell and fading of data. By this measure, CPUs, RAM and other electronic chips should NEVER fail, as they are subject to neither. Real world shows otherwise, of course.

    I've heard too many reports of flash and SSD failures far too early for excessive writes or data fading to be the cause. I'm not seeing anything that makes me trust SSD more than spinning rust, except in physically harsh conditions. (Sadly, I'm not sure new hard disks are going to live as long as older-tech drives did.)

    Ten+ year storage guarantee? That sounds like the floppies that were made and sold with a "lifetime guarantee": I'm sure there's NO assistance in recovering your data and NO liability for the lost data. The manufacturer suspects that in three years you would rather buy a new drive with several times the storage than get your old drive replaced. I suspect this guarantee is rather hollow. And how do you tell how long something you invented last month is really going to last in the real world? Yes, you can tell me all about ageing tests... but we have seen how well those actually work in real life.

    And let's not forget tin whiskers and bad capacitors...seems not much is lasting very long anymore.

    1. razorfishsl

      Re: more than one way to fail ...

      That's the problem, there are too many salesmen with their bullS**t.

      There are massive amounts of peer-reviewed data on NAND flash failures, and they are not only related to 'wear out' or 'fade' but also to read/write disturbs.

      That's why I consign any 'non-peer reviewed data' to the WPB ( waste paper basket)

      Examples of 'other' failure modes.

      'Techniques for Disturb Fault Collapsing'

      'Program Disturb Phenomenon by DIBL in MLC NAND Flash Device'

      'Study of Stored Charge Interference and Fringing Field Effects in Sub-30nm Charge-Trapping NAND Flash'

      'Reliability Issues and Models of sub-90nm NAND Flash Memory Cells'


  3. ifadams

    More to the story....

    So this work had its precursor in a tech report which addresses *some* of the concerns folks have raised above.

    The idea being that there are a lot of small benefits that add up to flash or other SSD technologies being a better medium for longer-term storage than disk or tape in certain scenarios. It also doesn't assume a completely unplugged drive, but rather a lightly or self managed one that periodically self checks and audits the data, refreshing as necessary.

    On the write endurance front, if the data isn't written that often it becomes a moot point. Even crappy SSDs have endurances of at least 10s of writes, which with proper wear levelling is a ton of overwriting for a drive in an archival scenario.
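The arithmetic behind that "moot point" is simple enough to sketch. The figures below are illustrative; the 10,000-cycle rating is the conservative value quoted later in this thread:

```python
# Back-of-envelope endurance lifetime: with wear levelling, writes are
# spread over the whole device, so time-to-wear-out is simply
# (rated P/E cycles) / (full-device overwrites per day). Illustrative
# figures only, not from any datasheet.

def years_to_wearout(pe_cycles, device_overwrites_per_day):
    return pe_cycles / device_overwrites_per_day / 365.25

# A conservative 10,000-cycle rating, with the whole device rewritten
# every single day, still takes ~27 years to exhaust:
print(f"{years_to_wearout(10_000, 1):.1f} years")
# A more archival workload, rewriting the device once a month:
print(f"{years_to_wearout(10_000, 1 / 30):.0f} years")
```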

    Read disturb can be an issue, as repeated reads can have a weak programming effect.

    Experimental data is pretty sparse on flash retention times, but I've seen a few papers claim 10+ years under ideal circumstances. But, like I said earlier, the authors really don't assume totally neglected media.

    1. razorfishsl

      Re: More to the story....

      I would have to take exception to at least one of the points you make reference to in your paper:

      'Using Storage Class Memory for Archives with DAWN, a Durable Array of Wimpy Nodes'

      "Even assuming data is overwritten daily, it would take over 25 years for a conservative write endurance of 10,000 cycles to be exceeded [9]. Of greater concern are the issues of read disturb and data retention."

      The only mention I can see of 10,000 is related to the 'latency' tests [9]:

      'Empirical Evaluation of NAND Flash Memory Performance'

      which goes on to state (in the next paragraph):

      "Due to the high variance of the measured endurance values, we have not collected enough data to draw strong inferences, and so report general trends instead of detailed results."

      More of an issue is the fact that, since this was a 'latency' test for device speed, the writes and reads would have been in a highly compact burst on a 'new'-ish chip (even for the de-soldered devices).

      More worrying…

      They [9] state they measured "3.2 Endurance" by:

      "Program/erase endurance was tested by repeatedly programming a single page with all zeroes, and then erasing the containing block. Although rated device endurance ranges from 10^4 to 10^5 program/erase cycles, in Figure 5 we see that measured endurance was higher, often by nearly two orders of magnitude, with a small number of outliers."

      So basically this 10,000 writes was performed in a burst with values of 00 & FF (NAND flash erases to FF),

      which is not a true test of an MLC device, since the test only exercises 2 of the possible 4 states the cell can store, AND the test is angled to minimise read/write disturbs from adjacent cells, not to mention that those are the two BEST values for the read/write amps to pick out. (I say that because the author appears to be fully aware of how MLC devices function (2.1 [9]) but uses a 'non-standard' representation for his test data: all '1' or '0'.)

      There is also no mention of:

      1. The block number they chose in their 'single' block test (the result makes me think it was block 0, which all manufacturers give the highest R/W rating).

      2. The ambient conditions the tests were performed under.

      3. No mention of the read IDs of the chips tested in [9] (manufacturers' part numbers on the case are NOT an indication of the enclosed die; they might have all been from the same manufacturer).

      I would 'like' to have seen the endurance 'test' performed with a range of test data:

      1. 'True random data'

      2. 'Marching ones'

      3. 'Marching Zeros'

      Really, I would have expected a far better testing regime from the paper [9], and I have some concerns about the conclusions.
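For what it's worth, the three pattern sets requested above are straightforward to generate. A sketch with an illustrative (much smaller than real) page size:

```python
import random

# Generate the three classic memory-test data sets: random data,
# marching ones, and marching zeros. Unlike an all-zeros page, these
# exercise all of an MLC cell's states and stress adjacent-cell
# disturbs. PAGE_BYTES is illustrative; real NAND pages are 2-16 KiB.
PAGE_BYTES = 16

def random_pattern(seed=0):
    """One page of pseudo-random data (seeded for reproducibility)."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(PAGE_BYTES))

def marching_ones():
    """A single 1 bit walking through an otherwise all-zeros page."""
    for bit in range(PAGE_BYTES * 8):
        page = bytearray(PAGE_BYTES)
        page[bit // 8] = 1 << (bit % 8)
        yield bytes(page)

def marching_zeros():
    """A single 0 bit walking through an otherwise all-ones page."""
    for bit in range(PAGE_BYTES * 8):
        page = bytearray(b"\xff" * PAGE_BYTES)
        page[bit // 8] ^= 1 << (bit % 8)
        yield bytes(page)

patterns = [random_pattern()] + list(marching_ones()) + list(marching_zeros())
assert len(patterns) == 1 + 2 * PAGE_BYTES * 8
```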

      1. ifadams

        Re: More to the story....

        Good points, but I think the base point the authors were making still stands. Even with a low rate of data refresh (say, a total overwrite of the entire device every month, which would be unusual in an archival environment and would assume very aggressive checking and data refresh), it will take a *very* long time to reach the maximum program/erase cycles. More anecdotally (can't remember the source, apologies), I'd seen some empirical tests suggesting that the rated maximum program/erase cycles, even in MLC architectures, are pretty conservative. Not something to rely upon, certainly, but still not hurting the base point of "Hey, we're gonna take a loooooong time to reach that in a write-once/maybe scenario".

  4. Conundrum1885 Bronze badge

    Re. Flash

    Says me: "I lost a bunch of data to cheap CDRs and DVDRs", so everything important is getting backed up to microSD cards and the cloud now.

    So far I haven't had a single failed card, but I have seen numerous DVDs go bad in the same time; this includes pressed films.

    Hard disks: same issues, including one particularly nasty fail where the drive "seemed" fine up until I tried to read back the data from it, at which point it became obvious that the head amplifiers were toast on all but the first platter.

    Maybe tin whiskering, but according to current theory this can't happen in resin-encapsulated chips.

    I did look into whether retrieving the data from the "bad" discs was possible using a green single-mode diode laser, but (a) they are crazy expensive, and (b) it would be a major undertaking to modify the optics, and it would need a very old drive with separate diodes and space to work.

    Green might work as it is close enough to red that the central region of the pits should read, but not so far that it causes further damage during the read or interacts with the dye.

    Re. flash endurance: I have even "resurrected" totally dead (as in unreadable-on-anything) microSDs off eBay using a soft X-ray generator made from a 5642 with a gas-igniter-based impulse generator, and they did indeed come back to life; despite some corruption, the majority of the data was readable.

    Anyone interested in a Kickstarter?

  5. Dazzlingsincethe60s

    Second copy, second copy, second copy

    If it's archive, it's highly likely to be your primary data. The same protection rules *should* apply as for active data: replicate, or back up to a second site, or offsite.

    Confirm readability of a sample per media unit at regular intervals, and re-create from the copy anything that has gone bad (fairly unlikely, but I concede not impossible). Basic good practice.

    Nothing lasts for ever, although there are a large number of people who would argue Elvis will.
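That sampling-and-verify practice can be sketched as a checksum manifest recorded at archive time plus a periodic random audit. The file names, contents, and sample size below are purely illustrative:

```python
import hashlib
import random

# Record a checksum per file when the archive is written; later,
# re-read a random sample per media unit and compare. Any mismatch
# triggers re-creation from the second copy.

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def audit(archive: dict, manifest: dict, sample_size: int, seed=0):
    """archive: name -> bytes as read back now; manifest: name -> checksum
    recorded at write time. Returns the sampled names that failed."""
    rng = random.Random(seed)
    sample = rng.sample(sorted(archive), min(sample_size, len(archive)))
    return [n for n in sample if checksum(archive[n]) != manifest[n]]

# Illustrative archive of ten files, with one silently corrupted on media.
files = {f"photo{i}.jpg": bytes([i] * 64) for i in range(10)}
manifest = {n: checksum(d) for n, d in files.items()}
files["photo3.jpg"] = b"\x00" * 64       # simulated bit rot
bad = audit(files, manifest, sample_size=10)
assert bad == ["photo3.jpg"]
```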

  6. dieseltaylor

    In the terms of what they were measuring there may be a case. However, I am concerned that the headline framing of flash versus disk may be seen by the more general public as an endorsement of the technologies for, if you like, home archiving.

    I am talking about the wedding photos, the holiday photos, the research data, the lovey emails that we may wish to keep. I have looked at the gold DVDs, but have placed my trust in M-disk, as gold, being perpetually expensive, may become more uneconomic, whereas M-disk, if mass-adopted, will become cheaper. It also claims 1,000 years as opposed to 100 years. Even if we accept that neither can be known for sure and say they are both 80% wrong, I am still better off with M-disk.

    Incidentally, I was cruising with a rather brilliant software professor [with a patent to his name] who had 73 DVDs relating to his research work around the world. The pain of keeping them renewed was not insignificant.
