Flash DOOMED to drive itself off a cliff - boffins

Microsoft and University of California San Diego researchers have said flash has a bleak future because smaller and more densely packed circuits on the chips' silicon will make it too slow and unreliable. Enterprise flash cost/bit will stagnate and the cutting edge that is flash will become a blunted blade. The boffins …

COMMENTS

This topic is closed for new posts.
  1. ravenviz Silver badge
    Go

    "revolutionary technology advances"

    They will happen.

    1. Anonymous Coward
      Anonymous Coward

      Re: "revolutionary technology advances"

      Indeed... I may have heard something from a company named "Anobit", perhaps?

      1. Kristian Walsh Silver badge

        Re: Anobit

        Anobit are a NAND Flash developer, specifically of the filesystem translation layers that are used in all Flash disks to improve performance and to stop the storage cells dying prematurely. Their claim to fame was being able to produce higher performance from cheap, commodity Flash chips, allowing these to be used in enterprise environments. (The corollary of this, and probably why Apple wanted them, is that their technology also allows a manufacturer to reliably use really cheap flash memory.) A toy sketch of what such a translation layer does is at the end of this comment.

        Completely new technologies will be needed to address the brick wall in NAND flash performance, because the current limit is a consequence of the physics used to implement NAND memory: as you increase density (i.e. capacity), there comes a point where the "cell" is no longer big enough to hold onto its charge reliably.

        This problem is inherent to NAND flash, and cannot be solved with an Apple-style "revolution": re-packaging or re-configuring existing NAND designs, or just telling people it's still better. Instead, it'll need a real technological revolution: replacing one basic technology with a new, better one.
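
        To make that concrete, here is a minimal Python sketch of the basic idea behind a flash translation layer - remapping logical blocks and spreading wear. It is illustrative only: the class and figures are made up, and real FTLs (Anobit's included) also handle garbage collection, ECC and much more.

            # Toy flash translation layer: maps logical block addresses to
            # physical pages, never overwrites in place, and prefers the
            # least-worn free page. Purely a sketch, not a real FTL.

            class ToyFTL:
                def __init__(self, num_pages):
                    self.mapping = {}                  # logical block -> physical page
                    self.free = set(range(num_pages))  # physical pages not in use
                    self.wear = [0] * num_pages        # program count per page

                def write(self, lba, data):
                    # Pick the least-worn free page instead of rewriting the old one.
                    page = min(self.free, key=lambda p: self.wear[p])
                    self.free.remove(page)
                    old = self.mapping.get(lba)
                    if old is not None:
                        self.free.add(old)             # old copy becomes reclaimable
                    self.mapping[lba] = page
                    self.wear[page] += 1
                    # (real hardware would program `data` into `page` here)

            ftl = ToyFTL(num_pages=8)
            for _ in range(5):
                ftl.write(lba=0, data=b"hello")        # same logical block, five writes
            print(ftl.wear)                            # the writes land on different pages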

  2. Captain Underpants
    Paris Hilton

    I could be wrong, but....

    ...is this not just a case of Flash storage, like all other technology ever invented, having limitations dictated by the physical properties of the materials used in its manufacture?

  3. Tom 13

    I think I've heard this song and dance before in a different venue.

    Only back then we were going to hit the wall without revolutionary breakthroughs in magnetic disk technology, and we should never expect to get past them. At the time I think my Big Ass Drive was in the neighborhood of 200M. I don't expect the SSD guys are any less inventive.

    1. Mike Powers

      Exactly

      Technology is always "just about to hit the upper limit of capability". The world responds by inventing a new technology.

  4. Armando 123

    They may be right, but ...

    Never underestimate the ingenuity of geeks with insight + businessmen who can sniff profit. That is one potent combination.

  5. NoneSuch Silver badge

    Even if the figures shown in the charts are accurate (looking into the future is hazy at best), the answer would be to put a hefty chunk of RAM in front as a buffer for front-end performance while the back end stores the info at a lower rate (something like the toy sketch below).

    I am with ravenviz and Tom 13. Technology has a way of advancing past problems. Sure, it will create more issues somewhere else, but that is the nature of the beast. Got to leave some things for the kids to figure out. ;-)
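
    As a minimal sketch of that idea, assuming nothing about any real controller: a RAM buffer absorbs writes at full speed while a background worker drains them to the slower back end. The queue size and delay are made-up numbers, and a real design would also need power-loss protection for anything still sitting in RAM.

        # Toy write-back buffer: the front end returns as soon as data is
        # in RAM; a background thread trickles it out to "slow flash".

        import queue, threading, time

        ram_buffer = queue.Queue(maxsize=1024)        # the "hefty chunk of RAM"

        def drain_to_flash():
            while True:
                block = ram_buffer.get()              # wait for buffered data
                time.sleep(0.01)                      # pretend the back end is slow
                ram_buffer.task_done()

        threading.Thread(target=drain_to_flash, daemon=True).start()

        for i in range(100):                          # front-end writes at RAM speed
            ram_buffer.put(("block-%d" % i).encode())

        ram_buffer.join()                             # back end catches up eventually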

  6. Anonymous Coward
    Anonymous Coward

    This is about NAND flash.

    Last I checked there were three or four competing technologies waiting on the sidelines. With luck some of them will scale better.

    And a nitpick: Apparently "multi" means "two" now, with "TLC" for three-level. Can we please make up our minds and use, say, SLC/BLC/TLC instead? Or at least come up with a convincing TLA where the M stands for two in some obscure language or other?

  7. AndrueC Silver badge
    Joke

    Flash! Awwwwww. Not the saviour of the universe.

  8. SiempreTuna
    Mushroom

    D'you think the sub ..

    .. had a particularly heavy (liquid) lunch?

    Or was this one just too boring to actually read?

  9. The Grump
    IT Angle

    Quantum storage

    IBM boffins have been working on storing data using individual atoms on solid media. Quantum storage boffins promise more storage, with an unbreakable security system (cannot access quantum data without changing it). It's coming - with a big fat profit margin for the lucky company that brings it to market.

    "Patents are no longer needed. Everything possible has already been invented". "If man was meant to fly, he would have wings". And of course "I'm from the government, and I'm here to help you". Does mankind ever tire of being wrong ?

  10. Nu11u5
    Stop

    Technology hits technical limitations - waiting for new techniques to overcome them. News at 11.

  11. Tony Rogerson
    WTF?

    All technologies hit a wall - look at 15Krpm disks

    15Krpm disks have been around for over a decade and we are still only on capacities of 300GB for a 2.5" drive and 900GB for a 3.5" drive; you need at least a dozen 15Krpm drives to compete on a 50/50 random read/write workload with a typical SSD (rough numbers at the end of this comment), and even then the disks just can't get the data off quickly enough.

    Phase Change Memory is my bet.

    T
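
    A back-of-the-envelope check on the "at least a dozen" figure, using assumed per-device numbers rather than measurements:

        # Rough comparison of random IOPS: 15Krpm spindles vs one SSD.
        # Both figures below are assumptions for illustration only.

        iops_per_15k_disk = 200     # assumed random IOPS for one 15Krpm drive
        iops_ssd_mixed = 2500       # assumed sustained 50/50 random IOPS for an SSD

        drives_needed = iops_ssd_mixed / iops_per_15k_disk
        print("15Krpm drives needed to match one SSD: %.1f" % drives_needed)  # ~12.5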

  12. Anonymous Coward
    Anonymous Coward

    SSDs great idea, but...

    Why could they not have invested more in developing everlasting drives, or is it a "mythical everlasting lightbulb" marketing trick to ensure constant sales?

    I use SSDs for thumbnail storage of images for a social networking site, and long term cached files, where read-writes are massively asymmetric. For this they are great.

    However, I would like to use a RAID set of them for the databases, but with millions of writes per day I fear I would have to replace the drives every week, even with wear levelling (some rough sums at the end of this comment).

    Therefore I stick with a large bank of trusty traditional drives where lifetime is measured in MTBF and not write cycles.

    I guess they are OK if you have a RAID array with a hopper feeder...
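
    For what it's worth, a rough endurance estimate in Python. Every figure here is an assumption pulled out of the air for illustration (capacity, P/E cycles, write size, write amplification), not a spec for any real drive:

        # Back-of-the-envelope SSD endurance under a write-heavy load.

        capacity_gb    = 256         # assumed drive capacity
        pe_cycles      = 3000        # assumed program/erase cycles per MLC cell
        writes_per_day = 5_000_000   # "millions of writes per day"
        avg_write_kb   = 8           # assumed average write size
        write_amp      = 5           # assumed write amplification factor

        total_endurance_gb = capacity_gb * pe_cycles
        daily_write_gb = writes_per_day * avg_write_kb * write_amp / (1024 * 1024)
        print("days until worn out: %.0f" % (total_endurance_gb / daily_write_gb))
        # ~4000 days with these assumptions; the answer swings wildly with write size
        # and amplification, which is rather the point.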

    1. Tom 38

      Re: SSDs great idea, but...

      ZFS's raidz (available on Solaris and FreeBSD) allows you to have masses of online storage on slow disks and to accelerate reading and writing by adding cache and log devices on SSDs. It gives you all the write speed of an SSD, all the while knowing that your data is safe and an SSD can be removed or replaced without data loss or issues.

      Read speed is a little more problematic. Actually, that's BS: sequential read speed is excellent, but IOPS suffer if the data is not already in the SSD cache, and for significant loads you would want as much cache as your working set (toy hit-rate numbers at the end of this comment).

      FWIW, on my home filer I have 6 'EcoGreen' 1.5TB drives - aka the slowest cheapest drives I could find - accelerated with one 60G SSD split in two, with 30G for cache and 30G for write log, all using onboard SATA. I get sequential read speeds of about 400MB/s (in cache) and 550MB/s (not in cache), and write speeds of 400MB/s (I've never managed to overflow the intent log).

      I guess what my long-winded post is saying is that SSDs are genuinely useful, but I only see them as accelerators and cache for real storage.
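
      To put the cache-hit point in numbers, a toy calculation with assumed per-device figures (the IOPS values below are illustrative guesses, not benchmarks of my setup):

          # Effective random-read IOPS for an SSD-cached pool of slow disks.

          ssd_read_iops  = 20000      # assumed random read IOPS of the cache SSD
          pool_read_iops = 6 * 80     # assumed random read IOPS of six green drives

          for hit_rate in (0.5, 0.9, 0.99):
              # Time-weighted (harmonic) combination of cache hits and misses.
              effective = 1 / (hit_rate / ssd_read_iops + (1 - hit_rate) / pool_read_iops)
              print("hit rate %.0f%% -> ~%.0f IOPS" % (hit_rate * 100, effective))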

    2. Tom 13

      Re: SSDs great idea, but...

      Selecting the right drive for the job at hand is part of the job. I'm just against doom-mongering in areas where we have historical evidence of ability to innovate. And storage technology is one of those areas.

  13. Anonymous Coward
    Anonymous Coward

    We need a change on file systems

    We need a change in how file systems are implemented. Back in the old days, storage was an array of small sectors (ca. 512 bytes) that could be written largely independently of one another. File systems were designed around that idea.

    That is no longer true at the physical level: your basic modern hard drive will re-write the whole track when modifying a sector, since the sectors are so close together that trying to rewrite just one would very likely slop into the next. And if you are running RAID it gets worse still, as you should really be working in quanta of a RAID stripe. Flash is the same: you have rather large erase blocks that it would be ideal to write as a unit. Yet we keep forcing the physical layer to pretend it can write each small sector independently of the others.

    Worse yet, the overhead of fetching a small sector vs. grabbing a huge block of data is killing performance on things like SATA, SAS, and PCIe-connected storage. You can spend as much time on overhead as you do on the actual data.

    What we need are file systems designed to work in arbitrarily large blocks - e.g. a single Flash erase block, a track of a hard disk, or a stripe of an array - and that deal with things like file packing to make good use of that. We need the OS to be smarter about grouping updates into blocks. We need the physical layers to CORRECTLY indicate their ideal block size, and to transfer those blocks efficiently. Move the "flash translation layer" OFF the media and into the OS, which has the information needed to access the data optimally (a toy write-side sketch is at the end of this comment).

    Do that, and we can continue to see improvements in flash density (as you can remove much of the overhead of the FTL).
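
    A minimal Python sketch of the write side of that idea - packing small writes into whole erase-block-sized units so the device only ever sees full blocks. The block size, class and callback are made up for illustration; a real file system would also have to track metadata, handle crashes, and so on.

        # Toy segment writer: accumulate small writes, emit whole erase blocks.

        ERASE_BLOCK = 256 * 1024                   # assumed erase block size (256 KiB)

        class SegmentWriter:
            def __init__(self, device_write):
                self.device_write = device_write   # callback that takes one full block
                self.segment = bytearray()

            def write(self, data):
                self.segment += data
                while len(self.segment) >= ERASE_BLOCK:
                    self.device_write(bytes(self.segment[:ERASE_BLOCK]))
                    del self.segment[:ERASE_BLOCK]

            def flush(self):
                if self.segment:
                    pad = ERASE_BLOCK - len(self.segment)
                    self.device_write(bytes(self.segment) + b"\0" * pad)
                    self.segment.clear()

        blocks = []
        w = SegmentWriter(blocks.append)
        for _ in range(1000):
            w.write(b"x" * 4096)                   # lots of small 4 KiB writes
        w.flush()
        print(len(blocks), "erase-block writes issued")   # 16, not 1000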

    1. Anonymous Coward
      Anonymous Coward

      Don't we already have that?

      AFAIK most if not all unix-y filesystems come with some way to tune the block size, and ensuring block-boundary alignment isn't too hard to get right. As an example, the venerable Berkeley Fast File System comes with selectable block sizes and "fragments" to provide small-file (or large-file tail) packing. Read up on it if you don't know it - these are not Redmondian fragments of the needs-defragmenting kind; they are a feature, not a bug. If that isn't what you were after, what then?

      Besides, it's the erasing that's the problem, not the writing. And that can be alleviated, at least in part, by making filesystems tell the drive what they no longer need (TRIM) and lots of cleverness in the SSD controller. That controller also needs to take care of wear leveling, which is possibly even harder and requires some background shuffling of data anyway.

      If anything, I'd say filesystems need to trust the controller more, not less, and stop optimising for now-outdated characteristics. Instead, simply treat the storage as one big sack of writable blocks. These days the block numbers have as much relation to physical location as virtual addresses have to physical memory addresses. Even spinning disks patch up bad blocks from reserve areas, resulting in blocks getting shuffled way across the disk. And the OS doesn't know squat about what clever things the SSD manufacturer came up with this week.

      Not unless the drive tells it, that is. So they sprout new interfaces to show and tell, meanwhile lying through their teeth over the old interfaces, because if they don't, all the old software up and gets all confuzzled. That is hardly an argument to move more low-level logic to the same easily confuzzled software, now is it?

  14. a_mu

    cache

    What would happen if an SSD was made out of different-speed cells:

    one big lot of slow but small (dense) cells,

    a smaller lot of faster, bigger cells,

    and one lot of very fast but big cells.

    You add a cache controller system, and I'd have thought that there is a long way to go.

  15. Bill Gould
    Meh

    Hybrid SSD

    There are already a few of these around as proofs of concept or research projects, but a hybrid that correctly places traffic/data into the correct medium is a decent stop-gap. At least until they figure out quantum storage or magnetic field storage.

  16. Anonymous Coward
    Anonymous Coward

    M.A.N.T.S.H...

    Stats firming up what we already knew, and new technology waiting in the wings anyway.

  17. Tasogare

    Rather than increasing density...

    Rather than increasing density, couldn't you get more space out of an SSD just by allowing a larger physical size? Say put one in a 3.5" form factor instead of 2.whatever. Pretty sure I could fit three 2.5 ones in a 3.5 shape. I doubt you'd get an order of magnitude more space with the same tech, but doubling shouldn't be out of the question.

    Newegg comes up with about a half dozen of these, so clearly someone's thought of it before. I'm not sure why it's not more common. Desktops aren't that dead yet, are they?

    1. Sorry that handle is already taken. Silver badge

      Re: Rather than increasing density...

      Increasing density reduces cost by reducing the amount of silicon required.

  18. Anonymous Coward
    Anonymous Coward

    Flash doomed to drive itself off a cliff.

    And there was me thinking the headline was about Flash rather than flash.

    /gets coat

  19. Anonymous Coward
    Anonymous Coward

    memristor

    see above.

  20. Anonymous Coward
    Anonymous Coward

    Irrelevant

    So making flash denser (i.e. cheaper) also makes it slower. I fail to see the problem. Today we have the same thing: the slow, cheap storage is called "hard drives". In the enterprise world we may end up with two tiers of flash: one that's dense, cheap and slow, and another that's less dense, less cheap, but faster.

    Compare this to the world of enterprise storage before SSDs came onto the scene. We had two tiers of storage: large, cheap SATA drives that were slow (100-150 IOPS on a 1 or 2TB spindle) and small, expensive 15k rpm SCSI/FC drives that were "fast" by comparison (300-400 IOPS on a 300GB spindle). Basically, the expensive stuff had 10x more IOPS per gigabyte.

    You don't need much difference in performance between slow, cheap flash and fast, expensive flash to meet or exceed that 10x IOPS-per-gigabyte difference that everyone used to think was so big and easily worthy of tiering (rough numbers at the end of this comment).

    I believe 15k rpm drives no longer have any role in enterprise storage; it makes more sense to have two tiers - flash and SATA - since the SCSI/FC tier hardly performs better than the SATA drives while having less capacity, and is basically the same capacity as the flash drives while being orders of magnitude slower, without compensating for this by costing orders of magnitude less.

    Perhaps we will still have the three tiers of storage EMC salespeople keep trying to push on unsuspecting buyers (perhaps to unload their huge stock of now-useless 15k rpm drives). Except it'll be one tier of fast, expensive SSD, one tier of dense, cheap SSD (compensating for the lower lifetime via massive internal overprovisioning, the modern equivalent of short-stroking), and a third tier of 7200 rpm SATA for bulk data.
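
    The IOPS-per-gigabyte argument, worked through with rough assumed per-drive figures (illustrative only, not vendor specs):

        # IOPS per gigabyte for the three tiers discussed above.

        tiers = [
            ("SATA 7200", 125,   1000),   # assumed IOPS, assumed GB per spindle
            ("15Krpm FC", 350,   300),
            ("flash SSD", 20000, 200),
        ]

        for name, iops, gb in tiers:
            print("%-10s %8.2f IOPS/GB" % (name, iops / gb))
        # SATA ~0.1, 15Krpm ~1 (the old 10x gap), flash ~100 (orders of magnitude more)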

  21. min

    so new tech makes Flash too slow and unreliable..

    ...so does that mean it will not go up in a puff of smoke after a flash of light and rather just fizzle out slowly?

    kinda disappointing.

    i'd get my overcoat, but i like to flash.

  22. CastorAcer
    Happy

    NAND Flash Memory Future Not So Bleak After All?

    I came across a rather excellent article this morning that gives a slightly different view in reaction to the paper.

    http://pcper.com/reviews/Editorial/NAND-Flash-Memory-Future-Not-So-Bleak-After-All

    It makes for interesting reading: while it doesn't deny that there are limitations to NAND Flash memory, it does challenge some of the assumptions in the paper and asks some questions about the motivations of the authors.

  23. PeterM42
    Facepalm

    Not so reliable?

    Having supported PCs since the 1980s, I was surprised that I have had to do OS rebuilds on BOTH of my granddaughter's netbooks, which are fitted with SSDs. This was necessary because of apparent disc corruption. I'm not saying it WAS down to the SSDs' unreliability, but Dell replaced one of them under warranty in addition to my rebuild work.

  24. Michael Wojcik Silver badge

    Bah!

    They said the same thing about drum memory, but my Univac FASTRAND still works great.

  25. Spotswood
    Unhappy

    Damnit

    I read the headline and thought they were talking about Adobe Flash :(
