HGST: Nano-tech will double hard disk capacity in 10 years

HGST, the Western Digital subsidiary formerly known as Hitachi Global Storage Technologies, says it has developed a method of manufacturing hard-disk platters using nanotechnology that could double the density of today's hard drives. The new technique employs a combination of self-assembling molecules and nanoimprinting, …

COMMENTS

This topic is closed for new posts.
  1. ecofeco Silver badge
    Meh

    When?!

    A few years ago, HP had a proven method for putting 1TB of flash memory in a one-inch square about the thickness of a credit card.

    A terabyte.

    By "a few" I mean almost ten years.

    I'm not holding my breath any time soon.

    1. Steve Brooks

      Re: When?!

      I suspect it's not about being able to do it; it's about being able to do it at a price that's competitive with current storage methods. Look how long it's taken SSDs to get a foothold in a market full of spinning bits of metal. To actually succeed, it needs to be purchased by your average Joe in your average computer.

      1. guyr

        Re: When?!

        That's the part I don't get. I'm an (adult) software developer who doesn't play games, collect movies, etc. I have a 1 TB disk that is still over half empty after a year. Yet today, if I were so inclined, I could purchase a 4 TB disk. What is the market for these monsters? Companies like Facebook, Google and Amazon undoubtedly have huge buildings full of storage units for their billion customers. But how many of those companies are there? And can they support a storage industry by themselves without consumer volume?

        I just can't envision consumers needing anything over 2 TB. Maybe I'm the odd one out and everyone else has a library of 1000 movies.

        1. Esskay
          Mushroom

          Re: When?!

          I also have a 1TB drive - and I've spent the last couple of years constantly deleting anything I don't absolutely need in an attempt to make room for the next piece of data to go on (I'm a cheap-arse, waiting for prices to drop a bit more before a big upgrade... I've been telling myself that for a couple of years now). Whilst I'll admit a lot of that is my personal DVD movie & music collection ripped onto the drive, a lot of it is also video I've taken myself - GoPro footage, DSLR photos, compact camera video and photos from holidays, family, friends, etc - everything from the last 10 years. If I were a professional photographer, I could see 1TB being a fraction of what I needed to survive.

          A large hard drive is a bit like a big garage - you could survive with less space, by carefully arranging everything to fit, but it's a lot more convenient to have a bigger space than you need so you can just bung everything in there and sort it out at a later date.

          <= Nuke, because it looks like one's gone off in my garage...

        2. P. Lee

          Re: When?!

          PVR usage eats disk, especially at 1080p. I mostly record SD and it still eats well over 2TB, and would take more if available.

          I also like to keep DVD ISOs around in case the DVD ends up sitting in a puddle of water or gets scratched, and then I like to convert to h.264 and/or high-quality MPEG-4 for tablet/streaming consumption.

          There are various family events recorded in rather high-quality formats, plus backup space for the main desktop computers - the more space, the longer the backups last.

          I also scan all incoming snail mail - if it goes to TIFF, that's a lot of space.

          My server also runs Squid, which is not just a browsing cache. It also holds all those RPMs and .debs for system updates for SUSE, Debian and Ubuntu. There are some BSD ports in there too, not to mention some iSCSI disk images for network boot testing.

          There are probably also a couple of emergency dumps of a computer with a failing disk.

          Add a RAID system and you lose space; most people go stripe & mirror rather than RAID 5, so you lose half your space.

        3. Lars Silver badge
          Coat

          Re: When?!

          It's amazing indeed how quickly a 1 TB disk becomes nothing at all once you have some movies, some music and a digital camera (or several). As for spinning disks (a topic in the messages below), I think it's a bit like tape: still alive, and why not. I would wish punch cards and paper tape a happy and well-deserved grave, though.

    2. Alan Brown Silver badge

      Re: When?!

      1TB in a credit card is no use if:

      1: the storage medium is expensive or unreliable

      2: the reader/writer is expensive or unreliable

      3: the reader/writer is the size of a house

      4: the storage medium is fragile (if it can't be flexed, then it's no use)

      IIRC the HP prototypes weren't anywhere near 1TB, and that size was a "potential" one.

  2. Anonymous Coward
    Anonymous Coward

    You really think we'll be using stupid spinning discs in 10 years' time?

    10 years' time will be 2023. I somehow doubt that a quarter of the way into the 21st century we'll be using much in the way of spinning storage.

    1. Martin Huizing
      FAIL

      Downvoted for stating the bleeding obvious?

      If they'd said 3 or even 4 years (of course they've already calculated the storage capacity increase of platters over a certain amount of time), I'd be buying 8 terabyte disks for my movie collections. But by the end of this century we will most certainly see SSDs cheap enough and big enough to drive (haha) platter-based drives completely out of the market. Hitachi is shooting itself in the foot if it doesn't focus on flash memory. IMHO.

      1. Lars Silver badge
        Coat

        Re: Downvoted for stating the bleeding obvious?

        Flash memory, but how reliable is it, and for how many years (to date)? I very much hope flash storage was not a big mistake with Curiosity.

        1. Intractable Potsherd

          Re: Downvoted for stating the bleeding obvious?

          Yes - I worry that manufacturers might see flash drives as a way of putting effectively time-limited storage into devices, thus making them consumables that need to be replaced, and therefore generate revenue. In my many years of computing, I have never had a hard drive fail on me, and one that is at least 15 years old still resides in my desktop, functioning happily.

          (Of course, now I've gone and put the jinx on things, and every HD I own will pack up in 5 ... 4 ... 3...)

    2. Adam 1

      really?

      100x the density also means 100x the theoretical throughput.

      Different technologies have different characteristics that are good in different cases. SSDs, for example, are very good at not being damaged if dropped. They have very fast seek times, but write speed and MTBF are far less impressive.

      I have no idea if the hard disk will go the way of the Zip drive or not, but even if no Windows PC ships with a spinning disk, it is a bit unimaginative to ignore the whole technology.

      1. Eddy Ito

        Re: really?

        I gotta side with Adam. If somebody had told me 20 years ago that I'd be driving a Honda Fit with a fuel-injected 1.5 litre engine, and that the 30 mpg it gets is only 30% better than the 23 mpg I got with my 1969 Mustang ragtop with an AFB on a 5 litre V8, I'd have said they were nuts - but there it is.

      2. Steven Jones

        HDD performance scaling with capacity

        @adam 1

        100x the (areal) density does not mean 100x the performance on disk. As 100x the areal density corresponds to 10x the linear density, sequential throughput increases by only 10x. As areal density hardly improves random access at all, the IOPS figure is virtually identical (IOPS are controlled by mechanical factors, and it's generally recognised that these are already close to the limits of what can be achieved, for reasons such as power consumption, material stability, bearing reliability etc.). Such mechanical issues are subject only to marginal gains.

        What's worse is the effect on IO access density - that is, the number of IOPS per GB. For random access, this gets 100x worse for 100x the capacity. The sequential access speed per GB stored gets 10x worse. It already takes several hours to read a full disk in the 2-3TB region; for a 200-300TB drive the time taken would be measured in days. That would mean a rebuild of a RAID set using such drives takes many days - it's bad enough with current disks in the TB region.
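
        A rough back-of-the-envelope sketch makes that scaling concrete. This is an illustrative Python calculation, not figures from the article or the comment above; the baseline of a 3TB drive at roughly 150 MB/s sustained is an assumption, and real drives vary:

        ```python
        # Sequential full-disk read time when capacity scales with areal density
        # but throughput only scales with linear density (~ sqrt of areal density).
        # Assumed baseline (illustrative): 3 TB drive, ~150 MB/s sustained.

        BASE_CAPACITY_TB = 3.0
        BASE_THROUGHPUT_MBPS = 150.0

        def full_read_hours(capacity_tb):
            """Hours needed to read the whole drive end to end."""
            density_factor = capacity_tb / BASE_CAPACITY_TB            # areal density scaling
            throughput = BASE_THROUGHPUT_MBPS * density_factor ** 0.5  # linear ~ sqrt(areal)
            seconds = capacity_tb * 1e6 / throughput                   # TB -> MB
            return seconds / 3600

        for tb in (3, 30, 300):
            print(f"{tb:>4} TB: ~{full_read_hours(tb):.0f} hours for one full pass")

        # Roughly 6 hours at 3 TB, 18 hours at 30 TB and over 2 days at 300 TB -
        # which is why RAID rebuild times become the pain point.
        ```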

        The inherent problem with physical disk drives is one of basic geometry combined with physical material limitations. No doubt there are some types of data where access requirements are low enough for this to be tolerable, but it is already a major problem.

        As far as flash write performance is concerned, it may not be as good as read performance, but it is still incomparably better than physical drives, especially when combined with intelligent controllers, write caching etc. (albeit HDDs benefit from this too).

        nb. in the unlikely event that 100x the areal density were achievable, it's most likely that this would result in smaller form factor drives, continuing a current trend. However, costs do not reduce linearly with reduced form factor, as individual unit complexity remains high, and it's inconceivable that there will be (say) 10x the number of physical units in the space occupied by a current one.

  3. Anonymous Coward
    WTF?

    Only DOUBLE the density?

    I'm glad that we've been able to increase density faster than that in the past, or the gigantic 1GB 5.25" hard drive I remember costing a group I was affiliated with $2500 in late 1992 would now provide all of 4GB.
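
    A quick sketch of the arithmetic behind that, using only the figures in the comment above and assuming roughly 21 years between late 1992 and now:

    ```python
    # Compare "double every decade" against the historical 1 GB -> 4 TB growth
    # cited above (assuming ~21 years between late 1992 and now).

    years = 21
    start_gb, now_gb = 1, 4000

    doubling_per_decade = start_gb * 2 ** (years / 10)        # HGST-style rate
    implied_annual = (now_gb / start_gb) ** (1 / years) - 1   # actual historical rate

    print(f"Doubling per decade from 1 GB gives ~{doubling_per_decade:.0f} GB after {years} years")
    print(f"1 GB -> 4 TB over {years} years implies ~{implied_annual:.0%} growth per year")
    ```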

  4. John Savard

    Without the Breakthrough

    If they couldn't switch the nanodots from rectangular to radial, I guess we could always go back to drums from disks.

  5. John Brown (no body) Silver badge
    Boffin

    are just scratching the surface

    I don't think that's really what they ought to be doing with HDDs.

    On a more serious note, I wonder if there's some technical/performance reason for not having more than one r/w head per surface? I'm thinking of either two head mechanisms in opposite corners of the enclosure, one for the outer half of the surface and one for the inner half; or a windscreen-wiper arrangement of two or more heads; or just a Y-shaped head arm with two heads. This would reduce the largest delays, caused by the mechanical track stepping. Or maybe go back to the idea used in the original drum storage or line printer technology of one head per track. I suppose there must be good reasons why none of the above have happened yet.

    1. Anthony Hegedus Silver badge

      Re: are just scratching the surface

      Interesting idea - why not have three read heads? By clever arrangement of data, you could achieve 3x the maximum throughput and, more importantly, 3x lower seek times. Defragging's going to be fun!

      1. Steven Jones

        Re: are just scratching the surface

        There's a very good reason why multiple read/write heads are not available on HDDs (and it has been tried): the vibration introduced by moving one read/write head disrupts the others. I'd also imagine that, given the heads "fly" incredibly close to the surface of the disc, aerodynamic interference could be an issue too.

        Multiple read/write heads were used in the dim and distant past on a type of disk that used fixed heads (rather like an alternative to drums). These were used as paging store on mainframes back in the 1970s, but were inherently very expensive and had low capacity - even compared with moving-head drives of the same era. I have a vague recollection that ICL's ill-fated CAFS (Content Addressable File Store) of the 1970s made use of multiple read heads. It used logic at the disk head controller level to perform searches on data content, but improvements in processor speed meant it was a commercial failure.

        (nb. the integration of search logic into disk controllers was once commonplace in the form of CKD - count-key-data drives, which could embed certain searchable data into key fields before every data block. Typically this was used for things such as index data for indexed sequential files, and the programs to search for such data could be despatched against a channel controller using a very limited and special "channel program". To this day, IBM mainframe disk controllers have to emulate this function, as "legacy" access methods require it. The norm used to be that programs did not access storage through a file abstraction layer, but assembled the channel program directly, as this saved CPU cycles. This still happens in "legacy" programs, but the O/S has long had a role in "vetting" the channel programs for security reasons.

        CKD techniques have long been replaced by software and logical block addressing, but the traces still remain)...

    2. Lars Silver badge
      Pint

      Re: are just scratching the surface

      "I suppose there must be good reasons why none of the above have happened yet." I think so too as I am sure those ideas are nothing new.

  6. Visual Echo
    Meh

    Is that the best you can do?

    You'd better make it quadruple within 4 years or your competition is going to clean your clock.

  7. Anonymous Coward
    Meh

    S.S.D.D.

    Peg me as a skeptic, because I have read year after year after year how some new tech is going to increase storage, yet it never does. In fact, when new tech like solid state or whatever comes along, it seems storage space actually shrinks by ~15 times.

    In 20 years drives will run at the speed of light, with a capacity of 1.44MB. GRUB will load fantastically!

    1. Anonymous Coward
      Anonymous Coward

      Re: S.S.D.D.

      Capacity has gone from about 4 megabytes in a fridge-sized enclosure to 4 terabytes in something you can carry in your pocket.

      1. Lars Silver badge
        Pint

        Re: S.S.D.D.

        From 4 MB to 4 TB - very true indeed, and the difference in price is fantastic. Of course the demand for storage has gone up too. Lots of memories of the fridge-sized drives, 2 x 10 MB, back then.

  8. Anonymous Coward
    Anonymous Coward

    yeah, yeah, yeah

    I read: could double,

    and I see:

    "could"

    and

    "up to double"

  9. Dave Bell

    Reliability

    From my own experience, I am coming to think that the old disks were more reliable than the current tech. If there is a reduction in reliability as data density increases, that is one of those awkward engineering trade-offs.

    How does a home user back up the current generation of drives? Will people fork out the cash for a RAID array in their laptop? How do you sell multiple drives? How do you support them?

    1. Lars Silver badge
      Pint

      Re: Reliability

      You cross your fingers and pray.

    2. Martin Huizing
      Trollface

      Re: Reliability

      So I take it you never owned a Barracuda drive?

  10. Suburban Inmate
    Boffin

    Admit it...

    You saw the pic and looked for the autostereogram.

  11. FutureShock999
    Boffin

    One of MANY dimensions of improvements...

    I think the important thing about this announcement is that they are saying this is ONE dimension that they can exploit - they can also combine this with other types of improvements to give even greater gains, but THIS one dimension of improvement will give a 100% increase. Add in other dimensions of improvement, and you might get quadruple density, much faster seeks, etc. Tech is like that - it usually seems to need many things making small gains that add up to much larger ones....

  12. Alan Brown Silver badge

    multiple heads

    Are already there - one per platter surface.

    They're not even multiplexing those, so there are definitely deeper issues involved than you might think (most likely down to calibration being unique to each platter surface, so the only way to operate is one head at a time).
