I could go NSA on the family
Take two: back up everyone's precious photo and home movie collections, then copy to the second drive for safekeeping.
Seagate is shipping an 8TB disk drive to selected OEM customers, including object storage outfit CleverSafe, with general availability next quarter. Tech details are sparse, however. We know the data-devouring beast fits in a standard 3.5-inch drive slot and has a 6Gbit/s SATA interface. Seagate says it has “enterprise …
As someone who, over the course of one year, had *5* Seagate drives fail (three of them being a newly purchased drive barely two weeks old and its *2* replacements), I will never purchase Seagate again.
They used to be the bee's knees for quality and reliability. Now all they seem to care about is putting out the biggest drive soonest, and reliability be damned.
I urge people to avoid Seagate, especially their shockingly unreliable Momentus XT hybrid drives.
I have a pair of Momentus XTs - they aren't unreliable, just slow (though still faster than comparable laptop drives).
Unfortunately the _only_ maker who doesn't have abysmal reliability stats is Hitachi, and they've been borged. Spinning disk will die quickly once flash gets large enough.
An 8TB enormodisk would be best suited to the SSHD range. 7200rpm is likely to be too fast for these things to handle.
"Spinning disk will die quickly once flash gets large enough."
The economics simply aren't there and won't be for the foreseeable future. Flash either cheaps out but loses longevity, or sticks with longer-lived chips that are an order of magnitude more expensive per TB. For bulk storage that must still be randomly accessible, there's no substitute for spinning rust. Otherwise, said alternative would already be in the consumer sphere as a backup medium (tape is currently enterprise-oriented and too expensive, while optical discs offer too little capacity relative to today's drives - it would take around 20 dual-layer BD-Rs to store the contents of a 1TB hard drive, and those discs will inevitably have longevity issues of their own). Frankly, I would LOVE to see something other than spinning rust as a medium-term consumer archival medium, but I'm not seeing it.
"The economics simply aren't there and won't be for the foreseeable future."
The cost per IOP of SSD is ALREADY much lower than that of other disks. NAND flash memory prices are reported to be on the decline, with roughly a 30% drop in 2013 and another 20-30% projected this year. By 2015 SSD will be cheaper per GB than SAS. Spinning rust disks are going to die - and will be completely gone within the next 10 years.
For Flash to be viable as a consumer backup medium, it has to beat SATA and USB, carrying slow but large spinning rust. Right now it's 4TB for $150. How close is Flash to this, and what's its longevity, both in terms of write cycles and in terms of offline shelf life (I keep hearing of flash bit rot)?
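A quick back-of-the-envelope comparison, using the $150-for-4TB figure above. The flash price is an assumed ballpark street price for the period, not a quoted figure:

```python
# Rough cost-per-GB comparison: spinning rust vs flash.
# The HDD figure is from the comment above; the flash price
# is an assumed ~2014 street price, not from the article.
hdd_price, hdd_gb = 150.0, 4000          # $150 for a 4TB drive
flash_price_per_gb = 0.50                # assumed flash $/GB

hdd_price_per_gb = hdd_price / hdd_gb    # $0.0375/GB
ratio = flash_price_per_gb / hdd_price_per_gb

print(f"HDD:   ${hdd_price_per_gb:.4f}/GB")
print(f"Flash: ${flash_price_per_gb:.2f}/GB (~{ratio:.0f}x the HDD price)")
```

On those assumed numbers flash would still be over an order of magnitude away from spinning rust on raw $/GB, whatever its $/IOP advantage.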
Well that sounds fair. When shipped they are presumably NOT spinning and therefore parked so they ought to be able to handle a fair amount of shock. If they aren't robust enough to reliably store your important data, they aren't fit for purpose. Simple as.
To be fair, I went with four 1.5TB WD Blacks two years ago without even trying the Seagates deliberately because of the gripes about failure rates. My Blacks came shipped in literally nothing but a cardboard box and some rigid cardstock spacers and I haven't had a hiccup yet.
You have one post and a pretty Generic Username. You used that one post to... oh, accuse a company of hiring people to post on a site's forums. Those same people that have more than one post and go back months or years. Carry on, nothing to see here.
I've personally had trouble with Seagates in the past, both internal and external. They'll run for a while and then start going herky-jerky, forcing a rapid reallocation. If a drive of mine's going to fail, odds are it's a Seagate. That's why I've stuck with WD drives. I can't recall the last time one of those went pear-shaped.
Can't remember? Lucky you.
I have a collection of WD Greens that all died within warranty. After sending the first one back at my expense, it was replaced with a faulty refurb. Not worth the postage cost to send the others back, given WD is notoriously bad at warranty replacement for consumer drives.
I'll join the bandwagon here and say that I haven't used a Seagate drive in over 10 years myself. Absolute crap, and I say that having seen them come in day after day after day with issues of one sort or another. The only drive I would want LESS than a Seagate is a Toshiba. I don't quite understand why Toshiba even bothers with HDDs if their QC is this horrible.
Prefer WD myself. Have had nothing but stellar support from them the one time I needed it (a 2.5-year-old 1TB Black decided it didn't want to spin up any more; luckily I had everything on another drive at the time in prep for setting up a RAID 5). Said drive, BTW, was replaced with a 2TB Black as they were out of 1TB replacements.
Also, is it just me who feels old when they see terms like TB being tossed around, yet can remember the days when 5MB was the shizznit? Not to mention the (in)famous quote "640K ought to be enough for anybody." God I feel old.
The last time I bought a Seagate drive it was an external, which failed after less than 24 hours mounted and less than 2TB of throughput; when it wasn't hooked up to the Mac it was kept in its original padded box.
Seagate's warranty was "You can have a refurb, not a new drive." and their recovery "service" was an extra £600, 50% more than disk labs were charging at the time.
I haven't bought another Seagate since, and have yet to be let down by any of the WD drives I've bought.
How long have you been working for WD? Western Dodgy will only ship you a refurb, that is their policy. And their refurbs generally don't work properly.
You could have got your Seagate replaced by the shop you bought it from. No need to send it back to Seagate since it was brand new.
Costco USA: $74 WD 1TB laptop storage devices, 2ea., used for offsite storage at 2 secure locations... my IP is only 170GB; I would be looking at these things if I had more stuff (the only music or videos stored are my own)...
IMHO = yet again, no matter what, in less than 5 years I will be needing to replace these newly worn-out rust storage devices...
Caveat = no one touches my IP but me... the www / ww3 cloud does not offer me a liveable solution; I simply have to use this type of storage device... RS.
With a disk that big it'll be difficult to use them in a RAID. The rebuild times will be very long.
There's worse, though. With disks that big it's pretty likely that if you read every sector on the drive there will be a bit read error somewhere. Rebuilding a RAID means doing effectively just that, so there will quite likely be a bit read error encountered somewhere in the process.
But if you're doing a RAID rebuild you are, by definition, doing so from a situation where the R in RAID doesn't apply - the redundancy was lost when the drive that you're replacing first died. So there is no way to correct a bit read error during the rebuild process, meaning that somewhere in the rebuilt data set there will be a mistake.
Obviously it's a matter of pure chance whether that bit error lands in your stored data or in the empty space of the file system. But as there seems little point in having all that storage unless it's actually going to get filled up, the chances are that the bit error will land in your data.
A drive like this is better used in a JBOD setup with something like the ZFS file system from Sun/Oracle. That fs takes read bit errors into account in the way it stores data on disks.
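To put a rough number on the rebuild risk: assuming the common consumer-drive datasheet figure of one unrecoverable read error (URE) per 10^14 bits read (enterprise drives are often quoted at 10^15), reading a full 8TB drive end to end comes out at close to a coin flip:

```python
# Sketch: probability of hitting at least one unrecoverable read
# error while reading a full 8TB drive, assuming a 1-in-1e14-bits
# URE rate (a typical consumer datasheet figure, not Seagate's spec).
URE_RATE = 1e-14          # assumed errors per bit read
DRIVE_BYTES = 8e12        # 8TB, decimal, as drives are marketed
bits = DRIVE_BYTES * 8    # 6.4e13 bits read during a full rebuild

# Chance of at least one URE over the whole pass
p_error = 1 - (1 - URE_RATE) ** bits
print(f"P(at least one URE during rebuild) ≈ {p_error:.2f}")
```

With the assumed 10^14 rate that works out around 47%; at an enterprise-class 10^15 rate the same drive drops to roughly 6%, which is why the URE spec matters as much as the capacity.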
How about parity archiving (PAR files)? As long as they're taken from a pristine source and verified, they serve as checks against known-good data. A one-off bit error should be detectable in this case and the data can be re-read and, if necessary, corrected. Plus you can specify how much redundancy to put into each collection of data.
PAR might work; I don't know how robust it is. However, you need to do checksum calculations every week to see if something has changed. Bit rot, you know: as disks grow older they rot, so the data slowly gets destroyed. Remember the old VHS tapes that got fuzzier and fuzzier as time passed, or the old Commodore Amiga disks that won't work any more today? Data rots on magnetic media, including hard disks. So you would need to do checksum calculations regularly on every file and compare against the original checksum, which means keeping a record of every file and its corresponding checksum. And if you edit a file, you need to update the checksum and the record, etc. Very tedious, error-prone and a lot of work - imagine editing 1,000 files and updating the checksum file manually. Isn't there any system that does this automatically for you?
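The weekly checksum pass described above is at least scriptable. A minimal sketch (the record filename and layout are made up for illustration): walk a tree, store SHA-256 hashes in a JSON record, and flag any file whose contents differ from last time:

```python
import hashlib
import json
import os

RECORD = "checksums.json"  # hypothetical record file, kept in the scanned tree


def sha256_of(path):
    """Hash a file in 1MB chunks so large files don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_tree(root):
    """Rehash every file under root, return files that changed, update the record."""
    try:
        with open(RECORD) as f:
            old = json.load(f)
    except FileNotFoundError:
        old = {}  # first run: nothing to compare against
    new, changed = {}, []
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.abspath(path) == os.path.abspath(RECORD):
                continue  # don't checksum our own record
            new[path] = sha256_of(path)
            if path in old and old[path] != new[path]:
                changed.append(path)  # possible bit rot - or a legitimate edit
    with open(RECORD, "w") as f:
        json.dump(new, f, indent=2)
    return changed
```

Note this only *detects* change; it can't tell rot from an edit, and it can't repair anything - which is exactly the gap PAR redundancy or a checksumming file system is meant to fill.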
Yes, it is called ZFS, and it does all of this, and more, automatically for you. Every time you read a block, it is checksummed and compared to the original checksum. If there is an error and you use mirroring or RAID, ZFS will automatically restore the data and report the error to you. If you use neither mirroring nor RAID, ZFS will catch the error and report it, but will not be able to repair it. No need to do anything; just store and forget, and ZFS will handle checksums and everything for you. It turns out that it is very difficult to build a bulletproof storage system, as CERN concludes in research papers: "even very expensive storage systems are not reliable to bit rot" - so I doubt the reliability of the PAR you talk of; has it really been researched?
Research shows ZFS to be far more reliable and safe than any other solution, including very expensive storage systems. This is the reason CERN is now switching to ZFS for long-term storage of their huge amounts of particle data: they need a safe solution. This is why people use ZFS: it is very, very reliable and safe, much more so than any other storage solution. All the other functionality - snapshots, extreme scalability and speed, scrubbing, etc. - is just icing on the cake. For instance, the IBM supercomputer Sequoia uses Lustre on top of ZFS as the storage solution for its 55 petabytes at 1TB/sec.
And ZFS is platform agnostic: people have switched OS between FreeBSD, Linux, Mac OS X and Solaris without problems, and all their storage on ZFS can be read on each OS. So you are not tied to any OS; you are free. But remember, you must NOT use a hardware RAID card with ZFS, because hardware RAID will interfere with ZFS's logic, which means ZFS can no longer guarantee data integrity. Sell your hardware RAID card and just plug the disks directly into the motherboard or a simple HBA card. There is a lot more information, and research papers, on the Wikipedia entry for "ZFS".
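For anyone who hasn't used it, the store-and-forget setup described above amounts to only a few commands. A minimal sketch - the pool name and device names are examples, substitute your own:

```shell
# Two-disk ZFS mirror, disks attached directly (no hardware RAID in between).
zpool create tank mirror /dev/sdb /dev/sdc

# Scrub: read every allocated block and verify it against its checksum,
# repairing from the mirror copy where an error is found.
zpool scrub tank

# Shows scrub progress plus any repaired or unrecoverable errors.
zpool status -v tank
```

A periodic scrub (cron'd weekly or monthly) is what turns the per-read checksumming into the proactive bit-rot sweep discussed above.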
Nice advertisement for ZFS...except for one thing. It's NOT platform agnostic. Otherwise, it would work natively in Windows.
"An OpenZFS port of code to Windows is not likely in the foreseeable future. The OpenZFS launch discussion on Slashdot touches upon some of the issues."
The closest solution to this is to use another machine and network the drive; zfs-win only mounts the drive read-only. Oh, and I also read that its more robust data-protection features are memory-intensive.
In any event, I'm thinking of the more likely event of gradual deterioration (a sudden catastrophic drive failure is basically game over for anything short of paranoid redundancy, which is not usually the desired setup for a consumer). PAR files combined with strategic physical file allocation (arrange the physical files ext-style, shotgun-like) should increase the odds of a recovery in this scenario.
I don't think so. Just in official circles, what you can find on the Internet probably ranks at least in the tens of TB...and growing. No single drive on Earth has the capacity, and I suspect the amount of porn will keep growing with the drive sizes, making it rather a chase.
Tens of TB is a VERY low estimate.
I would put it in the Petabyte range at least.
A quick look at "Recent Releases" on one of the DVD sites lists 1,081 movies released in the last 90 days. At a VERY conservative 15GB-per-movie average, that is 15.8TB in 90 days, or approx 65TB a year, which - if we say the industry has done HD for 4 years - is a quarter of a PB right there. Add historical digitised content going back 20+ years and amateur/cam girl/RedTube-type content, and I would say the far side of a PB, easy.
15GB per HD movie? That's generous. HD net videos shouldn't run more than 2GB apiece on the top end (BD 1080p rips at 1GB/hr are considered generous; most are half that). And all the retro stuff can be crunched down even further due to the reduced resolution: say 10MB/min, or about 600-800MB per title. Meanwhile, all the PornTube stuff is even smaller: say 100MB for a decent size/quality clip. That could shrink the whole estimate back to the low end of the PB scale.
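Running the two estimates side by side (all figures are the posters' assumptions above, not measured data):

```python
# High estimate: 1,081 releases per 90 days at 15GB each
movies_per_90d, gb_high = 1081, 15
tb_per_90d = movies_per_90d * gb_high / 1000   # ~16.2 TB / 90 days
tb_per_year = tb_per_90d * 365 / 90            # ~66 TB / year
print(f"High: {tb_per_year:.0f} TB/yr, "
      f"{tb_per_year * 4 / 1000:.2f} PB over 4 years of HD")

# Low estimate: ~2GB per HD rip instead of 15GB
gb_low = 2
print(f"Low:  {tb_per_year * gb_low / gb_high:.0f} TB/yr")
```

Either way, both estimates put the new-release firehose alone well beyond any single 8TB drive per year, before historical and amateur content is counted.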
WD Drive types:
Green = low power; in my experience QC is damn near non-existent on them
Blue = consumer-level drives with about the same QC as Green, so crap
Red = drives made specifically for NAS use; haven't bothered to look into what's different about them and have limited experience with them
Blacks = Once you go black you don't go back. Nothing but praise for them myself.
I also hear they have a purple drive but I have no knowledge about it other than that.
I have a bunch of KVM virtualisation servers on PC hardware running 4 mirrors each, each comprised of one Hitachi drive and one Seagate drive. I haven't got the details on them at hand (can't be bothered to fire up the VPN, log on, check the drives... too much like work on a Friday night :-) ) but they are a couple of years old (1TB drives). My observation:
Seagate drives are failing like crazy; the Hitachis just keep on running! Sad about that, really - hopefully WD puts their acquisition to good use!