Obviously
"What vast digital media repository could possibly need 64 TERABYTES?"
The average college student's downloaded pron collection.
Pay $7,000 and you could have a 64TB desktop storage pool connected to your Mac workstation. It's a lot of money, but 64TB is a hellacious amount of storage. Say it costs $0.109/GB, which sounds sort of affordable. Who makes such a box? It's an HGST G-SPEED Studio XL box with eight 8TB helium-filled He8, 7,200rpm drives mounted …
I'm one of those people, for my home data storage.
Power cuts/surges are so rare, at least here in the first world, that the cost of acquiring and maintaining a UPS (don't forget to replace the battery every so often!) is greater than the cost of having some backups and occasionally replacing a disk. In fact, my upgrade cycle for replacing disks because I've run out of space is far faster than the time between power supply problems, so if something does go wrong I'll just upgrade sooner.
UPSes matter more at work only because the downtime caused by an outage is more costly.
Clearly you do not live in the parts of the so-called first world where violent electrical storms are a matter of course. Some of us do.
Try monsoon conditions - nothing trips an earth leakage detector quicker. I personally don't have the problem as I work with 2.5" external drives which are protected by the laptop battery (I'll be well shut down before it becomes an issue), but I've seen plenty of externals go down badly. I guess it depends on the file system in use; if you have journalling features you probably won't suffer much.
RAID arrays seem in practice to be far more vulnerable to corruption than single drives (internal or external). Higher-end RAID cards have their own battery built into the card to give them enough time to write out their buffer, IIRC. It's generally wise to use a UPS anyway, even if it's just to give you a few minutes to turn off the system gracefully, and they are cheap these days. Not putting a $50 UPS on a $7,000 NAS seems a little silly to me, especially as the data on it is probably even more valuable - but that's only from my perspective (our power is off here more than a tart's knickers).
It's only scary if you view a 3.5" disk as something that is unlikely to fail, or use a file system that can be corrupted by unexpected power outages. Once you take into account that hard drives fail, and that power supplies go out, you can easily build resilient systems on cheap hardware.
Helium isn't easy to contain either. It's a smaller leak species than hydrogen, since H wants to put its arm around another H to make H2, while He stays on its own. He is a real bastard to seal up, and it is used as a leak-detection gas when testing systems that need to be as gas-tight as possible.
I'm going to hold out on investing in any drives that need to be run in a He atmosphere until there is some feedback on how well they hold up.
To be fair, virtually nobody works in uncompressed raw 4K, but 5K and 6K are a reality and 8K is imminent, and that's going to drive bit rates even higher. Plus you often have multiple cameras shooting the same scene, plus additional takes. This is squarely aimed at independent film makers; $7,000 is the price of a mid-range lens, so it's probably a good deal for someone who just wants something that plugs in and works.
Using the bullshit scheme that only HD manufacturers believe in, $7k for 64,000 GB is $0.109/GB.
Using sensible numbers, it is $0.117/GiB.
Since anyone sane would use RAID 5 at the very least, it goes up to $0.134/GiB minimum, unformatted.
After formatting it'd be slightly more and usable space would be a little over 50TiB.
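For anyone who wants to check that arithmetic, here's a quick sketch (assuming the 8-drive, single-parity layout, and ignoring formatting overhead):

```python
# Cost maths for the $7,000 / 64 TB box, assuming 8 x 8 TB drives
# and RAID 5 (one drive's worth of capacity lost to parity).
price = 7000.0
raw_bytes = 64 * 10**12                     # manufacturer "64 TB"

per_gb = price / (raw_bytes / 10**9)        # decimal gigabytes
per_gib = price / (raw_bytes / 2**30)       # binary gibibytes
per_gib_raid5 = price / (raw_bytes * 7 / 8 / 2**30)

print(f"${per_gb:.3f}/GB, ${per_gib:.3f}/GiB, "
      f"${per_gib_raid5:.3f}/GiB after RAID 5")
# -> $0.109/GB, $0.117/GiB, $0.134/GiB after RAID 5
```

The numbers line up with the figures quoted above; real usable space would be slightly lower again once the filesystem takes its cut.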
"What vast digital media repository could possibly need 64 TERABYTES?"
I'm currently running my media server with 12TB of storage, and with only 2TB of space left across the 4 drives. So I'll certainly be looking to replace the 3TB drives with at least 4TB ones soon... I'm slowly replacing all of my DVD rips with Bluray ones... and it's hard having to compress files due to storage concerns. At the moment the best films are done at between 10-20GB and the ones that I don't really care about as much are done at 4-8GB.
Then on top of that there are the TV shows that I've not even started on yet and want on the server.
Having just built a new mediaserver 6 months ago, which should see me through the next 5yrs (last one did me 4yrs and would still be going strong if I hadn't placed it precariously on the edge of the sofa when moving stuff around and cleaning... so that it fell to the floor upside down wrenching the heatsink free which duly smashed into the ram damaging both a stick of ram and the dimm slot... necessitating the upgrade)... I'm hoping that by the time I do want to rebuild again... I can simply buy a few 6-8TB drives at reasonable prices. My storage needs are increasing by around 4-5TB a year at the moment, so in 5yrs I'll be needing close to 30TB I reckon.... and a raid5 array would be ideal to cover loss of data.
and no... I am neither a student, nor is it all porn... I have a separate 500GB drive just for the porn. :P
IHL,
I raise a pint for you!
Same-ish here [ext smut drive excluded]: one oldish desktop tower PC + Ubuntu server + MythTV + small SSD for system, 4 x 2TB drives with RAID = very useful media server.
We have been using PlayTV on a PS3 to record all top TV for the last six years or so.
Now need to expand it, so some form of external multi-drive enclosure is needed, plus 4 x 4TB drives.
Then I can get all of our DVDs onto it as well, then to the loft they go!
Funnily enough I also have a Mac Pro with Final Cut Pro, and it uses A LOT OF SPACE!!!
Cheers,
j
It's not a contest ... but I got fed up with cramming servers full of hard disks for my media server, so got a couple of 16 bay enclosures 2nd hand ($100 each) and a decent HBA (£80) from ebay, fitted them out with huge silent 200mm fans and silent PSUs and they sit inside an Ikea LACK rack. Currently I've got 8 * 3 TB in raidz, and 8 * 1.5 TB in raidz3 (old disks that I don't trust much..) for a total capacity of about 26 TB formatted capacity, plus a pair of SSDs for root/VMs/DBs/work and another as a read cache for the data arrays.
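For what it's worth, the usable-capacity sums behind that "about 26 TB" figure check out - here's a sketch, assuming raidz costs one disk of parity per vdev and raidz3 costs three, with "formatted" read as TiB:

```python
# Usable capacity of the two ZFS pools described above.
# (num_drives, drive_size_bytes, parity_drives)
pools = [
    (8, 3e12, 1),    # 8 x 3 TB in raidz  (single parity)
    (8, 1.5e12, 3),  # 8 x 1.5 TB in raidz3 (triple parity)
]
usable = sum((n - parity) * size for n, size, parity in pools)
print(f"{usable / 2**40:.1f} TiB usable")  # -> 25.9 TiB usable
```

ZFS metadata and reservations shave a bit more off in practice, but "about 26 TB" is the right ballpark.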
> and, who knows, object storage with its non-RAID data recovery features like erasure coding
RAID is an implementation of erasure coding.
> may make a desktop entry, either as the storage layer underlying the top-level file access or as a direct offering for apps with built-in object storage access.
What compelling reason would there be to put an object filesystem layer in here for desktop use? It won't speed up rebuilds.
There are a few ways to speed up RAID rebuilds:
* Smaller raid sets (though at a cost of capacity)
* Efficiently rebuilding by removing as many bottlenecks as possible (more drives, controllers, SAS lanes, etc.) - this is what most people think of when they're talking about erasure coding, but it equally applies to single arrays.
* Only rebuild data, not empty space. This is also what most people thinking about erasure coding schemes are referring to, but while that's impressive on an empty array it's less so when the array is 100% full - and all 'enterprise' arrays should do this anyway, if they own/understand the filesystem and hence know what is empty space.
On capacity: the bit error rates on these drives should give you pause if you aren't using at least dual parity. 64TB is 5.12e14 bits - uncomfortably close to the one unrecoverable error per 1e15 bits read that the drives are rated for. Smaller RAID sets help - e.g. 5+0 is better than 5 for this case, but 6 is better than 5+0 for the same number of drives if resilience is needed.
The only reason to have RAID-5 on this would be for scratch space, just so that a single bit error didn't cause a multi-hour job to fail. RAID-0 just isn't any use at all with these drives.
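To put a rough number on that risk: treating each bit read as an independent trial at the rated error rate (a simplification - real UREs cluster, so take this as an order-of-magnitude sketch), a full read of the pool looks like this:

```python
import math

# Odds of hitting at least one unrecoverable read error (URE)
# while reading the whole 64 TB pool, at the rated 1-in-1e15-bits.
bits = 64e12 * 8          # 5.12e14 bits in 64 TB
ber = 1e-15               # rated unrecoverable errors per bit read

# log1p/expm1 keep precision for probabilities this close to 0 and 1.
p_ure = -math.expm1(bits * math.log1p(-ber))
print(f"P(at least one URE over a full read) ~ {p_ure:.0%}")  # -> ~40%
```

A ~40% chance of a URE per full-array read is exactly why a single-parity rebuild of an array this size is a gamble, and why dual parity (RAID 6 / raidz2) is the sane floor here.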
64TB is not that much if you edit video and do visual effects.
What is scary is that IBM-Hitachi was a marriage made in heaven but since the transition to 'HGST' they answer to the shareholders and you might as well save your data on toilet paper covered in iron filings. I hate to say it but Western Digital drives are the best now.
Well done, you've completely ignored the enclosure cost per bay, which includes many things and varies depending on how many bays you require - HBA controller to support that many drives, enclosures to plug in to HBA, ((140*3)/16)U rack space for enclosures, increased PDU demands....
For a system with 140 drives, you would need to have triple redundancy, because you also need to take into account a single enclosure or HBA dropping away temporarily, by increasing redundancy, distributing disk arrays across enclosures and going multipath to the disks.
Then, after a year of use, look forward to replacing a hard drive at least every other month.
Tom38: Thanks for reading my post so carefully. I'm not an IT guy (obviously), just someone with a Seagate 4TB and a Toshiba 3TB both packed to the gills with DVD back-ups, like an earlier commenter. If you had chosen to do the math you would have realized that New Egg can sell you 4TB of external storage for $140; x16 = $2,240 for 64TB. I wasn't suggesting that was practical for a pro setup, only that for consumers like myself who are not IT pros, and already have a few multi-TB drives bulging with DVD backups, a $7,000 enclosure is not very appealing. By the way, where you came up with 140 drives I have no idea, but thanks for the reply.