Can you quantify that "extra strain"? I can only see two potential problems: head load/unload cycles, and thermal shock from going from cold to hot and back again. The first is not a problem, as even consumer PC drives are rated for at least 400K to 600K head unload cycles, so a daily shutdown only accounts for 365 of these a year. The real danger is aggressive APM spinning down idle hard drives, in some cases after just 8 seconds; some people have racked up hundreds of these cycles in a matter of hours, especially on external hard drives. Enterprise hard drives usually have more unload cycles available than consumer drives, even though they don't need them. Secondly, I haven't seen any statistics from tests where hard drives are, say, fired up and run for maybe half an hour to warm up to their working temperature, then turned off for maybe half an hour to cool off, with this cycle repeated over and over until failure occurs, so I presume this is a non-issue too, unless you can point me to some research somewhere.
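To put numbers on the two usage patterns above, here's a hypothetical back-of-envelope sketch (the 400K figure is just the low end of the consumer rating quoted, and the APM case assumes the worst possible rate of one park cycle every 8 seconds):

```python
# Unload-cycle budget: one daily shutdown vs aggressive 8-second APM parking.
rated_cycles = 400_000                  # low end of a consumer drive's rating

years_daily = rated_cycles / 365        # one load/unload cycle per day
hours_apm = rated_cycles * 8 / 3600     # worst case: one park every 8 seconds

print(round(years_daily))               # centuries of daily shutdowns
print(round(hours_apm))                 # hours to exhaust the same budget
```

So a daily shutdown never comes close to the rating, while pathological APM could in principle burn through it in a matter of weeks of powered-on time.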
Re: why didn't WD/HGST put a jumper in
I tend to agree. Back when Seagate announced they were ceasing production of their 5400 RPM green drives, two of the reasons I remember them quoting at the time were that the faster drives they had available only used something like 0.5 watts more power, which they considered insignificant, and that there were too many SKUs. Naturally, after a month or two's absence, they suddenly and magically had 5400 RPM "NAS" drives available for sale in quantity, and of course at a much higher price.
Depends on how they increase the storage size, whether by increasing the track density or the linear bit density. If you assume the drive will be 4 times bigger by doubling both, it will still only take twice as long to access all of it, as you get double the data in each rotation of the platter, so ZFS RAID-Z3 will probably suffice. It doesn't even matter if the drives are shingled, because even Solaris introduced sequential resilver, I think in version 11.2 in 2014. If you don't use the RAID array while it's resilvering, you're hardly working the actuator heads, as they will be gently clicking over from one track to the next, the fluid bearings won't wear out, and being enterprise drives they will be adequately cooled, so no problems there either. I've sequentially resilvered an idle ten-drive RAID-Z2 array of 3TB Toshiba DT01ACA300s in 7 hours with an i7-4820K CPU processing 1 GB/s, and my other array in another PC with an E5-2670 v1 CPU does 1.3 GB/s; enterprise systems with more grunt should go even faster. So yes, it could take days, but so what? With ZFS the procedure should eventually complete successfully no matter how long it actually takes.
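The timings above check out with simple arithmetic, assuming decimal units (1 TB = 1000 GB, as drive makers use) and a roughly constant sequential rate; this is just a rough estimator, not a ZFS tool:

```python
# Back-of-envelope sequential resilver time for a given amount of
# allocated raw space at a constant processing rate.
def resilver_hours(allocated_tb, rate_gb_per_s):
    return (allocated_tb * 1000) / rate_gb_per_s / 3600

# ~25 TB of allocated space at 1 GB/s is about 7 hours:
print(round(resilver_hours(25.2, 1.0), 1))
# a full 10 x 3 TB pool (30 TB raw) at 1.3 GB/s:
print(round(resilver_hours(30.0, 1.3), 1))
```

Scaling the same formula to a hypothetical pool of 4x-bigger drives at only 2x the sequential rate is where the "it could take days" figure comes from.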
Yes, cassette tapes are good for this reason. Firstly they are analogue, and the special machines they should be using also record special tone(s) alongside the audio track, so that cutting and pasting the tape, or editing sections out of it, is detectable. But to answer your question: if you use the ZFS filesystem you can snapshot the data on the hard drive, so for example ransomware thinks it's encrypting your data, but all it's doing is writing fresh data while the source data sits in a read-only, immutable snapshot. If you record to LTO tapes you can also get a special WORM (write once, read many) cartridge variant that could conceivably satisfy this requirement.
No, because "uses" as you put it means filling the tape 200 times. According to Wikipedia (https://en.wikipedia.org/wiki/Linear_Tape-Open), LTO-5 requires "80 end to end passes to fill up a tape", so your ACTUAL TOTAL "Expected tape durability, end-to-end passes" is 16,000 (80 x 200), not the 200 you quoted.
However, since the total size is 1.5 TB, one pass covers 18.75 GB (1.5 TB / 80). So say the entire data access operation is reading just one file (e.g. a movie): even at up to 18.75 GB it could conceivably be contained in one pass, but realistically it would most likely be split over two, so you could quite comfortably repeat this procedure about 8,000 times. Anything much smaller would most likely be readable in a single pass. Of course I'm presuming the data on the tape is contiguous, but that's a reasonable assumption, as the backup software would most likely be writing the files sequentially. I'd make the further assumption that each of the 4 bands and the 20 wraps per band get roughly the same amount of access, because otherwise each individual wrap might only be good for 200 passes before the tape is worn out in that spot and the drive/software offlines the cartridge permanently due to "too many hardware errors" or whatever.
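For anyone wanting to redo the wear arithmetic above with other LTO generations, it's just three numbers (the figures here are the LTO-5 ones quoted from Wikipedia):

```python
# LTO-5 wear arithmetic: passes, data per pass, and single-file read budget.
capacity_tb = 1.5            # native LTO-5 capacity
passes_per_fill = 80         # end-to-end passes to fill one tape
full_tape_durability = 200   # rated full-tape rewrites

total_passes = passes_per_fill * full_tape_durability   # end-to-end passes
gb_per_pass = capacity_tb * 1000 / passes_per_fill      # GB covered per pass

# a single-file read that straddles two passes:
single_file_reads = total_passes // 2

print(total_passes, gb_per_pass, single_file_reads)
```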
Tape drives themselves have an MTBF of around 250K-1M hours, and their rated tape load/unload cycles are also huge, so I don't see them failing early for this reason either. But who knows, the tape holding the leader pin might snap off after, say, 1,000 load/unload cycles; since it's reinforced around that area, though, 8,000-16,000 cycles of single-file accesses <= 18.75 GB in size might still be quite reasonable for one tape cartridge.
That's my 2 cents' worth, but someone more knowledgeable who actually uses LTO extensively may have a different opinion.
Yes, but modern hard drives that use 5-10 watts need around 20-25 watts to spin up, if only for a short period, and most of that extra draw is on the 12 volt feed, so some reasonably large capacitors on that line could also assist. That only applies to consumer drives and setups; server-grade gear such as SAS drives and RAID cards/HBAs has features like PUIS (power-up in standby), where the RAID card avoids such power surges by spinning the drives up one at a time. Also, if you do start up a batch of say 10-20 drives in PUIS mode, you may only need a couple in that batch to actually spin up and work; the others can stay in powered standby, mechanically idle, until you shut that batch down, which also saves on head load/unload cycles for the unused drives.
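A rough sketch of why staggered (PUIS-style) spin-up matters for PSU sizing, using the wattage figures above (the per-drive numbers are the approximate ones quoted, not any particular drive's spec):

```python
# 12 V rail peak draw: simultaneous vs staggered spin-up of a 20-drive shelf.
idle_w, spinup_w, drives = 7.5, 25.0, 20

simultaneous_peak = drives * spinup_w                 # every drive surging

# staggered worst case: one drive spinning up while all the others
# are already up and idling
staggered_peak = spinup_w + (drives - 1) * idle_w

print(simultaneous_peak, staggered_peak)
```

Roughly a 3x reduction in the peak the supply has to ride out, which is the whole point of spinning drives up one at a time.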
Re: Thanks to the recording industry...
Perhaps you might want to consider storing your data on LTO drives and accessing the tapes via LTFS, which appears to be completely free of the tax you mentioned, at least according to my reading of this document: http://www.copiefrance.fr/files/Tariffs_ENG_2017.pdf.
Re: Poor tape, gets no respect
I agree. From what I've read, datacentres with tape libraries on LTO-X wait until LTO-(X+2) comes out about 4-5 years later, then spend a couple of months moving thousands of tapes over to the new ones, needing on average only about a fifth as many cartridges, since LTO-(X+2) drives can still read LTO-X tapes. They can then finally retire the old, perfectly good drives, which still hold reasonable value for other people; the one main advantage of the new hardware is that if they do actually need to recover data, they have much faster drive speeds with which to do so. I expect to see a lot of this happen again when LTO-8 comes out, presumably somewhere in the expected October-January 2018 timeframe. I'm not sure what they do with the old tapes: either they sell them off, or they just keep them as they are, since they remain a perfectly good backup of the original data for another 4-5 years, until LTO-(X+4) comes out and the process repeats.
Re: real problem - they don't want to pay telstra etc
I doubt that's the case. Read their explanation here: http://www.msn.com/en-au/news/techandscience/foxtel-reveals-source-and-scale-of-the-glitches-that-crashed-game-of-thrones/ar-BBEEZkR?li=AA4Zor&ocid=iehp, where they say 'The company claimed the issue resided in its identity management (IDM) system', and also 'Ordinarily, the IDM handles around 5000 requests a day, the company said. But on Monday, it "was hit with 70,000 transactions in just a few hours".' How much bandwidth is used for authenticating credentials? I presume something similar to an online bank login, several tens to a couple of hundred kilobytes perhaps. If you take 'a few hours' as say three hours (10,800 seconds), dividing that into 70,000 transactions gives about 6.5 transactions per second, so hardly bandwidth-intensive. I'd say it's more likely a slow database server implementation, probably running on some RAID 5 array of spinning rust with consequently low IOPS. I presume that once people successfully logged on and got authenticated, they had no problems with the actual video stream downloads, which would be several orders of magnitude higher in bandwidth, because there weren't any complaints in that area.
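A quick sanity check on those IDM numbers (the three-hour window and the ~100 kB per login are my guesses, as stated above, not anything Foxtel published):

```python
# Rate and bandwidth of 70,000 auth transactions over "a few hours".
transactions = 70_000
window_s = 3 * 3600                       # take "a few hours" as three hours

rate = transactions / window_s            # transactions per second
bandwidth_mbit = rate * 100_000 * 8 / 1e6 # assume ~100 kB traffic per login

print(round(rate, 1), round(bandwidth_mbit, 1))
```

Around 6.5 logins/s and single-digit megabits per second, i.e. negligible next to tens of thousands of concurrent video streams, which supports the point that bandwidth wasn't the bottleneck.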
Re: I have a new 1PB+ storage option.
No, not more compressible, but don't forget that even fairly simple algorithms, like the NTFS compression introduced in Windows NT 3.51, could easily reduce easily compressible data by around 50% (where WinZip or WinRAR would shrink it down to, say, an eighth of the original size).
Newer tape drives will have better algorithms, and even if they don't, they will most likely have larger memory buffers and work on larger chunks of data. Something like LZW compression, even back then, would have gotten better results with a larger workspace, going from a 128 KB buffer to say a 1 MB one, and the last time I checked these drives may have something like 1 GB of on-board RAM, so it's not inconceivable that they can get this sort of compression. Obviously if they get fed something like MPEG-2 data or random numbers, the compression gain will be zero.
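The point is easy to demonstrate with any dictionary coder; here zlib stands in for whatever LZ variant a given tape drive actually implements (the specific inputs are just illustrative):

```python
# Dictionary coders squeeze repetitive data hard but gain nothing
# on already-random (or already-compressed) bytes.
import os
import zlib

repetitive = b"the quick brown fox " * 5000   # 100,000 bytes of text
random_ish = os.urandom(100_000)              # incompressible input

small = len(zlib.compress(repetitive, 9))
big = len(zlib.compress(random_ish, 9))

print(small < 2_000)       # repetitive input shrinks to under 2%
print(big >= 100_000)      # random input gains nothing (slight overhead)
```

This is also why a drive's advertised "2.5:1" capacity is meaningless for already-compressed video: the second case is what MPEG-2 streams look like to the compressor.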
I know some professionals who add the charge for the four hours of driving on top of the work they do, and when customers complain about the quote they are bluntly told up front to "take it or leave it". But since they are highly skilled and very good at their job, they still invariably get the work order, as it's still cheaper to get it done right than to use the previous contractors who may have right royally stuffed things up and then cost a lot of money to rectify.
"drone operators work 12-hour shifts five or six days a week" As if this isn't bad enough regarding total hours worked (60/72) per week, the article linked to also stated that they were "sapped by alternating day and night shifts " which would really do wonders for your health as you could never establish a consistent sleep cycle. I'm actually more surprised at why people would even undertake such a job under those conditions in the first place.
Re: First sort out frame rates
I agree; interlacing was a brilliant idea, as still images showed full resolution while moving images updated 50 times a second (analogue PAL). I'm not familiar with the broadcasting method for digital TV at all, but if you consider MPEG-2 P-frames, I don't see a conceptual problem with replacing each full frame (assuming each frame was already a P-frame) with two half-frames (giving 100 Hz of updates), or even 4 quarter-frames giving 200 Hz of updates. Other options are conceptually possible too, such as simulcasting the intervening frames on another effectively available channel, e.g. frames 1,3,5,7... on the main channel and frames 2,4,6,8... on another, with a sufficiently sophisticated TV combining the two to double the Hz. From a practical perspective, though, nothing like this will probably ever eventuate.
As far as I understand it, 100 Hz and 200 Hz TVs simply interpolate between each pair of 50p frames and display the intermediate results, which as far as I can tell works reasonably well, so that's another reason nobody will probably bother doing anything about this issue. We might have better luck if more films are created at double the normal picture rate (e.g. The Hobbit, Avatar 2+); when 4K broadcasting takes off they may well consider this and build the higher refresh capability in up front. We can only hope!