
Tape backup could be binned soon

Backing up to tape in an autoloader or small library could be heading for the graveyard if an Imation deal with BDT comes good. BDT is a white-box manufacturer of tape autoloaders and libraries, devices with up to four tape drives and 96 slots for cartridges. They are used in small and medium businesses (SMBs) to back up data …

COMMENTS

This topic is closed for new posts.

I suggest a head-to-head test..

I suggest the following head-to-head test:

- Back up the same data to one RDX cartridge and one LTO cartridge.

- Bring both cartridges to a second-floor window over a concrete pavement.

- Drop both cartridges.

- Verify that both are readable.

//Svein


Agreed

Add 8-track to the competition while you're at it


I think I'll fast forward 5 years ...

... and drop my 1TB flash memory stick off and see which wins.


You forgot one...

Will it blend?


5 years..

Is that before or after you've gone through five new USB flash drives, as they keep dying or getting zapped by static? USB flash is not long-term storage.

Anonymous Coward

Form factor

I find it surprising that none of the major HDD manufacturers have considered a return to the 5.25" full-height form factor. It would probably need a reduced spin speed, and maybe the areal density slightly reduced because of increased head angles, but the larger number of platters and the vast increase in available surface should still allow capacities around 50TB with better transfer rates than tape. They might be too slow for use as general-purpose drives, but just the job for backup and media libraries.

@Svein - stupid example. Why not throw yourself out of a second floor window and see how well you work afterwards?


Not so stupid

Lots of applications for tape backup require the robustness of tape, particularly in the offshore seismic data recording industry. Admittedly the OP isn't referring to replacing tapes in such an environment, but a high degree of physical robustness is required in many environments outside the office or data centre.


The Gravity Challenge Is...

...brought up by Sony salesmen, as it relates to archival television content.

My personal thoughts: Store on redundant hard drives, at least one of which is in another city.

211 is marketed as "high gravity" so another data safety test might involve drunken operators at the console.


DAT/LTO/DISK

I can't believe HP still sell DAT72 autoloaders with a whopping 320GB* capacity when for the same price (give or take) you can buy a single LTO4 drive with more than twice the capacity (800GB*). Why use 10 tapes when 1 will do?

I don't expect these disk cartridges to be as cheap as an LTO4 tape, though if they are and the library is cheap then it may be a viable alternative for small businesses, if they are shown to be robust enough that is.

* I refer to the real, native capacity, as only marketing people live in a world where 2:1 compression is a realistic expectation and a yardstick by which to measure storage capacity.
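The poster's point can be sanity-checked with a little arithmetic. A rough sketch, assuming DAT72's 36GB native capacity per cartridge (the 320GB autoloader figure is roughly nine slots' worth) against LTO4's 800GB native; prices and slot counts are ignored:

```python
import math

DAT72_NATIVE_GB = 36    # one DAT72 cartridge, native (72GB is the 2:1 marketing figure)
LTO4_NATIVE_GB = 800    # one LTO4 cartridge, native
MARKETING_RATIO = 2.0   # the 2:1 hardware-compression claim being mocked above

def cartridges_needed(dataset_gb, native_gb, assume_compression=False):
    """Cartridges needed for a backup set, optionally trusting the 2:1 claim."""
    effective = native_gb * (MARKETING_RATIO if assume_compression else 1.0)
    return math.ceil(dataset_gb / effective)

# Backing up 800GB of already-compressed (i.e. incompressible) data:
print(cartridges_needed(800, DAT72_NATIVE_GB))  # 23 DAT72 tapes
print(cartridges_needed(800, LTO4_NATIVE_GB))   # 1 LTO4 tape
# What the marketing brochure would claim:
print(cartridges_needed(800, DAT72_NATIVE_GB, assume_compression=True))  # 12
```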


Good idea = use HDDs instead of tapes

Now that's an interesting idea - use cheap notebook drives instead of tape cartridges. Maybe the tape cartridge survives a few more G of mechanical shock, but the disk, with its magnetic heads and bearings sealed in a dust-tight environment, should on the other hand survive many more overwrites and longer hours of continuous operation. That may also compensate for the higher cost of disk drives vs tape cartridges (and maybe tape drives).

Why use mechanical or electrical multiplexing of the drive lanes at all? For a smaller number of drives (= cartridges), you can just as well use an expander; it may be cheaper than a robot plus tape drives. If you have the know-how, you can build your own virtual library along that principle - buy some cheap hot-swappable case (such as a SuperMicro SC216 or SC417, or an SC847 for 3.5" drives) and build a virtual library on top of that. It's not much of a problem to shut down or spin up a drive in software, or keep it in some shallower stand-by state, and even to watch out for hot-swaps. An important part remains to be solved though: the management-software candy on top, and some backup client software - something to keep track of your "tape-style disk cartridges" and maybe provide a virtual tape interface on demand to the clients. Someone in the open-source camp with plenty of free time could as well start coding something like that :-) Okay, once you run out of drive bays and you need a robot, it's back to the library makers...
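A minimal sketch of the bookkeeping half of such a DIY virtual library - the slot names and device nodes are hypothetical, and on Linux the actual spin-down would be `hdparm -y` (immediate standby), which this only issues when not in dry-run mode:

```python
import subprocess

class DiskLibrary:
    """Treat hot-swap disk bays as 'tape-style disk cartridges'.

    Tracks each slot's power state in software; spinning a cartridge down
    is just `hdparm -y` (Linux: put drive into standby) on its device node.
    """

    def __init__(self, slots, dry_run=True):
        self.slots = slots                          # slot label -> device node
        self.state = {s: "standby" for s in slots}  # assume drives start spun down
        self.dry_run = dry_run                      # don't shell out while testing

    def load(self, slot):
        # Any read will wake the drive; here we only record the state change.
        self.state[slot] = "online"

    def spin_down(self, slot):
        cmd = ["hdparm", "-y", self.slots[slot]]
        if not self.dry_run:
            subprocess.run(cmd, check=True)
        self.state[slot] = "standby"
        return cmd

lib = DiskLibrary({"A1": "/dev/sdb", "A2": "/dev/sdc"})
lib.load("A1")
print(lib.spin_down("A1"))   # ['hdparm', '-y', '/dev/sdb']
```

The cataloguing and virtual-tape "candy" the comment asks for would sit on top of something like this.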

There are potential fields of application where it's intriguing to deploy a big robotised tape library, with several tape drives, to perform long-term archival of data - such as from continuous video surveillance systems (big-brother kind of thing) or medical X-ray/CT data. I've been told by practitioners who have attempted that kind of thing that tapes have downsides in this application. The tape drives wear out much too fast - not up to 24x7 continuous operation in video systems. And in the medical systems, the users tend to get addicted to having a patient's past history always at their fingertips, so the library again just keeps humming all the time and the users are disappointed by the access time ("hey, it's all in the computer somewhere anyway, so why does it take so long?"). Plus, in medical imaging, the data volume just EXPLODES every time a new machine is installed in the hospital (having a higher resolution, being 3D rather than 2D, etc.). And, somewhere in between all that, the tape cartridges are not very reliable after all... It's a crazy world...


notes

Actually, disk can survive a larger shock than tape. The metal enclosure might get dented, but when not spinning, disks are good to 300Gs. Tapes crack easily, but more importantly, spindle shift is a huge issue if the tape lands flat side down. You're dead on about the environmental weaknesses of tape though.

With disk you get 2 systems, not 1. Tape you archive to and then remove. Disk can be both an online AND offline recovery system. Backups run to disk, are cataloged, deduped, and compressed, but STAY on the internal array of the DR system. This is low-tier cheap storage, a simple collection of JBODs. Backups can be rsynced from there to removable slots, and those can even be a RAID 5 disk set. Each job is a file, not a virtual tape, so the number of disks in the archive only needs to be as large as the total of compressed storage - no complexity. The backup array is simple flat storage.

Client backups are stored in sort-of TAR balls, but their contents are managed by a database engine and deduplication algorithms. It's a master/incremental set, not a master and tons of differentials, even though a differential is technically performed each day. The DB knows the time indexes and job numbers, so even though you only need 2 files to completely restore a server, you can also do point-in-time recovery to any step in between the master and the last backup. Advanced real-time disk syncing is also possible, increasing the snapshot frequency. Yes, all the magic is in the engine, but the data on disk is small and simple, and portable DB exports could easily be imported into a clean system after a site disaster (or synced to a WAN-based mirror system).

Also, a disk array can write as many backups in parallel as you have IOPS to spare, whereas a tape system is limited by the number of read/write drives (and those drives are expensive!).
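The "jobs are files, contents managed by a database and dedup algorithms" idea can be sketched with fixed-size block hashing (real appliances use smarter content-defined chunking; the block size and names here are made up for illustration):

```python
import hashlib

class DedupStore:
    """Toy block-level dedup: each unique 4KiB block is stored once; every
    backup is just a 'recipe' of block hashes, giving point-in-time restores."""
    BLOCK = 4096

    def __init__(self):
        self.blocks = {}   # sha256 hex digest -> block bytes
        self.backups = {}  # (client, timestamp) -> ordered list of digests

    def backup(self, client, ts, data):
        new_blocks = 0
        recipe = []
        for i in range(0, len(data), self.BLOCK):
            chunk = data[i:i + self.BLOCK]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.blocks:   # only unseen blocks hit the disk
                self.blocks[digest] = chunk
                new_blocks += 1
            recipe.append(digest)
        self.backups[(client, ts)] = recipe
        return new_blocks

    def restore(self, client, ts):
        # Point-in-time restore: reassemble any day's recipe from the pool.
        return b"".join(self.blocks[d] for d in self.backups[(client, ts)])

store = DedupStore()
day1 = b"A" * 40960                   # ten identical blocks
day2 = b"A" * 36864 + b"B" * 4096     # one block changed overnight
print(store.backup("srv1", 1, day1))  # 1 - ten "A" blocks dedupe to one
print(store.backup("srv1", 2, day2))  # 1 - only the changed block is stored
assert store.restore("srv1", 2) == day2
```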

A company called Unitrends has been doing this for years... Their stuff is amazing!

When you start talking about massive, petabyte, datasets, we're not talking backup anymore, we're talking about replication, journaling, and archive. You don't do master backups of datasets that big, you can't. But, for any systems with sub 10TB or so volumes, nightly and continuous backups are still an option. Anyone with less than 100TB of data could use a small array of appliances and get amazing backup performance and near-instant file/folder recovery and file/folder history search.


@Michael C

"actually, disk can survive a larger shock than tape. metal enclosure might get dented, but when not spinning disks are good to 300Gs."

Tell that to me: I lost an 80GB HDD because it fell from 1m. It wasn't completely unreadable, but the damaged sectors made it impossible for me to recover most of my stuff. The thing is "rated" at 50Gs... so I'm not particularly impressed with those ratings.

I've had DDS4 tapes fall from the same height, they're still working.


Yes, it's a good idea

Until you use Hitachi DeskStar drives and all of the backups arrive at the off-site storage location dead because the truck hit a bump in the road.

I believe I read such a complaint from this exact forum a long time ago.


What about DVD-R?

I would think 4.37GB (power of 2, not the 4.7 power of 10) per write-once DVD-R would be the champion of archival storage. If writing isn't fast enough, then parallelise the writes. The media is inexpensive. The only question is longevity. But longevity can't be that important if one is considering archival hard disk drives.
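For anyone wondering where the 4.37 comes from: the disc holds 4.7 decimal gigabytes, which an OS reports in binary gigabytes. A quick check (the 800GB figure is just one LTO4 tape's worth, for scale):

```python
import math

DVD_BYTES = 4.7e9                 # the decimal "4.7GB" on the label
binary_gb = DVD_BYTES / 2**30     # what the OS reports
print(round(binary_gb, 2))        # 4.38 (the 4.37 above truncates rather than rounds)

# Discs needed to archive one 800GB (native) LTO4 tape's worth of data:
print(math.ceil(800 / binary_gb))  # 183
```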


Parallel writes are a major issue

First off, the number of DVD drives necessary to meet the write speed of a single HDD or tape drive is massive - on the order of dozens. Most companies need 4-16 tapes or a few dozen disks writing concurrently to handle a nightly backup window as it is. The power, heat, and space requirements of a massive DVD array are simply not feasible.

Then you have to deal with getting media in and out of those drives, appropriately labeled (inline DVD printers), and stored in removable cartridges. Keep in mind a stripe of 16 DVDs would have to be ejected pretty much concurrently, and a new disc loaded into each of the 16 drives, before the backups could continue. That's a lot of work for a robot, or a lot of robots.

Then there's data validation... Tape drives have 3 heads: read, write, read. As a bit is put on tape, it is instantly read back by the head next to it as the tape flies by. DVD does not work that way: you have to write the whole disc and then validate it. This is more complicated when data is striped bit-by-bit across 16 discs that have to stay in sync, and parity bits have to be taken into account.

No, it's been researched to death; it just isn't viable. The media is dirt cheap, but complicated and prone to data loss. The hardware is massive, and expensive in terms of both physical and power costs. Even 50GB BD-R media is not viable, and is used most often for single-system archives, or in the medical industry for MRI image storage that is never centrally managed on a server but is too big for a single CD/DVD.

As for longevity, CD/DVD media was designed for streaming playback of digitally converted analogue data (music, video). A DVD can hold a music file with no "perceptible" errors for 50-100 years, but bit failure at the disc level is evident after just 60-90 days. In binary data, a single lost bit can be a major issue, so parity bits have to be used at the block level, and they have to be able to accommodate multiple bit failures per block. Hard disks, not being subject to environmental factors like light, humidity, dust, and physical contact, and being much better able to deal with heat fluctuations, have a better chance of surviving long term.
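The block-level parity the comment calls for works along the lines of RAID-style XOR parity. A toy sketch that recovers one lost block per stripe (real optical formats use Reed-Solomon-based codes, which can correct multiple errors per block; plain XOR cannot):

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks; doubles as the parity generator
    and as the reconstruction step when exactly one block goes missing."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

stripe = [b"\x10\x20", b"\x01\x02", b"\xff\x00"]  # three data blocks
parity = xor_blocks(stripe)

lost = stripe.pop(1)                     # lose any single block...
rebuilt = xor_blocks(stripe + [parity])  # ...XOR of survivors + parity restores it
print(rebuilt == lost)                   # True
```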

DVDs archived just 10 years ago have proven unreadable. Many in the national archive have been lost to bacteria corrupting the inner metal layers. It's a decent low-cost portable medium for short-term data transfer (mailing a large file set), but it is a very poor DR medium.


Tape should have been binned a long time ago...

Speaking as an administrator of a smaller (~30 computer) network, I can say that the reliability of tape (especially that of Super DLT) left a lot to be desired. I've had name-brand tapes that should have been high quality but were dead in the package, or that developed faults down the road. And I've also had to extricate more than one tape from the drive when it refused to come out.

Now I'm seeing that long term backups have become riddled with read errors.

I gave up on the SDLT system entirely and moved to external Firewire hard drives. The difference in terms of backup speed and reliability is stunning. I've never looked back for a moment.

What's really ironic is that I have a stack of QIC-80 tapes at home from *many* years ago. I've had occasion to need some of what is stored on them, and *every* time they have come through, although I did have to mount a new tire in the drive when the old one turned to glue. That's not too bad for longevity.


new = less reliable

This has applied to technology since the early 1800s. The faster, more advanced, etc the tech, the less reliable it is.

Old tapes were both shorter and had fewer tracks per tape. They were also thicker, and took more energy (and time) to write each bit. That also meant a longer magnetic memory (more grains of metal per bit are more resilient to minor environmental effects). Old tapes might have had 16 or 32 tracks. New tapes can have over 400 in the same space, are on thinner material, and have a tiny fraction of the magnetic material per bit. They're also required to be MUCH more strictly aligned to the heads, and even a fraction of a degree of shift can be a major issue. I've seen LTO drives read a tape fine, but put it in another physical drive and it looks blank to that drive; back in the first and it can read it.

Tape has abysmal reliability, especially when moved from site to site and device to device. It needs to be binned.

Unitrends figured this out nearly a decade ago, and was the first to market with a D2D appliance. Their tech is far superior, and they did not hold on to false logical structures like virtual tapes. A backup is a file. Data in that file is managed by a database which can prune and compress it using deduplication algorithms. Files are stored on arrays (which can also be archived as an array of disks for reliability). Online recovery takes seconds. Even from archive, portable database files are quickly searched, and random-access read heads find data very fast - typically while a tape robot is still getting the header read off the tape... Disks are also multi-threaded and limited only by the IOPS potential of the pool. Some tapes can packet-write for faster backups, but that just means data recovery is slower, as a single file may be fragmented across an entire tape set; and tapes can only write as fast as you have tape drives to mount them in, and those cost thousands each, and jukeboxes have SCSI maximums and slow robot arms.

Tape died 10 years ago, IT departments just have not realized it yet...


re: Michael C

Reliability -

Everything you say about the evolution of tape is true. But it can also be said of spinning disks: the data is tightly packed there as well. LTO has been very reliable for me since Gen1. And call me a heretic, but QIC was very reliable as well - last year we still had a few 125/525MB drives working daily backups on tapes that had had numerous reinsertions and rewrites in dirty environments (back rooms in fast food restaurants...).

RDX is basically a good idea and simple to use, and if you only need a couple of cartridges, it's fine. But if you need speed or capacity, RDX is a no-go. For the high price of a cartridge you could actually pop a same-size hot-swap SATA HDD into the server and back up to it. (I wouldn't recommend this though!)

On low-end systems tape still has its place. The Unitrends D2D systems you are recommending here (for the third time now!) are in a different league, and I agree that if you need to move 10 or more TB, replication or some other technology could suit better.


Re: new = less reliable => disk drives :-)

The most recent disk drives on the market, at any given time, are bleeding edge and tend to have lower reliability. Highest possible areal data density, four double-sided platters, quite a bit of heat produced... In recent history, especially around 1 TB (3.5"), the vendors were pulling all their most cunning tricks to cover up physical defects at runtime, to compensate for the poor reliability of the platter surface.

The most reliable drives tend to be the lowest-capacity model still being manufactured at any given time.

As for long-term durability (years, up to a decade or more): on several occasions in the last few years, while diagnosing some RMA'ed drives, or rather drives long over warranty, I've noticed an interesting "syndrome" or phenomenon. During an initial full-surface sequential reading test, the drive reports a couple of bad sectors, scattered across the surface of the drive. On repeated sequential reading tests, it's always the same sectors. Next, I tend to write the drive with all zeroes - to test if it fails when writing as well. The write test gets completed just fine. Next, I try another sequential read - lo and behold, the drive reads just fine! Even upon many repeated sequential readings, e.g. looping the full-surface read test for a week, the drive acts just fine.

My hypothesis: the payload data tend to wear off in the sectors. After years of sitting on the platter surface, the recording fades out - difficult for me to say if this is due to natural properties of the material, or due to writing/deflection magnetic field activity all around during long runtime hours. Note that this fading out does not impact the track alignment marks, comprising the skeleton of the drive's low-level format - those are made on bare platters by the disk vendor using a special machine - those tracking marks are much more durable, and "track not found" is a much more serious error than "error reading this sector".

This might have an interesting implication. To keep your data safe, you may as well want to "refresh" the recording every year or so: just read the whole drive sector by sector, and write each sector's contents back immediately to the same place as you go along. It could keep the recording alive for many more years. As far as I know, no one does this. RAID firmware can check the surface periodically in a read-only fashion, looking for sectors that have already failed - but as far as I know, no one has ever tried *refreshing* the recording on the platters just in case.
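Refreshing a drive that way could be scripted directly against the raw block device. A sketch (the 4KiB granularity and the idea of pointing it at /dev/sdX are assumptions; run it against a real device only as root, and at your own risk):

```python
SECTOR = 4096  # refresh granularity; should be a multiple of the physical sector size

def refresh_recording(path):
    """Read every sector and immediately write it back in place,
    renewing the magnetic recording without changing any data."""
    refreshed = 0
    with open(path, "r+b", buffering=0) as dev:  # unbuffered read/write in place
        while True:
            pos = dev.tell()
            sector = dev.read(SECTOR)
            if not sector:                       # end of device (or file)
                break
            dev.seek(pos)                        # step back...
            dev.write(sector)                    # ...and rewrite the same bytes
            refreshed += 1
    return refreshed

# e.g. refresh_recording("/dev/sdb")  # hypothetical device node, needs root
```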


Weak!

"...reading and writing [tape] is slow, compared to disk"

Many commentators assume Moore's law does not apply to tape. Typical speed of 15k enterprise disk: 1 Gbit/s. Typical speed of LTO5 tape: 1 Gbit/s. Location of bottleneck in enterprise backups: the network, always.
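Normalising the units bears this out - 1Gbit/s works out to 125MB/s, in the same ballpark as LTO5's spec'd ~140MB/s native rate (round numbers, not benchmarks):

```python
def gbit_to_mb_per_s(gbit_per_s):
    """Convert a line rate in Gbit/s to MB/s (decimal megabytes)."""
    return gbit_per_s * 1e9 / 8 / 1e6

print(gbit_to_mb_per_s(1.0))  # 125.0 - disk, LTO5 tape, and a GigE backup
                              # network all land within a whisker of each
                              # other, hence the network bottleneck
```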

The following is true now and for the foreseeable:

1. Tape is much cheaper than disk in £/TB

2. Disks can't be easily sent off site.

3. Tape remains the offline daddy.

4. Disk remains the nearline bitch.

Anonymous Coward

eh?

1. no argument. Being retired anyway, I don't even know!

2. Yes they can!

3. and 4. What does that mean?


Yet another bit of history repeating

Oops iOmega did it again. RDX looks pretty much like iOmega REV from a few years back just "industrialised" and "standardised" by somebody else.

I have been using iOmega REV for nearly 5 years now. It is a disk cartridge with most of the controller _INSIDE_, as well as the drive electronics, available in single-slot loader and autoloader formats. The same thing RDX is proclaiming as innovation. It is a pity that iOmega, as usual, did not try to standardise it, license it to everyone else, and actually make it an industry standard (it is not the first time - they made the same mistake with Zip).

So now we are served what is essentially a replica of iOmega's system (most likely with some workarounds for iOmega patents), presented as the hottest thing since freshly baked bread for the tape fans.

Yawn... Big Yawn...


No robot required?

Your suggestion that no robot is required is based on current usage patterns.

"You won't need a robot because every device in the drive will be connected."

But, with more "slots" available, it probably makes sense to have a pile of cartridges ready to use in an input hopper, plus an output hopper.

Then the robot can swap a new RDX cartridge into the drive and, when it's written, dump it in the output hopper. Maybe even allow it to push daily rewritable carts back into the input hopper.

I can't see anyone producing a library that I can afford that allows me to connect my entire cartridge stock for the next year's worth of backups at the same time.


Tape's demise is overstated once more ..

.. it just lives in a different space. If you're using it only for on-site backup then - fair enough - get with the programme.

If you need to off-site/third-or-more-site, archive point-in-time copies - even where you have remote replication - then it's very much a contender.

I'm sure there are many other readers who can attest to good tape reliability records to balance the experiences posted here. The correspondent's experience with SuperDLT is unfortunate -- but it'll be no surprise to users of DLTx technology that it fell out of favour.

Tape is not just for Luddites..

.. at least for the time being.


@weak

Errr, let's analyse your points, shall we:

1. Tape is much cheaper than disk in £/TB

Disagree. A tape might have a better $/TB, but it fails on total cost of ownership: tapes require tape drives and tape libraries. Further, you can get more efficiency from backup products that employ disk storage strategies such as de-duplication.

2. Disks can't be easily sent off site.

Well, that's the WHOLE point of RDX????

3. Tape remains the offline daddy.

See 2.

4. Disk remains the nearline bitch.

See 2.

RDX brings the benefits of cold storage (i.e. no power requirement when offline) and the removable media that tape has, but with the added benefit of fast random access.

For over 10 years people have been saying tape is dead, but until RDX they hadn't addressed the need for cold and removable media. Now I actually think that tape as we know it might come to an end.
