Rsync is 15 years old.
Funny how old ideas are always 'new' to someone.
Symantec says backup is a multi-point product mess, with big data blowing backup-window timing out of the water, and so it has souped up both BackupExec and NetBackup to cover more backup and restore use cases. The sexy news – well, as sexy as backup news can be – is that the latest release of NetBackup is said to be 100 times …
I really hope you're trolling, NetBackup is way more complex than rsync (or Robocopy, if you're a Windows type).
Anyway, these ideas are just the latest incarnation of a development that spans from 1992 (IIRC) to now. Differential backups have been there since day one, but client-side deduplicated backups rolled into a real-time synthetic backup, which is then replicated to a remote datacentre at the block level, are pretty new technology however you look at it.
Yes, NetBackup is more complex; rsync would only be part of the big picture. You still wouldn't be able to do client-side dedup (disk/processor intensive), but you'd at least get a synthetic backup with inline compression during transport. rsnapshot would help you manage backup snapshots, similar to NetApp's, though at a clear disadvantage. Then there's the matter of CPIOing it to tape or the like, which NetApp can handle in spades (as long as you like managed tape sets). The advantage your rsync/rsnapshot has is cost, at the expense of "let's hope it keeps working." Granted, NetApp becomes a black box appliance as well, but at least it has commercial support....
"The advantage your rsync/rsnapshot has is cost, at the expense of 'let's hope it keeps working'"
This is a classic problem. I've been working in data protection for about 12 years and this keeps coming up: the use of scripted, utility-based solutions like rsync is a false economy. What inevitably happens is that the person who wrote the scripts leaves the company and you can't get a replacement who can hit the ground running, because they have to spend months learning the bespoke solution. It's very much the backup equivalent of someone in the call centre knocking up an Excel macro to do a job, and management finding, six months after they've left, that it's totally unusable and needs extra functionality coding into it.
Other advantages that COTS (or FOSS) backup packages have are multiple platform support – typically Windows, Linux, the UNIXes, VMS, IBM i (OS/400), Tandem, Mac OS etc. These systems don't all have rsync and certainly aren't all scripted in the same language. These packages implement multi-tier backup architectures, and they tend to have offsite replication and/or vaulting facilities, all of which would need to be hacked together with scripts if they're needed.
That 3.6-minute figure is probably based on one byte changing in the file, which is not a realistic use case.
Also, there's no mention of how much additional CPU load is put on the backup client during this "client-side deduplication". Nice way to crater a VMware environment.
Also, TSM 6.2 had these features over 2 years ago.
....their claim to speed up the backup by such a margin would appear to be merely a gloss on what is actually happening here. Their approach is to do a quick, incremental backup of what has changed and present this as a 'full backup'. Woe betide the sysadmin who tries to do a restore from this backup when the original full backup repository, or any one of the other 'full backups', has any sort of failure at all. Six months or six years down the track, how can you be absolutely, failsafe certain that *all* of these pieces of the puzzle will be available and error-free? That's why we do full backups in the first place! Calling a series of incrementals a full backup is a fool's dream.
What's happening is:
A client-side deduplicated, differential backup is being taken. Once this backup is on the NetBackup media server it is stored on disk (locally attached, SAN-attached, or an OST appliance), where the blocks are uniquely hashed and won't be deleted until all backups which reference them have expired. These blocks are used to create a synthetic full, by working out which versions of which files are in the current and subsequent backups. You can delete the initial full, but the actual data won't be removed until it's no longer required by the synthetic fulls. If you are moving your data from disk onto tape, a physical (non-deduped) full backup is made for each synthetically created full – so again there is no chance that deleting a previous backup can compromise the subsequent synthetic backups.
This is a similar method to TSM's incremental forever, just at the block level.
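A toy sketch of that block-store idea (all names and the on-disk layout are hypothetical, nothing like NetBackup's actual format): blocks are stored once under their content hash, each backup's manifest just lists hashes, and a block is only reclaimable once no manifest references it.

```shell
#!/bin/sh
set -e
work=$(mktemp -d)
mkdir "$work/store" "$work/manifests"

# backup NAME FILE: split FILE into fixed-size blocks, store each unique
# block once under its SHA-256 hash, record the hash list as NAME's manifest.
backup() {
  split -b 4096 "$2" "$work/chunk."
  for c in "$work"/chunk.*; do
    h=$(sha256sum "$c" | cut -d' ' -f1)
    [ -e "$work/store/$h" ] || cp "$c" "$work/store/$h"
    echo "$h" >> "$work/manifests/$1"
  done
  rm -f "$work"/chunk.*
}

printf 'the same old data' > "$work/data"
backup full-1 "$work/data"
backup full-2 "$work/data"   # a second 'full' of unchanged data stores no new blocks
```

Expiring full-1 would just delete its manifest; the shared block survives because full-2's manifest still names it, which is why deleting an earlier backup can't break a later synthetic full.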
NetBackup will typically use one CPU core per backup datastream. You can, of course, do the dedup on the server side.
The handling of VMware in NetBackup is particularly good: you have a mount server which can mount the VMDK and back it up as if it were a snapshot of a filesystem, for NTFS, ext3 and ext4. This means that you can do proper file-level backups and recoveries for VMware.
Sounds good, but (and it's a big but) in our experience over the last six years Backup Exec has hardly been a stellar product, and it has got steadily worse with each release – and we are not a massive enterprise: 6K users and just over 10TB of data to back up.
We have needed hotfixes on top of patches just to get the damn thing to work reliably. Every time we patch, it fixes one issue and causes several more.
On features BE matches up with most of the other products out there, but in real-world use it has been a right royal pain. Symantec would have a very, very hard sell to keep this customer.
Another ludicrous claim from a vendor, which sounds very like the crap coming from numerous salesmen claiming that thin provisioning WILL save you (insert random figure between 80 and 100%) in space.
If you run for i in `find .`;do touch $i;done then maybe you'll get something like these figures.
By the way, I'm aware of the error in the above, but feel free to point it out anyway.
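For the record, the bug being hinted at is that the backtick loop word-splits filenames, so anything with spaces breaks. A whitespace-safe equivalent (sketched here against a throwaway directory) lets find invoke touch itself:

```shell
#!/bin/sh
set -e
d=$(mktemp -d)
mkdir "$d/sub"
printf x > "$d/file with spaces.txt"

# Quoting-safe version of the `for i in \`find .\`; do touch $i; done` loop:
# find hands the names straight to touch, so spaces and newlines survive.
# (-t 200001010000 sets a fixed mtime just so the effect is visible.)
find "$d" -exec touch -t 200001010000 {} +
```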
NetBackup has had incremental/differential backups since day one; what's new here is the method by which they are taken. They are no longer using modification dates or archive bits; rather, they are using filesystem change logs, which is much, much faster. No, it's not the first package to do this, but it's new to NetBackup and it's still a pretty new technology.
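The old approach being replaced is easy to picture: walk the whole tree and compare modification times against a marker left by the last run – an O(every file) scan, however little has changed, which is exactly the cost a change log avoids. A minimal sketch with made-up paths in a temp directory:

```shell
#!/bin/sh
set -e
d=$(mktemp -d)
touch "$d/old.txt"
sleep 1
touch "$d/last-run"     # timestamp marker left by the previous backup run
sleep 1
touch "$d/new.txt"

# The classic incremental scan: select everything modified since the marker.
# Note it still stats every inode in the tree to find the one changed file.
changed=$(find "$d" -type f -newer "$d/last-run")
```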
Perhaps Symantec should open their eyes and look at what Veeam have been doing for the last few years – particularly their recent v6 product. It's simply superb and, strangely enough for a backup product, it actually works properly and scales with a minimum of fuss.
As awful as it is (and thank Dog I don't admin the Windoze side!), our NBU instance rarely fails to back up and restore when requested (it runs on, and backs up, my *ix boxen just fine).
The Vendor-speak in the fine article leaves much to be desired, and I sure hope my boss doesn't read it and think "We could be doing that!"
Beer, because NBU drives me to drink (more)...
That's fine if you are fully virtualised; for those who aren't, Veeam is just another product to manage, whereas this does physical and virtual. The really neat thing about V-Ray is that the backup is catalogued at the time of backup, so single-file restores are quick and easy (and, from reading up on the new stuff, single emails etc too), plus I can back up to tape. Veeam has a market, but it just isn't flexible enough yet – I mean, it needs something like Backup Exec to back it up; that makes me lol.
Biting the hand that feeds IT © 1998–2019