23 posts • joined 20 Aug 2010
Is David Snowden related to Edward?
Re: 2007 hardware obsolete?
I have 3 "vintage" macs which are not capable of running 10.8+.
It's one debate whether putting 32-bit firmware on 64-bit machines was a good decision, and Intel (whom y'all love to pieces) deserves some of the blame for that. It is a wholly different debate how long to support a discontinued piece of hardware.
The older Intel Macs support Lion, and Lion is still supported. So it's not like the Snow Leopard folks are being given short shrift. If my car has bald tires and I continue to drive on them, it's not Oldsmobile's fault I'm in peril. I can buy new tires.
Look folks, at some point they have to stop supporting things, and I think N-2 is not bad. I'd rather the (sadly) finite resources be spent supporting a fair number of releases, and developing new ones, than have them plotting the future by the limitations of the past. If MS had dropped 16-bit support in XP, it'd have been smaller, faster, and more stable, and it would have inconvenienced not that many people. Reg readers would have bitched about how VisiCalc ran just fine all the way from DOS 2 through WinME, but that's because you're not getting laid enough.
You can't buy new tubes for your Hallicrafters.
You can't buy new batteries for your Nokia 2001.
You can't buy a new engine for your Wright Flyer.
Move on, people. Buy Lion, install Linux, or buy a new computer. If you really must get Mavericks running on your polycarbonate MacBook, start writing the driver. Send me a copy, I need to upgrade.
PS: Just for shits, grins, and giggles, I ran the same HandBrake job on a MacPro2,1 (quad-Xeon 2.6GHz) and a new 2013 MacBook Air (1.3GHz i5), and they finished within 3 seconds of each other.
New shit > Old shit
The V5000 is new; the V3500 was, and still is, China-only.
Was this article typed on a phone?
I know I'm always worried about small file performance, which is good. Give him an A for efforst.
Speling iz harrd.
All this time I thought it was "Ceiling wax", and it made no sense...
This changes everything.
Oh... and store 'em in a Faraday cage.
Also, since you seem to have a concern with magnetic media, make sure you wrap them in tinfoil and store them in a Faraday cage...
Just FYI -- As a test one day, I ran a Western Digital Caviar (perpendicular recording) HDD through a tape degaussing machine. It flolloped and danced around, but 10 minutes later we could still read the data.
I wouldn't be too concerned about stray cosmic rays or solar flares damaging media. Things today are pretty damn robust.
Finally, the MO disks are made from glass. They are waaaaaaay more (physically) fragile than tape media.
And did I mention in my previous post that the MO drive failure rate is extremely high?
You've got to ask yourself:
1. Will the media still be in commercial use in 15 years?
2. Will the drive interface still be around on eBay? (Good luck finding a PCIe floppy card for a QIC-40.)
3. Were there enough manufactured that you'll be able to find them on eBay?
You can still find SCSI QIC-02 drives: http://www.ebay.com/itm/Wangtek-5099EG24-QIC-02-BLACK-Used-or-refurb-/181044517526?pt=LH_DefaultDomain_2&hash=item2a27181a96
Not so lucky on SCSI 8" floppy drives.
Optical is dead, dead, dead.
Way back, in the mists of time immemorial, magneto-optical was used for long-term archival storage. Many regulated companies used devices like the IBM 3995 optical library (with Sony drives). The appeal was:
1. An allegedly infinite lifetime for the media.
2. Random access vs tape's linear access (faster seek times).
3. Solid state/No power required/shelf stable.
Unfortunately, MO suffered from some major flaws.
1. Horrible storage density (the last generation I worked on was 2.6GB media).
2. Single-sided media and drives (robots that have to flip over disks are more mechanically complex).
3. Very high cost for media and drives.
4. Drive failure rates.
5. Media failure rates (according to Sony, due to excessive mounts).
6. Very low performance (if I remember correctly, in the 1-10MB/s range).
Evidently, the market decided that the cons outweighed the pros, because all of the 3995s I ever supported were replaced by LTO WORM and NetApp SnapLock.
For personal long-term archival storage, nothing beats tape. Old LTO3/4 drives are ubiquitous and readily available. Media is cheap, and it is relatively fast and very dense.
Fuji estimates that tape media can last 30 years. Since this is likely longer than the interfaces (SCSI, SAS, FC) will be available, it would seem adequate.
( http://www.fujifilmusa.com/shared/bin/LTO_Data_Tape_Seminar_2012.pdf )
The industry-standard/common-sense method to insure against individual tape failure is to write multiple tapes with the same data.
However, I would strongly recommend that you use a timeless format like TAR or CPIO. It'd be a shame to archive your precious home-made cat food recipes and be unable to restore them because Android for Desktops v107 can't run BackupExec.
Also, with LTO, keep in mind that there are rules for which drives can read/write which generation of media (write back one generation, read back two). Since drives are really cheap on eBay, I'd recommend getting an extra drive, sealing it in an anti-static bag, and putting it in the safe-deposit box with your tapes.
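A "timeless format" archive is also easy to verify before the tape ever goes in the box. A minimal sketch in Python's standard tarfile module (the file names are made up for illustration):

```python
import os
import tarfile
import tempfile

# Create a small archive in a "timeless" format (POSIX tar), then
# verify it can be listed -- the restore-side sanity check you
# should run before trusting any tape copy.
def make_archive(archive_path, files):
    with tarfile.open(archive_path, "w") as tar:  # plain tar, no compression
        for path in files:
            tar.add(path, arcname=os.path.basename(path))

def list_archive(archive_path):
    with tarfile.open(archive_path, "r") as tar:
        return tar.getnames()

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        # Hypothetical precious data.
        recipe = os.path.join(d, "cat_food_recipes.txt")
        with open(recipe, "w") as f:
            f.write("1. Open can.\n")
        archive = os.path.join(d, "backup.tar")
        make_archive(archive, [recipe])
        print(list_archive(archive))  # ['cat_food_recipes.txt']
```

Anything that can read a tar stream in 30 years, on any OS, can get your files back; no vendor catalog required.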
Agentless backup w/no CPU?
Unless you are storing all of your data on NAS (and running the backup from there), how would you perform an agentless backup that doesn't use host CPU?
As you stated, the big concern is long-term retention. It is not uncommon to implement a new software package, let's call it Tripoli Storage Manager, and then let all the short-term data age off from the old NerdBackup software. Using a partitioned tape library, you could then shrink the NBU partition until it consists of only a single drive and just the relevant cartridge slots. You could also then maintain a license for just a single client and server. Not that big of a deal.
The other option is to use a conversion utility which utilizes the API of both titles and transcodes the data.
Finally, agent-based backups are becoming more, not less, popular. Many titles now perform (host-)CPU-intensive local deduplication to reduce LAN load and storage-pool use.
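To see where that client-side CPU goes, here's a toy sketch of deduplication with fixed-size chunks; real products use content-defined (variable-size) chunking and a proper chunk store, so treat this as illustration only:

```python
import hashlib

# Toy client-side deduplication: split data into fixed-size chunks,
# hash each one, and only ship chunks the server hasn't seen before.
# Hashing every byte of every backup is the CPU cost the client pays
# in exchange for LAN and storage-pool savings.
CHUNK = 4096

def dedup_split(data, seen):
    """Return (new_chunks, refs). `seen` maps digest -> chunk bytes."""
    new = {}
    refs = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()  # the CPU-heavy part
        if digest not in seen:
            seen[digest] = chunk
            new[digest] = chunk
        refs.append(digest)
    return new, refs

if __name__ == "__main__":
    store = {}
    payload = b"A" * (CHUNK * 100)        # 100 identical chunks
    new, refs = dedup_split(payload, store)
    print(len(refs), len(new))            # 100 references, 1 unique chunk
```

Run a few hundred VMs' worth of this at the same backup window and you can see why it craters a VMware host.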
Reading your article I get the impression your familiarity is with non-enterprise software like Backup Exec. The real kids play with both software and hardware that is more sophisticated.
The drive in the picture is a 3.5".
You can do better...
How about a story on OEMs using Itanium?
The definition of "dike" is:
dike 1 |dīk | noun
1 a long wall or embankment built to prevent flooding from the sea.
2 a ditch or watercourse.
verb (often as adj. diked)
provide (land) with a wall or embankment to prevent flooding.
dyke 1 |dīk| noun
Felled by homo(nym)s, are ye?
Nobody commented on the subtitle.
I'm kind of perturbed by the "Software-reskinning box designer" subtitle.
Not very objective, for journalism.
Not very true, either.
It comes across as pretty damn petty to whinge and bitch about a company which is shipping actual physical product to make its money.
Let's give them their due: no matter what OS you prefer, they've all been made better by the Mac.
Any "bubble" status of Apple stock is being created by investors and brokers, not the company itself. I'm pretty sure that if any PR department could gin up a $400 stock price, Kmart/Sears would be a happier place.
Duh.... Quoting deduplication backup times is disingenuous.
That 3.6-minute figure is probably based on 1 byte changing in the file, which is not a realistic use case.
Also, there's no mention of how much additional CPU load is put on the backup client during this "client-side deduplication". Nice way to crater a VMware environment.
Also, TSM 6.2 had these features over 2 years ago.
It's not ZFS...
It's AdvFS from Tru64/DEC Unix.
Google Music is inferior to Amazon MP3 + cloud.
Google's Android player blows, especially the UI...
Plus Amazon offers unlimited music storage if you buy an album. Better player, better cloud, and the Fire makes a nice iPod replacement.
Better to thank Click and Clack...
Gone but not forgotten...
Mission Control is not a replacement for Exposé's "All Windows".
Apple OSX and network storage
To get the best performance with OS X and networked storage, use AFP when possible. There is a FOSS program called Netatalk which works really well on Linux and FreeBSD. I can get 102MB/s transfers using jumbo frames.
NFS works OK, but your filesystem gets littered with dot files, and when using Cover Flow or thumbnails, Finder does wonky things over NFS.
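If the dot-file litter is the usual AppleDouble `._*` sidecars (plus `.DS_Store`), a small housekeeping script can sweep them off the share. A sketch, assuming you actually want them deleted outright:

```python
import os

# Walk an NFS-mounted tree and collect the AppleDouble "._*" sidecar
# files (and .DS_Store) that the OS X client leaves behind.
def find_appledouble(root):
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.startswith("._") or name == ".DS_Store":
                hits.append(os.path.join(dirpath, name))
    return hits

# Delete them and report how many were removed. Run find_appledouble()
# first as a dry run -- deleting is forever.
def clean_appledouble(root):
    hits = find_appledouble(root)
    for path in hits:
        os.remove(path)
    return len(hits)
```

Point it at the share's mount point (e.g. `clean_appledouble("/mnt/nfs")`, a hypothetical path); the real files are untouched because only the sidecar names match.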
What a stupid metric.
Comparing the "platter read speed" of a 5MB platter vs a 300GB platter is pure inanity.
Perhaps you'd like to compare the toilet-flush latency of an Airbus A380 vs a Clipper? That will prove how airplanes haven't gotten any better in fifty years.
Come to think of it, the whole article is crap. I have a Selectric III in my office; perhaps you can run an article on Oxford County's oldest typewriter.
Unlimited or 100 articles per month?
According to the page to link your dead tree account to your digital account, you only get 100 articles per month for your Unlimited service.
*Includes access to 100 Archive articles every four weeks.
How many Mexican landscapers are there in Afghanistan, that they can plant their payload masked by Juan and his leafblower?
Is this a realistic scenario?
Osama: What is that buzzing noise?
Underling: It is Faisal edging the cave lawn.
Osama: Tell him to put the curbstones back this time.
Underling: Look sir! A candygram for you was on the doorstep!
Osama: I love the ginger chockies.