10 posts • joined Wednesday 12th May 2010 21:28 GMT
Backup? Or just a copy?
Many companies sell "backup" that is really just a copy of your data. Dropbox works this way, for example.
Copies are great if you need to recover from a catastrophic loss, like your computer being stolen or your house burning down. They are no help if your data itself is damaged.
I've managed backups at a few companies. A large portion of restore requests came in because a file was damaged, or because important things were deleted and only later discovered missing. "This spreadsheet worked last week and now it's missing half my clients!"
I describe backup as "I need my data back from when...". If you use a file sync program -- Dropbox or iCloud or Google Drive or whatever -- you have many copies of the latest version, good or bad. I hope you never make a mistake, catch a computer virus, or have your program crash while it's saving the file. Also note that a virus will happily search your synced drives for things to infect.
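The difference can be sketched in a few lines of shell (a minimal illustration of my own, not any particular product): a real backup keeps a dated copy of each version rather than overwriting the one and only "latest" copy the way sync does.

```shell
# Minimal "restore from when" sketch: each run keeps a dated copy
# instead of replacing the previous one.
snapshot() {
    # usage: snapshot FILE VAULT
    src=$1
    vault=$2
    mkdir -p "$vault"
    cp -p "$src" "$vault/$(basename "$src").$(date +%Y%m%d-%H%M%S)"
}

# To restore, pick the dated copy from before the damage, e.g.:
#   cp vault/clients.xls.20120101-090000 clients.xls
```

A sync tool, by contrast, would faithfully replicate the damaged file everywhere within seconds.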
I've used CrashPlan for 3 years and have nothing but praise for their software and their service. Like Phil O'Sophical, I'm mindful that if someone else has my data, that doesn't mean *I* have my data. For an extra fee, CrashPlan will ship your data to you on a hard disk; that fee is my cost to exit their service. Poor Mozy customers never had that option when Mozy jacked up its prices.
Electronics can interfere with radio comms
If you've ever left a mobile phone near computer speakers, you've probably heard the occasional burst of garbled noise from the speakers. That's electromagnetic interference from your phone's transmitter.
The problem with radio-capable electronics is that they can similarly interfere with aircraft communications (voice comms, anyway) in the cockpit. Pilots use the radio quite a lot, both to communicate with air traffic control, and to maintain situational awareness of what else is in that airspace: other aircraft, local weather phenomena, etc. This is most critical during takeoff and landing phases.
It's grand that the aircraft systems don't cause unpleasant interference on your mobile, but that's really not the issue. The issue is that your electronics might cause interference with aircraft systems. That won't lead to a Hollywood-style in-flight explosion, but it could cause the pilot to miss hearing something important. "Was that approach clearance for us or someone else?" "Did Tower say 'cleared for 27R' or 'cleared for 27L'? Or was it NOT cleared?" "Wait, is someone supposed to go-around (instead of landing)? Are we?"
Yes, iPads and Surfaces are assigned for cockpit use at some airlines. Each such unit was flight-tested to be sure it doesn't cause interference. Not all units pass.
Source: pilots who complain about bloody mobile phone interference.
No license = no permission
I'm involved with an open source project (which happens to be hosted in github).
We use the GPL. We respect others' licenses, and we hope that others respect ours.
If you do not give us a license to your work, we cannot use it. If we use software without any license to do so, that's piracy. If we include unlicensed work in our project and release it under our own license, that's fraud. Not only are those unethical, they could jeopardise our entire project.
We have no money and no paid staff. We're not "fat cats." We're just honest.
This is another excellent article from Trevor Pott. As usual, he gives enough background to understand where he's coming from, and presents new material in a way that's both readable for all and technical enough to really learn something.
Ah, Retrospect! I used it for 10 years. Its manual from 1996 is still my gold standard for product documentation. Inside the cover was written something like "Most people don't read the manual until they have a problem. If you need to restore data, turn to Chapter 7, page 82."
After managing backups at a few companies, I've developed 2 informal rules:
2) Nobody cares about backups; they care about restores. Think of it as "restore software" and much that is muddy becomes clear.
3) "You have a backup? Okay, point to it. Can you tell me what's in it?"
P.S. Rule 1 is, as always, "If it's not backed up, it doesn't exist."
Big news in Drobo-land
These two new models look pretty nifty. Drobo's big selling point has always been the ability to incrementally upgrade storage capacity. Now they've added the ability to upgrade performance too.
The Mini fills a hole in their product line: none of their products were really portable (are any RAID products?). This could be a great device for photographers and video editors in the field. When you're out shooting as much as possible on a trip, you can't afford to lose your data to a drive failure, and any time spent fixing or preventing computer problems is a distraction. DVDs are good for backup, but you still lose any work done since the last DVD.
I thought the Drobo S was a pretty sweet little RAID box. Like other Drobo products, its use is intuitive: insert drives, perform basic configuration via their software utility, and go. Its weakness is a slow CPU. On this basis alone, the Drobo 5D should be a real improvement. Also, the use of SSDs with their faster seek times could cut the cost of Drobo's internal "housekeeping" quite a lot.
Data tiering is where this gets really interesting. Drobos don't blindly stripe across disks the way RAID 5/6 do. They pool the storage, then distribute and re-distribute data in whatever way will maintain redundancy and improve performance: levelling data across disks, defragmenting files, recovering from a drive failure, and so on. It's not hard to see how Drobo could rewrite their software to take advantage of low-latency SSDs. I have no experience with their B1200i, but I expect they'll do some pretty creative stuff with data tiering on their business products. Of course, these are early days, and we'll probably see improvements in future firmware updates.
Disclosure: I'm not affiliated with Drobo. I've been a Drobo customer for 4 years with a 4-bay, a Drobo S, and now a DroboPro.
Risk is overstated
This deadlock affects authoritative DNS servers that service queries while processing an update or sending a zone transfer. Let's see what that means in the Real World:
An authoritative (not recursive) DNS server that serves a high rate of queries AND accepts dynamic DNS updates is at risk. Small to medium sites might run DDNS on their Internet-facing servers. Large sites use Microsoft DDNS only on internal networks, and would use Active Directory for that anyway.
An authoritative DNS server that serves a high rate of queries AND performs frequent zone transfers is at risk. A busy and frequently changing site, like an ISP or hosting facility, might indeed perform frequent zone transfers, maybe even several per hour. Such a business should already monitor service availability and kick-start services that stop responding. They're also well equipped to set up extra nameservers or run BIND single-threaded.
DNS servers that aren't getting pounded with queries don't have to care. Nor do single-threaded servers. Or caching-only servers. DNS servers with decent monitoring will be minimally affected. Large sites with robust DNS architecture already plan to lose the occasional nameserver anyway.
It's mostly the sites with big IT requirements and small or inexperienced IT staff who need to worry. They need to worry about everything, though. And how many of them run the latest versions of BIND?
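The "monitor and kick-start" approach mentioned above can be sketched as a small shell function. The probe and restart commands here are placeholders of my own, not anyone's actual setup:

```shell
# Hypothetical watchdog sketch: run a probe command against the nameserver
# and kick-start the service if the probe fails. Both commands are
# placeholders to be filled in for a real deployment.
watchdog() {
    probe=$1     # e.g. 'dig +time=2 +tries=1 @127.0.0.1 example.com SOA'
    restart=$2   # e.g. 'service named restart'
    if ! $probe >/dev/null 2>&1; then
        $restart
    fi
}
```

Run it from cron every minute or two and the occasional deadlocked daemon becomes a blip rather than an outage.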
I'm sad to see this. Mozy has been around for a while and my impression (as a non-customer) has been that they're pretty solid -- at least as good as Carbonite or JungleDisk. It sounds like the same EMC disease that killed Retrospect (my previous system) is afflicting Mozy.
That said, I agree with J. Cook. I advise people that they "have a backup" only if they can physically point to it. If they tell me someone else can point to their backup, then that other person "has" their backup. As long as they're comfortable with that relationship, all is well.
After 18 months with CrashPlan, I've had no problems and have been pleased with their customer service.
Good luck to everyone and their data.
"Mistake", not "accident"
Google seems to use words very precisely. Including the packet-capture code "was a mistake," not necessarily an accident. If I were to rob a bank and get caught, I'd readily admit that doing so was a mistake. (Perhaps my mistake was in choosing the wrong bank or in wearing a poor disguise, but those details need not be discussed.)
I too am interested in their motivation: Google appears to be driven by money, and I can't see how they would have monetized the contents of data packets. Headers, sure, but payload? And why save it for so long?
Were they merely recording extra data to fill in blanks if, as suggested above, their GPS cut out?
Did they want the option of later analyzing the data for statistical purposes, perhaps to identify trends in use of encryption over wireless, or adoption of newer 802.11 standards?
Could a map of wireless saturation be useful when considering future products?
Is there a "data is good" kind of pack-rat culture that just retains any data as long as possible, without any specific intention?
As big as Mozy are, this leads me to wonder if they're feeling pressure from integrated online/offline backup services like CrashPlan.
It would be nice to see more competition in that market.
Not hard to do
I assume DENIC uses the BIND nameserver software. If so, this is an easy error to make. I'm not excusing DENIC, just observing that the scale of such a problem is often far out of proportion to the cause.
BIND parses a zone file as it loads it, and ceases loading (that zone) after it encounters an error. This could be a syntax error or an invalid use of a record. "Invalid records" are not always obvious: it's easy to accidentally create a host entry within a zone that was already delegated out.
Unless their update process (presumably either Perl scripts or a product like Men and Mice) correctly validates the entire zone *using the same criteria that BIND is configured to use*, errors can slip through.
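BIND ships a checker that applies the same parsing criteria named uses at load time, so a pre-push validation step might look something like this (zone name and file path are illustrative, not DENIC's actual setup):

```shell
# Hypothetical pre-push check: validate with BIND's own parser so the
# check uses the same criteria named will apply when it loads the zone.
check_zone() {
    # usage: check_zone ZONE_NAME ZONE_FILE
    if named-checkzone -q "$1" "$2"; then
        echo "$1 parses cleanly; safe to push"
    else
        echo "$1 rejected; fix before pushing" >&2
        return 1
    fi
}
```

Wire that into the publish pipeline and a zone that would kill named on load never leaves the building.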
I used to work at a large Internet site, and have pushed out more than my share of broken DNS zones. Good tools help immensely, and a good architecture insulates certain problems from reaching the Internet. None of those achieve perfection, though.
The only way to reliably catch this sort of problem is to carefully test all changes in a staging environment first. In the end, it's a balancing act between the time required to perform all due diligence and the quick turnarounds (and in this case high volume) demanded for DNS changes.
The Men and Mice tools revolutionized how we managed DNS.
For instance, push changes to a hidden master and let that master propagate to your main (slave) nameservers. Monitor the bejeezus out of the content on that hidden master. Run scripts that parse the named error log and immediately alert if there's a potential problem. Et cetera.
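A hidden-master arrangement like that can be sketched in named.conf (all zone names and addresses are illustrative):

```
// On the hidden master (never listed in the zone's NS records):
zone "example.com" {
    type master;
    file "zones/example.com.zone";
    also-notify { 192.0.2.10; 192.0.2.11; };  // the public nameservers
};

// On each public (slave) nameserver:
zone "example.com" {
    type slave;
    masters { 192.0.2.1; };  // the hidden master
    file "slaves/example.com.zone";
};
```

If a bad zone slips through, it breaks on the hidden master first, where monitoring can catch it before the public servers ever transfer it.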