A US cloud storage provider is being sued because it did not provide a recoverable backup of TV show files deleted by an aggrieved ex-employee. CyberLynk, headquartered in Wisconsin, was used by a Hawaiian TV show production and distribution company, WeR1 World Network, to store episodes …

F-A-I-L spells... fail!
Goes without saying but....
These are exactly the sort of people Douglas Adams parodied with his SEP field.
How can anyone believe that putting your only copy of critical data on someone else's systems, without running unannounced tests, is a good thing? Not to say the cloud is bad per se, but to rely on a sales pitch for your business strategy???
I just can't be bothered to get into the 'hold a single copy' aspects.......
And that's why I'm not moving my data to the cloud. kthxbye.
Cloud is fine..
..if you treat it as an asynchronous off-site backup.
All they have to do is go to Goospit, type in Zodiac Island torrent into the search field and voila, they'll have loads of them...like this one I just found
How many of those are transport streams?
How many are transport streams is probably irrelevant.
That they exist at all means pirates downloaded them. If the company want their shows back, they can offer immunity against prosecution* to the first person to provide them with broadcast quality copies of their shows.
*Obviously only for shows from that company.
"Jewson appears to have admitted what he did and offered to pay restitution, but WeR1 says he can't pay what they will lose."
Surely standard US practice is to sue for billions more than the individual can ever hope to earn in their lifetime.
In this case, I think the studio actually needs the money.
The clue is in the name.
Anyone who commits anything important to a temporary coalescence of nought but vapour will get everything they deserve.
I'm dumbfounded by people flapping around on various forums about how to download and backup their own Flickr stream. I mean, it's a step in the right direction, but a step from **** knows where.
We need a Nelson icon!
The basic fail of cloud
This highlights one of the biggest problems of the cloud.
It *works* because it allows businesses to reduce costs and move all their IT storage and processing to someone else, who can, in turn, leverage economies of scale to reduce costs further.
All well and good.
However, thanks to a Dilbert-style combination of sleazy salesmen and gullible executives, the reality is far from rosy.
Once a company outsources all its IT to the cloud, it makes a great saving, but there is nowhere left to back up to, so they are truly at the mercy of the cloud provider. Every one of their business security and continuity risks remains (and some are magnified), but they are no longer in a position to control any of them; instead they have to hope the cloud provider has.
The alternative is to use the cloud while keeping a complete copy of everything on your own systems. Rarely is this going to be cost-effective...
"The alternative is to use the cloud, while keeping a complete copy of everything on your own systems. Rarely is this going to be cost effective"
Well, it is if you don't mind speed issues. It also depends on your net connection and the size of your data. The same can be done with a small SATA box loaded with off-the-shelf SATA drives in a RAID 6 array (good old Openfiler, how I love you). An 8TB box can be had on a no-name 6-port domestic motherboard for very little money. Couple it with a headless Linux PC that can run cron jobs and you have an on-site backup box.
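A minimal sketch of the nightly job such a box might run, assuming cron-driven dated snapshots (the paths, function name, and the two-month retention window are all made up for illustration; `rsync -a` would be the usual tool where available):

```shell
# Hypothetical nightly snapshot job for a headless Linux backup box.
# cron would call something like: snapshot_backup /srv/data /mnt/raid6/backups
snapshot_backup() {
    src=$1      # directory to back up
    dest=$2     # mount point of the local RAID array
    stamp=$(date +%Y-%m-%d)
    mkdir -p "$dest/$stamp"
    # copy today's snapshot (cp -a as a portable stand-in for rsync -a)
    cp -a "$src/." "$dest/$stamp/"
    # prune snapshots older than ~60 days: two months of rolling backups
    find "$dest" -mindepth 1 -maxdepth 1 -type d -mtime +60 -exec rm -rf {} +
}
```

A crontab entry along the lines of `0 2 * * * /usr/local/bin/snapshot_backup.sh` would run it at 2am; pointing `src` at a synced copy of the cloud data turns the same box into the asynchronous off-site mirror suggested above.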
Well not all risks remain
If your offices burned down, at least you'd have your off-site backup. I suppose, however, that if you can be bothered to back data up off-site, it really shouldn't be a huge extra effort or expense to maintain a local backup too, even if that's just a big RAID array.
In the real world yes. In the world of contracts
you have transferred total responsibility to the cloud company (assuming you've written the contract correctly of course), and can sue them into the modern equivalent of forced servitude, plus make the lawyers rich in the process!
How can you not love that?
Some reasonable suggestions - however.
1 - Yes, if you don't mind the speed issues you can keep local copies yourself. However, this involves maintaining your own IT infrastructure (however small), which is what starts to eat into the savings of using the cloud. This is especially ironic, as the concept is sold to the business as freeing them from the costs of local IT....
2 - Yes, if your office burns down you have an off-site backup, but that depends on the cloud implementation you have gone for. If you are using the cloud as a managed IT service (as seems to be the case here) then you would like to think that the cloud provider has the backups. As this case shows, that isn't always true. The risk remains - and in some instances the business has removed its own ability to control it.
3 - And contracts are vital. However, there are things a contract can't protect against (reputational damage, for example), and some items are so irreplaceable that no contractual penalty will help. All this assumes the cloud company has the resources to pay your claim but doesn't have the legal clout to water it down. Try suing Microsoft, Google etc. into forced servitude. Even with that mythical beast of a watertight contract, it will be a hard battle.
I am not a cloud fan....
Coincidentally I just got this :-
"<Big Name ISP> is hosting complimentary seminars for CIOs, Finance Directors and business leaders looking to enable business strategy and drive business transformation in new and cost-effective ways."
I am not a cartoonist, so I won't be going, even for a free lunch.
It may not have been that...
...there were no backups, but rather that said backups were sabotaged in addition to the originals, thus preventing complete reconstruction. Backups are prudent policy, yes, but they're still not bulletproof (nor proof against sabotage).
I think I'd prefer the "no backups" theory
since the backups should theoretically have included at least one set off-site and disconnected from the cloud. If the ex-employee also managed to get to the off-site backups, that's a greater degree of incompetence and even more difficult to fix.
The Cloud Didn't Fail
There was no failure of the Cloud in this case.
An employee maliciously deleted files. That is not a system failure. After the employee was fired, his access was not removed: that is failure number one. Failure number two is that the files were stored in only one location. The same problem could have occurred in an Enterprise storage environment. Who is at fault depends on the reasons for the two failures, a detail we don't have.
Not failed ?
So the cloud was a service for the storage and processing of files. The buyer of said service put their files in the cloud. Then the files were not there. The buyer of said cloud service did not delete the files, nor did they forfeit them by not paying.
In what way didn't the cloud fail?
You can argue semantics about WHY the cloud failed (e.g. insufficient security, insufficient backups, etc.), but the fact is that the cloud failed. You can argue that the disks in the cloud didn't fail, you can argue that the servers didn't fail. But the total service, "the cloud" failed.
"The Cloud" faithfully carried out what it had every reason to treat as a valid command from what it had no reason to believe (pardon the anthropomorphizing) was anything other than a valid user; it just wasn't what the other valid users wanted done. Reminiscent of those stupid cars that permit drivers to run into things.
The blame lies mostly with WeR1. Not to have a local backup of company-critical data is inexcusable. Cloud storage should be a last resort for critical data, not the ONLY storage mechanism.
Also, the fact that the backups were not ISOLATED is another problem. A virus could have mimicked the above. Remote access should not be able to compromise whole systems.
"The same problem could have occurred in an Enterprise storage environment"
Not on my watch. Removable storage exists for a reason - a fire could have mimicked this problem. This cloud company is saying they have no backups? Wut? I have two months of rolling backups for our small company's 1TB of data, using removable hard drives. I find it hard to believe that a cloud company doesn't do the same.
With removable storage going for under US$100/terabyte, neglecting to keep local backups is the real "Fail" in this situation.
I've worked in an environment where cloud was the only
strategy, and that was before the cloud became The Cloud(TM). In that environment, it is not unreasonable to make sure the primary storage service provider is producing a level of backup that exceeds your own. That does mean checking some details in the contract but, assuming those were in place, WeR1 is not to blame.
Storing data in the cloud is great, you can access it from all over the world and if your house burns down or you lose your disks then you can get everything back, but I would never trust that as my only copy of something so important.
I have my emails stored only in the cloud because that's convenient, it would be annoying if I lost them but not the end of the world. My photos are in the cloud on two providers and on my local drive because they're important.
If the saboteur didn't bit-format the drive (as opposed to just deleting the index with a standard format), then surely a commercial data recovery outfit could recover the majority of the lost data?
Depending on the provisioning of the drives, the data may have been striped across multiple drives that would be difficult to remove for recovery purposes, and/or the drives may be so heavily used that the required data are effectively irretrievable, having been overwritten many times since.
Cloudy, rain expected
300GB is nothing these days; this priceless and irreplaceable data could have been backed up with little more effort than a trip to Walmart to buy a $30 USB disk drive.
I'm sure they have some excuse, and I'd love to hear it with my palm upturned supporting such a drive.
I wonder if there was a manager
that insisted there be only one copy because of the "problem" of piracy? Just like the BBC ordered its sent-out copies of Doctor Who to be destroyed so they couldn't be shown again.
no local copy of the data?
According to this article it was only 304GB of data, and it appears that the ONLY place it was stored was on this cloud storage. Surely they should have had a local copy of this data as well; heck, a 500GB SATA drive costs less than £50 and would have been good enough to keep it on.
If the show has been shown on several network TV stations, then it's time to start asking some of the TV stations for copies to get the data back.
then it's time to start asking some of the TV stations for copies
And how do these idiots propose proving it's their copyright? Maybe the TV stations have won some IP.
I don't know, but if it does work: I work for 20th Century Fox and we've lost the disk with all our Futurama episodes on!!!!! (Sounds like a great way to social-engineer someone.)
This doesn't surprise me...
I used to work supporting a certain TV show dealing with supernatural themes... and, well, they were tighter than a duck's butt when it came to spending on critical IT stuff.
They had all their seasons' work on one big 16TB RAID, and NO off-site or other backups.
One day the RAID started squawking because one disk had failed... but instead of calling us, the producer took it upon himself to swap the disk out.
Pulled one disk... oooops, wrong disk. Pulled a second disk... oooops, wrong disk. Pulled a third... still the wrong disk... fuck....
RAID Is now killed...
"Oh well, I will just reinsert all the disks and run chkdsk..."
It was only AFTER he did all this, he called us..
Yeah.. that was a fun conversation..
Did they blame
Ghosts in the machine?
Outsourcing Cannot Remove Risks
Ironic that this item appeared yesterday. This past Friday I presented "Agility, the Cloud, and Accountability: What You Can't Know Can Kill You" as part of the Trenton Computer Festival Professional Conference (presentation at http://www.rlgsc.com/trentoncomputerfestival/2011/agility-the-cloud-accountability.html).
The basis of that presentation was that moving anything (e.g., tasks, processing, storage) to "the Cloud" cannot remove risk, it can only redistribute it. It is also very clear that a "redistribution" can seem to make risk disappear by obscuring it from view. However, in a manner reminiscent of multiple financial crises, it merely moves risk "off balance sheet". It does not destroy the risk. Moving to professional management should reduce the risk, but it is never eliminated.
In "Why Settle on a Hosting Provider? Bandwidth liquidity and other issues", the May 12, 2010 posting to my blog, Ruminations, I noted that providers are vulnerable to resource liquidity crises. Hosting providers who offer "unlimited" usage plans are clearly vulnerable to runs on resources similar to bank runs when more than the expected demand occurs. This is nothing new: bank runs are legendary, as are congestion crises on utility networks during surge periods (e.g., telephone networks on Mother's Day, water systems on Super Bowl Sunday in the US).
Employee malfeasance at a provider has similar risks. Automating processes so that a single individual can run massive infrastructure also increases the risk that a mis-operation (deliberate or accidental) will have system-wide implications.
RAID presents a similar hazard. RAID is a solution to drive failures, not a solution to software errors. A RAID array will dutifully copy incorrect data to all copies.
Risks can only be ameliorated by carefully implementing overlapping protections. There are no "magic bullets".
Loss of Data
This seems similar to the e-mail my wife got telling her that Chase security was partially breached...but anyone should only have her e-mail address. I am a Chase customer; no such notification. Containment is the name of the day, aye?