
Buying a petabyte of storage for YOURSELF? First, you'll need a fridge

A good friend of mine recently got in contact to ask my professional opinion on something for a book he was writing. He asked me how much a petabyte of storage would cost today and whether I thought it would be affordable for an individual. Both parts of the question are interesting in their own way. How much would a petabyte of …

COMMENTS

This topic is closed for new posts.


Thumb Up

Assuming you don't need redundancy...

You could fit 1PB in a shorty rack, assuming four 4U, 60-bay boxes and 4TB 3.5" drives, plus a switch to lash them together. That probably wouldn't fall through most people's floors; but powering and cooling it might be a different matter!
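As a quick sanity check on that build, a sketch using only the figures in the comment above:

```python
# Sanity check on the "shorty rack" build above: four 4U 60-bay
# boxes full of 4TB drives.
chassis, bays, drive_tb = 4, 60, 4

raw_tb = chassis * bays * drive_tb
print(raw_tb)  # 960 TB -- actually just shy of a decimal petabyte (1,000 TB)
```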

4
0
Anonymous Coward

Re: Assuming you don't need redundancy...

Where can you get a 4U chassis which will take 60 x 3.5" drives plus motherboard/controller?

I know Supermicro do one with 36 slots (24 front / 12 rear), or you can get 45 in a custom 4U box as per Backblaze. So at a push you could do this in 6 x 4U, and that would even give you some overhead for RAID drives too.

Of course, if you are prepared to pay for a petabyte of storage, then almost by definition the data is valuable to you. So for me, the big question is this: how are you going to back it up?

0
0

Re: Assuming you don't need redundancy...

>>Where can you get a 4U chassis which will take 60 x 3.5" drives plus motherboard/controller?

I assume he's talking about a toploader rather than a front/back loader, like the DNS-1660, which has 60x 3.5" bays. It's about $10k, although I've only ever seen it specced with 2TB drives.

1
1
jai
Silver badge

Does it really need all that much stuff?

First, you'll need a fridge. Plus a few hundred thousand quid, a reinforced floor

You can get 20TB with one of these:

http://www.lacie.com/products/product.htm?id=10607

So you'd only need somewhere to store 50 of those - and $109,950 to buy them.

Yes, sure, this probably isn't the ideal enterprise solution, but when talking about personal use, if you decided you absolutely had to have a petabyte of data, and you have a spare room or basement space, you could do it. AND if you set it up in your basement, you'd probably never need to heat your house :)

2
0
Silver badge
Happy

Re: Does it really need all that much stuff?

"Does it really need all that much stuff?" - to make it useable/useful, yes.

Plus something for them to plug into and consolidate it into a storage array. Thunderbolt allows 7 (or 6? not sure if it includes the host) daisy chained, so a machine that has 8 or 9 ports would be in order. Plus a floor that would support ~ 400kg, and power into the room that could support ~3kW (plus whatever server) across 50 plugs. In one room.

For "race to the bottom" storage, I picked up a 3TB external USB3 drive from PC World over christmas (was an emergency in my defence!) for under £120. Get 334 of them for £40K. But by the same token I wouldn't consider 6,100 floppy disks as a useful alternative to a DL DVD :-)

0
0
Stop

A floor that supports 400kg?

A floor strong enough to support 5 biggish guys? I'd be worried if you thought any of the rooms in your house couldn't deal with that.

2
0
Anonymous Coward

Re: A floor that supports 400kg?

Long-term loads are significantly different to your example. A 400kg load would need to be spread over multiple joists, otherwise you'll rapidly end up with a dip in the floor which will not recover.

0
0
Anonymous Coward

Bitcasa?

You could just use Bitcasa instead. Then you get a virtual drive with unlimited storage.

0
0
Meh

Re: Bitcasa?

I just had a quick look: there's currently a free trial, then 10GB remains free, and there's a deal at $10.00/month for unlimited storage.

Now that sounds great but, as with all things cloud, you have the issue of data security (even if it is only your old photos and music).

will Bitcasa last?

will Bitcasa have an elegant back out solution should the company fold?

will they get you hooked and then up the ante by charging $20 / month?

Ultimately, will they have a "fair use policy"? (Since we all know nothing is "unlimited" in this day and age.)

Most readers here would have enough cynicism to tell Bitcasa "you can have my data when you prise it out of my cold dead hands!"

3
0
Silver badge

Re: Bitcasa?

Will all your data get deleted by the authorities after a police raid?

Megaupload anyone?

5
0
Anonymous Coward

The rights man, the rights

"seems madness to me that a decent proportion of the world’s storage is storing redundant copies of the same content"

Yes, mostly, but for telly it gives you more control - iPlayer giveth and iPlayer taketh away, depending on the rights negotiated for that show, but my local copy will still be good in a year.

17
1
Unhappy

Re: The rights man, the rights

Very true. The cries of incomprehending agony from a 2-year-old when "The Gruffalo" and "The Gruffalo's Child" disappeared after a month, about a year ago, having been watched daily for that month... The scars are still fresh.

6
0
Silver badge
Boffin

Re: The rights man, the rights

".....but my local copy will still be good in a year." True, but how much of it do you actually go back and watch even once? I'm continually rooting through my Sky box to delete content the family or I have recorded and then watched and forgotten about. On the shelves I have 200+ DVDs purchased and the majority only watched once, despite my conviction at purchase that the films concerned were so good I'd be watching them regularly.

But I do have a couple of TBs of unique family pics and videos I do want to keep "forever", which are backed up to tape and to a cloud host. I don't necessarily need lots of fast spinning rust in my man-cave, but I still do need some storage somewhere.

2
5
Silver badge

Re: The rights man, the rights

"...but for telly it gives you more control..."

And doesn't leave you without your data when your link runs slow/stops.

My connection was playing up at home last week - it turned out to be an iffy filter on my phone line - and I couldn't watch iPlayer for more than a few seconds at a time till the fault was found and fixed. It may take longer when phone poles come down or road workers dig up your fibre.

2
0
Silver badge
Linux

Re: The rights man, the rights

> True, but how much of it do you actually go back and watch even once?

That's a separate issue. Even if I only watch something once, it will be a much better experience if it is coming from my own media hoard, on my own high-speed network, using my own chosen player.

Since it's under my control, I have all the time in the world to get around to it. I will never have to worry that someone's licence expired. I won't ever have to deal with network outages or bandwidth caps.

Space is cheap. Content is also cheap.

3
0
Anonymous Coward

Re: The rights man, the rights

Probably true for most people, though I know that when I was in my early 20's I wished I'd had regular access to many of the films on my shelves that I now don't watch. I guess that's just life and one's changing interests.

As for family photos and vids ... sure we all think they're gold but in reality they also lie unviewed for decades if not forever, with perhaps a few choice pictures turned into wall space.

I remember back in my 20's picking up photos and old film from a house clearance ... I guess the advantage of it all being stored digitally is that it's easier for someone to just erase it all when you're dead.

0
0
Anonymous Coward

Reference point

We have a rack of storage with 140 * 1TB disks and 4 SSD for intent-logs, that lot consumes just below 3kW including dual heads and could reasonably provide 100TB of 'raw' protected storage depending on the RAID configuration and spare disks, etc.

4TB disks are available and they would push that to about 400TB/rack, so you could get over 1PB of protected storage in 3 racks (each about 400kg?) and with a total power consumption of around 12kW including some air conditioning.

Certainly far from a home system!
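Scaling that reference rack takes a few lines. A sketch; the 3kW-of-aircon split is an assumption, since the comment only gives the ~12kW total:

```python
# Scaling the reference rack above from 1TB to 4TB disks.
protected_tb_per_rack = 100 * 4      # ~100TB at 1TB disks -> ~400TB at 4TB
rack_kw = 3.0                        # just below 3kW per rack, as quoted

racks = -(-1000 // protected_tb_per_rack)    # ceil(1PB / 400TB) = 3 racks
aircon_kw = 3.0                              # ASSUMED split; only the ~12kW
                                             # total is given above
print(racks, racks * protected_tb_per_rack)  # 3 racks, 1,200TB protected
print(racks * rack_kw + aircon_kw)           # ~12 kW all in
```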

2
1
Silver badge
FAIL

Several problems that I can see

1) You have to trust that what you give to other people to store is going to be safe. Megaupload comes to mind here. Regardless of the rights and wrongs of the case, a lot of people have had their stuff taken away from them with little chance of recovery and through no fault of their own.

2) Where is the data going to be stored? Yes, in Europe there are regulations that ensure things are protected against snooping, though even those can surely be got round. Agreements with the USA are, again, in place, but the US government seems to have a cavalier attitude to anything they think comes within their ambit. And finally, what if the provider decides that a nice cheap server farm in, say, Uzbekistan is what they really need - what protection do you have then?

3) Cost. I don't know about you but I have to pay for the pathetic broadband service that I am forced to use, too far from the exchange, rural area with little chance of fibre connections etc. So if I decide to ditch my local storage I will in effect have to pay rent to get things done.

4) At the risk of repeating good ole Bill G's remark about 640K being enough for anyone: what would you do with a PB of storage, unless of course you run something like Pixar?

No - with the advent of SSDs in ever larger capacities, if I need to expand my data storage that's what I'll use, keeping MY stuff firmly away from this cloud thing that everyone seems to want to push us into.

One thing's for sure: they aren't doing it as an act of charity. They see a way of collecting rents in perpetuity, and personally I'm not buying that.

9
0
Bronze badge

Re: Several problems that I can see

"what would you do with a PB of storage, unless of course you run something like Pixar"

Well, let's see...

DVDs are about 5GB per disc for basically SD video.

Blu-ray bumped that up to around 30GB for HD video.

Extrapolating, 4K video would be about 120GB and 8K would be pushing 500GB. Anyone know what MegaImmersiveHolySh*tIsThatRealHD requires? ('cause ya know we ain't done yet). And that's just resolutions. Framerates appear to be on the rise too, if "the Hobbit" is any indication.

Photo resolutions keep bumping up, audio bitrates keep increasing along many axes.

Yeah, much of that can be stored in the cloud. Once we all trust the cloud...

...and have reliable access to the cloud from wherever...

...and have enough bandwidth for the larger streams...

Once, Mega was the stuff of storage; now it's barely memory. Giga was the stuff of dreams; now you need at least 2 of them just to start your PC. Peta, Exa, Zetta and even Yotta will all eventually fall into the personal orbit, even if we can't imagine how at the moment.
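To put those extrapolated per-title sizes in petabyte terms, a quick sketch - the sizes are the rough guesses from this comment, not measurements:

```python
# How many titles fit in a decimal petabyte at the rough per-title
# sizes guessed above?
sizes_gb = {"DVD (SD)": 5, "Blu-ray (HD)": 30, "4K (est.)": 120, "8K (est.)": 500}
pb_gb = 1_000_000  # 1PB in decimal GB

for fmt, gb in sizes_gb.items():
    print(f"{fmt}: ~{pb_gb // gb:,} titles")
# DVD ~200,000; Blu-ray ~33,333; 4K ~8,333; 8K ~2,000
```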

1
0
Anonymous Coward

Re: Several problems that I can see

And yet the general quality of the content hasn't managed to keep up with the production quality.

Now, I loved The Hobbit in all its 48fps glory, but if I buy it for home consumption I will get it as a DVD. Why? Because it's good enough.

Even today the vast majority of TV isn't HD, and much of what is HD isn't even 1080p. So I'm going to suggest that unless the cost of 4K etc quickly plummets to the £500 mark for a reasonably sized screen, it's going to be another one of those technologies that simply disappears or takes forever to become 'common'.

Personally I think there are far more exciting things to be spending money on than a screen that will hardly get used and, when it does, won't have enough 'native' content to impress. However, I appreciate that there are a lot of people who don't have lives and become traumatised if they miss their favourite shows.

0
0
Bronze badge

Re: Several problems that I can see

Oh, I agree whole-heartedly. Personally, I'm usually much more interested in the story than the resolution or the framerate, but then I do know people who will spend $200 on an audio cable to get that extra half dB of dynamic range or whatever it is a fancy cable gets you.

Besides, all these technologies will plummet to cheap as chips eventually if history is any indicator.

0
0
Silver badge
Boffin

Re: Several problems that I can see

Yeah ... cloud storage ain't going to be the sole solution. Cloud outages will ensure that all of us keep storing stuff at home for years, not to mention avoiding the Megaupload situation, bandwidth caps and such. Even if we had infinite bandwidth and no legal issues, it would be like moving out of your house and paying rent forever. And "renting out storage" in the 'net is much more expensive than just buying a ton of HDDs, or even SSDs.

For example, you can get a 10TB RAID0 Thunderbolt device from LaCie for $1,100. On a certain "cloud storage" provider, 1GB (and REAL GBs, the 1024-based ones, not the fake 1000-based ones HDD manufacturers use) costs 10 US cents a month. That works out at $931.32/month. That is, within 2 months cloud storage ends up being *more* expensive than an equivalent storage option which is not only local, but has a stupidly high transfer rate (750MB/s).

So I don't see the cloud taking over for everything we want anytime soon.
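The break-even in that comparison works out like this - prices as quoted in the comment above; the cloud provider is unnamed there, so the rate is illustrative:

```python
# Local 10TB Thunderbolt array vs cloud at $0.10 per GiB-month,
# using the prices quoted above (provider unnamed).
device_cost = 1100.0                # 10TB LaCie RAID0, as quoted
rate = 0.10                         # $ per GiB-month, as quoted
capacity_gib = 10 * 10**12 / 2**30  # marketing "10TB" = ~9,313 GiB

monthly = capacity_gib * rate
print(f"${monthly:.2f}/month")                            # $931.32, as above
print(f"break-even: {device_cost / monthly:.1f} months")  # ~1.2 months
```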

0
0

Disappointing...

That what I thought would be an interesting article with some facts and figures ends up wishy-washy, with the answers supplied by the commenting community.

Upvotes to all those that could be bothered to supply the relevant information.

10
1
(Written by Reg staff) Silver badge

Re: Disappointing...

"the answers supplied by the commenting community"

Well, that's half the point of enabling comments on Martin's storage blog. His Storagebod pieces are supposed to generate discussion on top of our usual storage coverage. The comments aren't just there to let off steam at work by throwing rotten tomatoes at writers :-)

C.

2
3
Joke

I'm pretty sure some of the staff here could hit a petabyte in storage quite easily...

0
0

'tis an out-of-place ending.

I did wonder why this article ended on a completely different note to the one you might expect; it's almost two articles in one, the second without a title. It was a good article though, and worth reading, which is what it's all about in the end.

0
0

I read a news article recently (forgot where) which said some boffins have just invented a 1TB flash drive...

1
0
Bronze badge

Good point

There's no explicit mention of spinning HDDs in the question. 1024 flash drives hooked up together would certainly be lighter and less power-hungry than HDDs. Wiring them up would be a tad trickier though...

0
0
Anonymous Coward

Re: Good point

And the cost of said SSD?

For the immediate future HDDs offer much lower cost/GB than SSDs, though that may well change as other non-moving (but not necessarily flash) storage becomes available.

2
0
FAIL

LTFS...

"Never underestimate the bandwidth of a van full of tapes"

So if it's just a peta of storage and no need for fast access/IO, I propose a second-hand LTO5 with one drive slot and LTFS, shelves in a cellar and a tape monkey. With the server AND cooling I estimate the power draw at 1kW, plus bananas for the monkey...

With a good Lego Tape Autoloader you can even automate the process, economise on bananas and fire the monkey... (www.youtube.com/watch?v=HzprDvw8rOk)

1
0
Bronze badge

Fairly easy to work out

If you are looking for the cheapest possible solution:

250x4TB disks @ £135 = £33,750

If you intend to have a reasonable degree of reliability, you'll probably need to run these in RAID60 arrays of 4-disk RAID6 sets, which means you'll actually need 500 disks. You'll also preferably want to at least mirror this, so call it 1000x 4TB disks - a total of £135,000.

Then you'll also need 1000 SATA ports. If your load is IOPS-bound rather than MB/s-bound on each disk, you can probably use some cheap 1:5 SATA port multipliers, but assuming you put each 4-disk RAID6 set onto a separate multiplier, you'll need 250 SATA controller ports. If you are keeping it down to the cheapest possible cost, you can probably get away with using one of the cheap 4-port PCIe controllers based on the Silicon Image 3124 chip, at about £10/port (both for the SATA cards and the multipliers), so about another £10,000 in total.

For this you will need 63 PCIe slots. Assuming an average motherboard has 4 PCIe slots, you would need 16 motherboards. You could potentially halve this if you use 8-port SATA/SAS cards if it turns out to be more economical to do so.

Then you'll also need a case for each of those 16 motherboards that can take 64 disks each - this is likely to be the most problematic part; the choice past about 24x 3.5" slots is very limited indeed. Add a few thousand for that.

Maybe £150K in total, with decent redundancy, if you do it on the cheap. Add a zero or two on the end if you buy a branded solution from a $LargeStorageVendor.

For the power budget, you're looking at about 8W/disk and about 200W/motherboard (including the CPU, networking gear and SATA cards), so 8,000W for disks + 3,200W for the motherboards = 11,200W. At about 12p/kWh, that's about £11,775/year in electricity costs. In theory all the components will come with about 3 years' warranty, so having a few disk failures per day shouldn't be adding to your costs.

The real problem with this sort of setup isn't the actual storage itself - it's what you are going to do about the backups. That's where you start getting into _real_ difficulties.
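For reference, the poster's estimates above collapse into a few lines - all unit prices and wattages are their rough figures, not quotes:

```python
# The DIY-petabyte bill of materials estimated above, in one place.
disks = 1000                    # 250 data-equivalent x2 (RAID6) x2 (mirror)
disk_cost = disks * 135         # GBP 135 per 4TB disk = GBP 135,000
sata_cost = 10_000              # ~GBP 10/port for cards and multipliers
boards = 16                     # ~63 four-port PCIe cards / 4 slots each

watts = disks * 8 + boards * 200      # 8,000W disks + 3,200W boards
annual_kwh = watts / 1000 * 24 * 365  # ~98,112 kWh/year
print(disk_cost + sata_cost)          # GBP 145,000 before cases
print(round(annual_kwh * 0.12))       # ~GBP 11,773/year at 12p/kWh
```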

2
2
Anonymous Coward

Re: Fairly easy to work out

I think that for arrays of this size and with your desired level of redundancy you're better off abandoning the traditional RAID algorithms and using the slightly more general (in the mathematical sense) Information Dispersal Algorithm. Using your example of RAID6 (or RAID60, since they both have the same space efficiency of 1/2) with mirroring (RAID1, again with space efficiency of 1/2), you need four times as many disks to protect against drive failures. The failure cases that this can deal with are:

* up to two disks (of four) in a RAID6 pool failing

* one RAID60 controller failing (since you have RAID1 mirroring above it)

Unfortunately hybrid RAID solutions have to deal with many failure cases and besides making things very complicated, your example has some failure cases that it can't deal with:

* faulty top-level RAID1 controller

* simultaneous failure of three or more drives in a RAID6 pool or the RAID6 controller with something similar happening in the mirror RAID6 pool

The first case is the biggest (single) point of failure in the scenario you describe. The second case seems like it's probably rare enough not to worry about, but only if you're just factoring in the failure rates for drives and controllers. In reality, you're probably talking about a single system (motherboard, power supply, RAM, NICs, etc.) controlling each RAID6 (or maybe RAID60) pool, so you're putting a lot of eggs into one basket, and your actual failure rate is going to be higher than a simple analysis of drive and controller failure rates would indicate.

There's another aspect to all this you need to factor in, too, and that's your networking costs. Assuming that you actually need to access your petabyte of storage (and it's not just for occasional archival use), you're going to have to invest in some serious, redundant networking hardware too. You'd probably also want to build a compute cluster in parallel, so that you can distribute your workloads and effectively move them closer to the data they need to work on. Otherwise, there's not much point in having a petabyte of storage if you can't access it without your network becoming the bottleneck...

So those are some of the disadvantages of RAID (and petabyte storage in general). I'll follow up with another post describing how I think IDA would be better.

0
0
Anonymous Coward

Re: Fairly easy to work out

Continuing on from my post above, I'd like to say why I think an Information Dispersal Algorithm (IDA) scheme would be better than RAID (or specifically RAID1 over RAID60) for this type of thing. First, it is basically the same idea as RAID, except that instead of distinguishing between stripe blocks and parity blocks, each block (or "share", in IDA parlance) is created equal. In RAID, the controller would usually (since it's more efficient) first try to reconstruct the data by combining stripes, and then if there's a checksum error it will try to fix the problem by referring to the parity blocks. In contrast, IDA doesn't have this distinction, so it picks k blocks at random (the "quorum") from the n blocks that are actually stored (eg, k=2, n=4 for a RAID6-like scheme). Like RAID, it has to detect problems in the reconstructed data and rebuild the block where possible.

So far, there doesn't appear to be much difference between RAID and IDA. The key difference lies in IDA's suitability for a distributed network storage system. Each IDA block in this sort of scheme would be stored on a completely different network node, so that instead of RAID's complicated failure modes we'd really only have two failure cases to deal with: either the network fails, or a disk or node does. What I said about investing in redundant networking capabilities in the last post applies to IDA too, and by a combination of redundant links and having the right network topology we can handle a lot of transient networking failures.

We're still going to have disk or node hardware failures. Some of these will be transient and some will be permanent. Recall that so long as k IDA blocks (usually named "shares") are available we can still reconstruct the original data, although with a reduced redundancy level ((n-1)/k). One of the beauties of the IDA system is that it's possible to dynamically alter the level of redundancy, so that if we detect that we now only have n-1 shares in total, we can still reconstruct the data and then regenerate a single new share to bring us back up to the n/k redundancy level. If the error was transient, then there's no problem. We can keep the new share, but the redundancy level for that block is now (n+1)/k (ie, we have n+1 shares, but we only need k of them to reconstruct the original data).

Alternatively, if we later decide that we want to increase the redundancy level of a collection of files as a whole, all we need to do is create new shares for each block (*note). Or we can assign different redundancy levels to different collections of files, based on how important they are. That's your backup problem solved right there, provided you have the storage. If you wanted to achieve something similar with RAID, you'd end up having to build completely new RAID arrays (and copy or rebuild the files in situ) and make sure that each disk has enough storage for future requirements. Managing a heterogeneous collection of RAID arrays like this (each with differing disk sizes and redundancy characteristics) would be a nightmare. In contrast, the IDA scheme scales very easily, simply by adding more networked disk nodes and changing quorum characteristics. In fact, you can mix and match so that each node acts as storage for several different IDA collections, each with its own redundancy level, so the amount of reconfiguration needed could actually be negligible.

Besides flexibility and simplified, orthogonal failure cases, IDA is also at least as space-efficient as the equivalent RAID system. The storage requirement of a system where all but k of n nodes can fail is simply n/k times the size of the stored data. RAID can be less efficient, due to unnecessary 100% mirroring (where IDA's "fractional" shares would work better) and because, for guaranteed levels of redundancy, it has to use more complex schemes than are mathematically necessary, since the single-point and compound failures I mentioned in my first post lower the real reliability figures.

I've made posts about IDA before (as an AC, because I wrote a Perl module to implement it and I don't want to link to it from my regular user handle) and often people would complain about network latencies and so on. However, if you're talking about large storage clusters like this then you absolutely need it to be networked. The relative benefits of IDA over RAID in a networked environment are not something I'll bore you with here, though I will say there are a few other pretty interesting features of note:

* using multicast channels to send the file to be split across the network in its entirety, while each node calculates and stores its shares locally, is pretty efficient (RAID could do this too, but disks tend to be local rather than network-based)

* readers can request >k shares in parallel to reduce read latency (simply use the first k blocks that arrive)

* the act of distributing shares also implies a degree of cryptographic security, so storage nodes could even be kept on the Internet, and an attacker would need to compromise at least k of them to read your files

Sorry for the long post... IDA is a bit of a fascination for me, as you can see.

*note: IDA can dynamically vary n, but if you need to vary k you need to rebuild all the shares and discard the old version.
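Not the AC's Perl module, but a toy sketch of the split/combine mechanics described above, for k=2, n=4 over the prime field GF(257) - real IDA implementations work blockwise over GF(2^8):

```python
# Toy Information Dispersal: k=2, n=4 over GF(257). Any 2 of the 4
# shares reconstruct the original byte pair (a, b). Illustrative only.
P = 257  # prime modulus; note share values can be 0..256 (9 bits)

def split(a, b, n=4):
    # share i is the polynomial a + b*x evaluated at x = i (mod P)
    return [(x, (a + b * x) % P) for x in range(1, n + 1)]

def combine(two_shares):
    # solve the 2x2 linear system for (a, b) from any two distinct shares
    (x1, s1), (x2, s2) = two_shares
    inv = pow((x1 - x2) % P, P - 2, P)  # modular inverse via Fermat
    b = ((s1 - s2) * inv) % P
    a = (s1 - b * x1) % P
    return a, b

shares = split(42, 99)
print(combine([shares[3], shares[0]]))  # -> (42, 99) from any two shares
# Regenerating a lost share (the dynamic redundancy trick described
# above) is just re-evaluating the polynomial at a fresh x.
```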

1
0
Bronze badge

Re: Fairly easy to work out

@AC:

Interesting stuff, most informative. Unfortunately it also means using relatively proprietary technology. Thinking about it, with quadruple redundancy (4-disk RAID6, mirrored) you might actually be better off with straight Hadoop/HDFS mirroring.

What you say about IDA sounds reasonably close to what Cleversafe used to offer as a software product, but they seem to have disappeared their free open-source offering since then.

0
0
Silver badge
Coffee/keyboard

And the point of the article is?

Slow news day?

Guilty journalist desperately trying to find something to write about?

0
2
Happy

Re: And the point of the article is?

Despondent journalist who didn't get the CES beanfeast, more likely. :)

2
0
Boffin

1PB of storage? Quite simple - recipe here: http://blog.backblaze.com/2011/07/20/petabytes-on-a-budget-v2-0revealing-more-secrets/

You'd need 8 of these "storage pods" to hold 1PB, each weighing around 150lb. So that's 1,200lbs - or about 544kg for us metric types. The blog claims a cost of $7,384 per pod, so about $59k for 1PB.

Probably a bit over the normal household storage budget, but not impossible if you really need that much space.
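The arithmetic checks out. A sketch; the 135TB pod capacity is taken from the linked v2 blog post, weights as given in the comment:

```python
# Checking the Backblaze pod-v2 numbers from the linked post.
pod_tb, pod_cost, pod_lb = 135, 7_384, 150   # 45x 3TB drives per pod

pods = -(-1000 // pod_tb)                    # ceil(1PB / 135TB) = 8 pods
print(pods * pod_cost)                       # $59,072 -- "about $59k"
print(round(pods * pod_lb * 0.4536), "kg")   # 1,200 lb is ~544 kg
```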

4
0
Thumb Up

Transporters?

I assume that once we all have transporters installed at home, the data, when digitised, is buffered at the far end and verified prior to reconstructing an individual. You really wouldn't want a comms failure when you are only half-way home!

So it seems reasonable that a full human image, including every physical feature and all memory data, would need to be stored in the destination transporter before the reconstruction begins, and this could, I'm sure, take several petabytes, especially if you have one of those 6-person transporters in your home.

0
0
Thumb Down

Re: Transporters?

But what happens if you transport a 1 petabyte disk array home? Does the data survive?

1
0
Happy

Re: Transporters?

If your data was recorded every time you transported, could you then go back to an earlier version? Say - one that doesn't have this massive hangover? Or a backup of your wife before you upset her? ;)

1
0
Anonymous Coward

How cheap will flash (or son of flash) get?

If we imagine TLC flash being churned out by the hectare in a few years' time on amortised lines, a PB of the stuff wouldn't weigh much or use nearly as much juice as the rotating-rust solution. Cost to buy? I don't know, but it might be less than one thinks.

0
0
Unhappy

Yawn

Please can we stop with this whole "spinning rust" thing? It's infecting Trevor Pott's articles too, and is a little bit too smug for my liking ("ooh, look at me - I know how things work"), not to mention completely incorrect - these days the magnetic medium on the platters of most HDDs is a cobalt alloy, most definitely not iron oxide...

15
0
Bronze badge
Stop

Re: Yawn

+1

There's too much of this shite creeping into articles on the Register.

Redmond instead of Microsoft, Cupertino instead of Apple. Aren't we clever knowing where a company is headquartered? No, you're not.

Then there's 'bird' instead of satellite. It's not big and it's not clever. It's lazy cliched journalism which detracts from the reporting.

1
1
Pie

No one has mentioned the cloud yet....

Amazon would do you for around $65,000 a month. Upload is free - you may need a fat pipe to do it - but at least you wouldn't be charged for space you weren't using.

0
2
Holmes

Answers

Dear Martin, to answer your questions:

"But will we ever need a petabyte of personal storage?"

Yes, I need at least 2.5 petabytes so I can take a backup copy of my brain.

"How many copies of EastEnders does the world need to be stored on a locally spinning drive?"

None. Not one. Delete them all, everywhere, and make the world a slightly happier place.

7
0
Bronze badge

Re: EastEnders

Beat me to it. Zero is the correct answer.

1
0
Mushroom

Re: EastEnders

Could we have a negative amount of EastEnders? A sort of anti-matter EastEnders which, when it meets the EastEnders we all know and hate, vaporises, and with it annihilates the collective consciousness that such drivel ever existed?

now that WOULD be good.

0
0
Silver badge
Pint

I've bought one already...

I've bought one already, but it turned out to be only 888.1784197 TB due to the usual 1000/1024 marketing confusion.

"First, you'll need a fridge..."

Fail - I think (assuming that you intend to install the 1.0PB array inside the fridge for cooling). A typical "fridge" would probably be overpowered by a heat source of (on the order of) ~200 watts inside, leaving only the fantastic insulation to melt your expensive array. You'd have much better cooling if you left the fridge door open.

0
0
Anonymous Coward

1 whole PB all for me..... Hmmmmm...

1PB - all to myself? Well - my photographic collection is getting pretty big now.

To add fuel to the anti-cloud group: a few years ago I began to dabble with cloudy stuff - being a bit suspicious of it, but it was all new then - and found Humyo, and got a cheapskate freebie 2GB account to test it with. Being paranoid about security, I didn't want my data stored in another country whose data laws I don't trust, and Humyo were based in a former vault of the Bank of England. Can't get more British than that! Then they were bought by Trend Micro. Who moved it to Germany. Then sent me an email saying freebie accounts were toast and if ya want to use us ya gotta pay!

So I lost all my data as the account was deleted, cos I didn't wanna pay. I'm not anti-pay btw - I just want my data in my country. But whilst testing, the upload speeds were appalling - I had to leave it uploading overnight, and download wasn't much better. So I personally prefer personal storage, and what they say about contingency plans in case something happens and so forth is a real concern.

1
0
Bronze badge

Re: 1 whole PB all for me..... Hmmmmm...

> > it seems madness to me that a decent proportion of the world’s storage is storing redundant copies of the same content.

> > How many copies of EastEnders does the world need to be stored on a locally spinning drive?

Regardless of how much data you need to store, it doesn't all need to be on a spinning drive. You don't keep a collection of DVDs in continual motion. The whole hypothesis was silly.

0
0

