Reds are OK for any size NAS
I've been running 16X 3TB Reds in my home server for nearly 2 years now without any trouble whatsoever.
I'll probably start swapping to 6TB drives when they eventually start dying.
In a move that will get well-heeled Drobo owners salivating, WD has announced a 6TB Red drive costing just $299, setting the industry a new high-water mark in the areal-density stakes. This is a five-platter drive, with 1.2TB per platter, intended for NAS use, and coming in both 5TB and 6TB versions. It spins, we …
Ben, I've got a pair of 2TB Reds in my DS214+, and they seem to work fine. Fine enough to deliver porn to my desktop at gigabit wire speeds, anyway, and that's good enough for me.
Performance on the DS214+ is fine for my current needs - I may upgrade from 2TB to 5/6TB later in the year, and then replace the NAS with something beefier that supports SSD caching (and sort out a link-aggregating switch etc.) - probably from Synology again - and then I should be set until 10GbE becomes a desktop standard.
Mind you, this is all dependent on me, you know, having money. Which is short at the moment...
It should be noted that link aggregation (LAG, port-channel, etc.) does not usually increase speed between two hosts, because the decision on which bundle member is used is usually based on a hash of the L3 (IP) or L2 (MAC - not good in a routed network) addresses of the peers. So it would generally be beneficial only when multiple clients are transferring data at the same time. And of course for redundancy.
You did not say you expect this, but I often see the misconception that a single client would get faster speeds with a LAG, so I wanted to make it clear.
There are other ways to load-balance, such as round-robin, but those are usually not used because they may cause packets to arrive out of order. I am not sure if that happens often on a simple network with a single switch, but I'd still avoid it.
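For what it's worth, on a Linux box this is the bonding driver's xmit_hash_policy knob. A minimal sketch (interface names and the address are just examples, adjust for your own kit):

    # LACP bond; layer2+3 hashes on MAC+IP, so a given client<->server pair
    # always rides the same member link - no single-flow speedup.
    modprobe bonding mode=802.3ad miimon=100 xmit_hash_policy=layer2+3
    ip link set eth0 down && ip link set eth0 master bond0
    ip link set eth1 down && ip link set eth1 master bond0
    ip addr add 192.168.1.10/24 dev bond0
    ip link set bond0 up

Two clients pulling data at once can land on different links and you see the aggregate; a single client on its own never will.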
TBH it's a tickbox I'd like to enable to make myself seem more impressive than I actually am to people on the internet. :-)
I think what I mean is NIC teaming (i.e. getting 2/4 Gb links talking at the same time) so that iSCSI traffic has more bandwidth for VMing on a 'thin client' VMWare/KVM/whatever server.
I can't really justify 10Gb till I get a proper virtualisation environment set up - which needs money, which as noted, I ain't got at the moment - although hopefully an interview I have on Friday may help with that.
I suspect when SSD cost/capacity starts to get closer to spinning rust, we'll see consumer 10Gb gear drop like a stone in price as more SMBs start using it, chipset design starts ramping up, volumes increase, etc.
Until then, I'll stick to CAT5e methinks. It's a tad more affordable and I don't need long runs...
"I'm thinking of just saying "fuck it" and upgrading to 10 GigE at home"
You're better-heeled than most of us, then. Even low-end 10GBase-T interfaces are £ouch! and switches with more than one 10Gb SFP+ interface are £ouch!ouch! (let alone ones with 10GBase-T interfaces)
"Is there a reason QNAP and Synology or another manufacturer's NAS owners wouldn't be interested in these drives?"
A) Migration is a bitch.
B) 6TB drives cost $virgins, so the chances you'll just march on out and buy 8 shiny new ones to refurb your extant Synology are basically zilch for a couple years yet.
Though this does mean the 3TB disks should start hitting "sweet spot" pricing, displacing the 2TB drives...
Why do you need 10Gb? I run a small business with a two-host cluster. I use SAS HBAs as an interlink for my CSV. Much cheaper than 10Gb, and since I only have two hosts I can still have failover. In time I can add more cards so that I can add other hosts; it's still far cheaper than a 10Gb solution.
Old servers with buckets of bays are great for openfiler too (or readynas etc).
I'd say it has to do with the superposition of vibration modes and resonance you get in an enclosure, and with getting (and maintaining) the head over the track as well as meeting the average seek times they claim (i.e. getting lock...).
The more vibration, the slower the performance due to re-reads etc., and that has a knock-on impact on buffers/caches etc.
In other words, it makes the drive slower under these conditions. And in some particularly badly engineered enclosures, it may shorten its lifetime.
With these disks, 1 petabyte would cost $50,830. The average person (with savings) can now afford 1 petabyte of storage. Several petabytes, if they have a house to sell.
An exabyte would still set you back 1,024 times that, about $52 million. Moore's law says that will fall back to about $50,000 by 2034.
20 years after that, the average person will be able to afford 1 zettabyte of storage, more than all of the data in the world today.
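If anyone wants to check my maths, the back-of-envelope sums (using the article's $299/6TB figure and a very loose "halving every two years") go roughly like this:

    # ~1 PB at 6 TB per drive, $299 per drive (the article's figure)
    drives=$(( 1024 / 6 ))                 # ~170 drives
    pb_cost=$(( drives * 299 ))            # ~$50,830
    eb_cost=$(( pb_cost * 1024 ))          # ~$52 million for an exabyte
    echo "1 PB: \$$pb_cost   1 EB: \$$eb_cost   1 EB in ~20 yrs: ~\$$(( eb_cost / 1024 ))"

(Ten doublings over 20 years is a factor of 2^10 = 1024, which is why the exabyte falls back to roughly today's petabyte price by 2034.)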
Given the current economic climate, I'd say that the average person doesn't have savings.
Average assets of UK adult March 2014 = £147,000 (source: AOL), of which £20,000 savings
Also, I said "the average person (with savings)" and not "the average person". Hope this helps.
So the average person (with savings) can't afford $50,000 of storage? Unless £1=$2.5
The post was a wild conjecture on the next 40 years' storage prices, designed primarily for amusement. The figures in it are hugely approximated and not designed as financial advice. But yes, I was aware of the small disparity. Welsh football pitches.
"Average assets of UK adult March 2014 = £147,000 (source: AOL), of which £20,000 savings"
Yep, and there are quite a few people in the UK with hundreds of millions in savings (they probably count cash or 'cash equivalents' in the metric) - wonder how that affects the "average"? What you want is the median assets. Always. The mean is then useful for comparison, to show the skew. This is irrespective of "with savings" or "without savings", as it then gives an idea of the inequality we all know to be present.
Around 12 years ago I had a dream of a crazy RAID setup at home that would give me 2TB of storage. I'm going loosely off memory here, but I think it would have been around 8-10 250GB drives.
The advantage it offered me was to allow me to erm... record many more TV shows from my TV Capture card.
I thought about the prospect of spending around £1,500 on the exciting and exotic setup before realising that actually, I really didn't need that much data storage and it was a really expensive way of saving TV shows. I also reasoned that it wouldn't be all that long before that sort of capacity was available in one drive.
I'm glad I spent my money in more useful ways now.
@Alan Brown One thing that has failed to track Moore's law is network speeds, I think. It took roughly 20 years to go from 10Mb/s to 1,000Mb/s, an increase of only a hundredfold. Over 20 years, Moore's law should increase a quantity roughly 1,024 times.
None of which has made backing up these large disks any easier.
I look forward to hearing about it!
I only have 4x2TB RE4-GP... I like that idea though! Do you have heat problems?
As for migration, using Linux MD with RAID 1 (UUUU) and RAID 6 (UUUU): since the new drives have been so much larger in previous upgrades, I have always put full copies at the "end" of the new disks, so that they get a stress test as well as serving as another way of keeping copies of the data. Of course, adjust the encryption level of the "temp" copies to match the highest level of your data...
Failing a drive on purpose is always a bit nerve-wracking. So I hook the new drive up via external SATA (!), give it dd if=/dev/zero of=/dev/sdN bs=1M to pre-write the entire drive, generate the bulk copies, then fail one drive, swap in the new one from the dock, and allow MD to sync. This can take a while...
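Roughly, the dance looks like this (device names and md0 are placeholders, not a recipe to copy blindly):

    dd if=/dev/zero of=/dev/sdX bs=1M            # pre-write / stress-test the new drive in the dock
    mdadm --manage /dev/md0 --fail /dev/sdY1     # fail the old member on purpose
    mdadm --manage /dev/md0 --remove /dev/sdY1
    # physically swap drives, partition the new one to match, then:
    mdadm --manage /dev/md0 --add /dev/sdX1
    cat /proc/mdstat                             # watch the resync crawl along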
So far, Linux MD has been very good.
P.
I did that for my father... I bought him a new PC with a 20GB drive, keeping the old 10GB one as backup. In the end he had a machine with a 160GB, an 80GB, a 40GB, and a 20GB drive. All working perfectly.
With this one, I hope he never finds out it exists before getting rid of his old rig.
I do the same on a house scale. Old drives from the house main server RAID set are first re-purposed as MAID for media, then as backups, then as desktops (going all the way back to the days when the server was a K6 with 2 40GB Maxtors).
The current set is due for a change soon. I probably won't go with WD though (my luck with them has been terrible).
So you get to move all your data three years from now into, yet again, something new... Yum!
Twenty days ago, in a Reg article ("New Research: Flash is Dead"), it was made apparent that no matter what you use, your data degrades in 3 to 5 years... What to do? Their white paper suggested a sci-fi solution that might happen someday...
IMHO: until then, embrace this new thing, realizing you get to move stuff continuously until it's no longer needed...
Caveat: the old IBM 386 SX-20 desktop boxes running MS-DOS 5 had no way to actually back up data... you needed larger hard drives to continue daily work... many fixed the backup problem by just buying new machines as the old ones filled up and letting corporate IT worry about data retention... (my solution was to build a grey-box workstation using a gaming motherboard and three CD-ROM storage drives). RS.