Current flash memory is cack, that's why.
When memristor tech appears it will make flash SSDs seem like floppy drives.
No matter how much storage space you get, and no matter how much you free up later on, it always gets stuffed to the gills. I am, of course, talking about my attic... and the garage, and indeed the garden shed. Many reasons for this have been mooted, including the need to do something with my kids' childish belongings as they …
Help yourself to an upvote. Presumably your downvoters didn't actually go off and read that article, or if they did they're too dumb to understand the consequences of it. Or maybe they're just cross that they've spent a lot of money on something that's going to be very obsolete pretty soon! Me? I plumped for Seagate's hybrid drives as a cheap 'n' happy tradeoff, an interim measure until memristor rolls into town.
Memristor is coming and it is going to make almost all current storage technology obsolete big time, including DDR<n> RAM.
Time to buy HP shares? Quite possibly!
The problem there is that these new technologies are showing up all the time, and inevitably there are always a bunch of new caveats that come with them. Only once we have our hands on early devices and get to play with them will we be able to say MRAM is replacing anything... and that assumes it can ever be commercialised and pass QA (many techs get early hype and then never appear because the cost didn't make sense, or they couldn't be mass-produced right, etc.).
So far none of the new NVRAMs similar to MRAM have lit the world on fire. I believe TI's chips using FRAM (similar in end result to memristor RAM) are flaky and often under-perform in both their power envelope and performance specs.
@Arthur 1, good point, but in this case there are a number of indicators that are interesting.
1) Of all HP's research efforts this is about the only blue-sky thing they kept. I know HP aren't what they once were, but they believe in it enough to keep going despite it having nothing to do with their existing core product line.
2) HP have reportedly done a deal with Hynix, and the word is that the hold-up is now business restructuring, not technological. Understandable: something like this really is a massive game changer, and you don't want to be investing in a FLASH plant at the same time. I've not heard of Hynix bigging up a new FLASH product or design in recent times.
3) Even if HP has to back off from their published performance by a factor of 10 it's still a game changer (alright, maybe DRAM is safe, but nothing else is). Passing QA with that margin in hand sounds quite plausible to me.
4) HP have been doing this for years now. About time they got something out of it!
Or maybe they are just old greybeards who have been hearing about PRAM and holo-drives for years yet they never seem to end up on the shelves?
As far as SSD? The problem is everyone assumes it can always take the place of an HDD, and that's wrong; like any tool it has its uses and its downsides. Here is what I tell my customers if they ask about SSDs:

If it is going in a mobile device where the data is NOT mission critical, or if you keep a rigorous, religiously adhered-to backup schedule? It makes sense there: lower power, though unless you drop the device often, the head parking on the new HDDs means you're buying it for the speed and lower power more than for protection.

If it's just gonna hold the OS, and again backups are made often? Makes sense here: slap in a 64GB or 128GB and keep your data on the HDD.

If it is something mission critical, where it going down is really gonna hurt? It does NOT make sense there, because I've seen too many SSDs die without warning, and even if you have a drive waiting to take over there is still the time required to place the image on the new drive and swap them out.
So it all comes down to what you are using it for, but buying anything over 128GB right now is frankly nuts, because watching the forums as well as my customers it seems like you are only gonna get about 3 years no matter what you buy, and it's not the cells, it's the controller. Just FYI, OCZ seems to be the worst and Intel and Samsung the best, but in any case it's about 3 years tops before the controller fails.
You should have bought either a Gigabyte U2442N or a Sony S13A.
Both of these weigh about 1.6 Kg and can be equipped with an SSD as well as an HDD. So, like most people, you can put your software on the SSD and store your data on the HDD. I think you can get 1.5 TB drives with less than 7.5 mm height now.
Then you clone the SSD, after you've installed and updated Windoze and all your software. SSDs do break, so it's much easier to have a clone to restore onto the replacement if it becomes necessary.
"if a hard disk starts going glitchy, I can usually back stuff off it before it's too late."
Usually isn't good enough (I've had HDs fail completely). That's why you have to have a regular backup strategy, and that's equally applicable to HDD and SSD. It's also untrue that SSDs necessarily fail all in one go, just as it's untrue that hard drives always fail softly.
Get that backup strategy sorted out, and get into the habit of running it daily. Incremental changes don't usually take too long to apply. Even if you run mirrored drives (as I do), it doesn't invalidate the need for a reliable backup, as it's always possible for catastrophic software, user or hardware errors to wreck your data.
No one is saying it's a backup strategy, but yes it's true that HDs tend (granted, not always) to signal their failure (SMART indicators for a start, physical signs of death like clicking) allowing you to get most data off it if you haven't already and total failure of disks still generally allows for some element of recovery at specialist labs (again, not cheap, but possible).
SSDs on the other hand tend *not* to signal their failure in advance and the options for recovery once they do are a lot more limited.
"Didn't Google publish a study of HDD reliability (based on the results from their data centers), which came to the conclusion that SMART indicators were mostly worthless for predicting failures?"
The results of analysis of a bogglingly large number of drives were pretty much "SMART can tell you that a drive has failed, but it's pretty much rubbish at telling you that a drive is going to fail."
The same report concluded that development of even a single bad sector is a pretty good sign that the drive is getting ready to check out.
"The same report concluded that development of even a single bad sector is a pretty good sign that the drive is getting ready to check out."
There's a reason for that. SSDs copied the HDD redundancy scheme. In both cases manufacturers keep a fair bit of spare space: for SSDs that's unused pages, and for HDDs it's spare tracks. When you hit a problem reading a sector, where you have to try to read it more than once, you map it to a spare sector and mark the old one as bad. At no point does the user know you've done that; it's all done under the covers, seamlessly.
Now that you understand that, you can see the "why" behind Google's result: by the time a user sees a sector failing, the drive has run out of spares, which means that a pretty fair fraction of the drive area has failed for some reason. Those reasons are usually cascade failures (heat-related wear in an SSD, TA contamination in HDDs, etc.). It's your hint to go out and replace the drive, folks.
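A toy model of that transparent remapping (hypothetical numbers, nothing like any real firmware) shows why the first user-visible error means the spare pool is already gone:

```python
# Toy model of silent bad-sector remapping: failed reads are redirected
# to spare sectors until the spare pool is exhausted, at which point
# failures finally become visible to the user.

class Drive:
    def __init__(self, spares):
        self.spares = spares      # size of the hidden spare pool
        self.remap = {}           # logical sector -> spare slot
        self.reallocated = 0      # SMART-style reallocated sector count

    def read(self, sector, media_ok):
        """Return True if the read succeeds from the user's point of view."""
        if sector in self.remap:
            return True           # already mapped to a good spare
        if media_ok:
            return True
        if self.spares > 0:       # silent remap; the user never notices
            self.spares -= 1
            self.remap[sector] = self.reallocated
            self.reallocated += 1
            return True
        return False              # spares exhausted: first visible error

d = Drive(spares=2)
assert d.read(10, media_ok=False)      # remapped silently
assert d.read(11, media_ok=False)      # remapped silently
assert not d.read(12, media_ok=False)  # only now does the user see a failure
```

By the time that last read fails, the reallocated count has been climbing for a while, which is exactly what SMART lets you watch.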
You have the SMART value "reallocated sector count" which tells you the number of remaps performed so far. You also have the self-tests which you should run regularly via smartd or whatever.
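On a Linux box that monitoring and the scheduled self-tests can be automated with smartd; a minimal /etc/smartd.conf entry might look like the following (device name and mail address are placeholder examples, and the schedule regex is just one sensible choice):

```
# Monitor all attributes (-a), enable automatic offline testing (-o on)
# and attribute autosave (-S on); run a short self-test daily at 02:00
# and a long self-test every Saturday at 03:00; mail on failure.
/dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03) -m admin@example.com
```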
Google says this:
"Our results confirm the findings of previous smaller population studies that suggest that some of the SMART parameters are well-correlated with higher failure probabilities. We find, for example, that after their first scan error, drives are 39 times more likely to fail within 60 days than drives with no such errors. First errors in reallocations, offline reallocations, and probational counts are also strongly correlated to higher failure probabilities. Despite those strong correlations, we find that failure prediction models based on SMART parameters alone are likely to be severely limited in their prediction accuracy, given that a large fraction of our failed drives have shown no SMART error signals whatsoever."
So it's pretty useful.
Ceterum censeo: not being able to get SMART data from USB-attached disks, the varying/obscure/proprietary interpretations, and the differences in commands and reporting between ATA and SCSI are frankly ridiculous. The disk industry is controlled by lazy jerks who can't write usable software and who should be visited by Vlad Tepes for some attitude adjustment.
I upvoted this because it doesn't really matter what you use for storage: it can and will go wrong at some stage, usually at a critical moment. I suffered an HDD crash and got zilch back from the disc; even the pros couldn't retrieve anything.
Having a backup strategy and using it is the only sensible course of action. Anything else is just foolish and asking for trouble.
I've never had a decent SMART warning. I've only ever seen SMART warnings once I've already found a drive fault, or the RAID array has emailed me about an uncorrectable error and THEN I get a SMART warning. I too have had HDDs that have merely had a failed head or a bunch of bad sectors; SpinRite has gotten the disk back to a "get the photos off dad's drive" state. In the workplace I don't even try: pull the drive out, put a new one in, and leave it as the new hot spare.
Data that isn't backed up is data you do not want.
This article is timely, my iMac's internal hard drive is currently wheezing and clicking as I try to get everything off it.
Yes, I have a Time Machine backup. It's from 3 weeks ago and I've done a bit since then. There are a handful of excluded directories including the Download directory which somehow I've managed to start using as a temporary area for a fair few important files.
I've just got a NAS box with RAID and was going to get around to putting everything on it this weekend.
You see, whatever you do, you're screwed.
By the way, six years for a hard drive isn't much, is it? I've still got a 10 year old laptop which works. I considered SSD hoping perhaps they'd be more resistant to the iMac's toasty warm innards but they're still fecking expensive.
No, toasty warm is actually pretty deadly to NAND. When you're storing 200 electrons per cell in a 125°C environment, you're lucky to keep good data for a month or so in the latest NAND technologies. There's a reason we've got strong ECC schemes to make flash more reliable, and the next step up will be LDPC codes, which are coming soon.
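The temperature sensitivity can be ballparked with an Arrhenius acceleration model; the activation energy below (~1.1 eV, a figure often used in JEDEC-style retention models) and the temperatures are illustrative assumptions, not vendor data:

```python
import math

def arrhenius_af(t_use_c, t_stress_c, ea_ev=1.1):
    """Acceleration factor between a use and a stress temperature."""
    k = 8.617e-5                       # Boltzmann constant, eV/K
    t_use = t_use_c + 273.15           # convert to kelvin
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / k) * (1 / t_use - 1 / t_stress))

# Charge loss runs thousands of times faster at 125 C than at 40 C,
# which is why a month at high temperature can eat years of retention.
af = arrhenius_af(40, 125)
print(f"~{af:.0f}x faster charge loss at 125C vs 40C")
```

With these assumed numbers the factor comes out in the thousands, which matches the intuition that a toasty enclosure is genuinely hostile to NAND retention.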
That's why Time Machine runs every hour automatically you dolt! To stop complete morons like you losing data.
If you don't connect your backup drive it can't do the backup, and excluding a directory that you are now knowingly using for storage is reaching deep into Darwin territory.
I bet you think RAID is backup as well, don't you?
If you were my employee you wouldn't be for long. YOU are a walking disaster area and should not be allowed near a computer!
Once every hour is too much: it fills a disk with a bunch of hardlinks and one or two changed files. Can I configure Time Machine to run once every day, or at some other frequency that better suits my needs? No, because Saint Jobs didn't want me to. Hence I ran it manually when I remembered. Lesson: dumbing backup software down to one big switch is not a good design decision.
Yes, using the Download folder was silly, however I got the new files off there, interestingly enough using the UNIX command line as letting Finder near it sent it into fits. I could do this because I picked up on the warning signs that the HD was on its way out.
No, I don't think RAID is backup storage, I do know what the initials stand for.
I'm a walking disaster area that's managed to keep personal files for about 25 years, converting formats from one to another as I go so the data is still readable. OTOH if I were your employee I'd be looking for another job anyway because a) I wouldn't be happy if this is how you motivate your employees and b) you're probably running your precious little company or profit centre into the ground while blaming everyone else underneath you anyway.
SSDs can brick or run out of write endurance. Of the two bricking - at least historically - seemed to be the more common real issue, although write endurance was - and remains - a major concern. That said, the concern about write endurance is, on average, overblown and misunderstood.
In current gen SSDs, there is excess capacity (i.e. you don't see it, but that drive probably has ~15% or so more capacity in it than is on the sticker) to spread those writes around. Wear leveling (i.e. moving write-heavy data around to spots of flash that have been written to less) is effective, especially when paired with that extra hidden capacity, in getting a useful lifespan out of a drive. TRIM and garbage collection at the firmware level make the degrading-performance concern mostly a non-issue. End result: as long as you're not doing something ridiculous like back-ending a 24x7 transaction-heavy server with a consumer grade SSD, your SSD should last for at least its warranty period from a write endurance standpoint.
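A back-of-envelope endurance estimate backs that up; all of the figures below (rated P/E cycles, write amplification, daily host writes) are illustrative assumptions, not specs for any particular drive:

```python
def endurance_years(capacity_gb, pe_cycles, host_writes_gb_per_day,
                    write_amplification=2.0):
    """Rough drive lifetime until rated NAND wear-out."""
    total_nand_writes_gb = capacity_gb * pe_cycles
    nand_writes_per_day = host_writes_gb_per_day * write_amplification
    return total_nand_writes_gb / nand_writes_per_day / 365

# 256 GB MLC drive, 3000 rated P/E cycles, 20 GB of host writes a day:
years = endurance_years(256, 3000, 20)
print(f"~{years:.0f} years to rated wear-out")
```

Even with a pessimistic write amplification factor, a desktop workload sits decades away from rated wear-out, which is why the controller rather than the cells tends to be the practical failure point.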
Also, with a write endurance failure of an SSD data can still be read, you just can't write anymore.
The main concern in the article seems to be that SSDs catastrophically fail (brick) more often than HDDs. I would be interested in seeing any data to that effect *with current gen drives*. That certainly seemed - anecdotally - to be the case a few years ago... but I'm not sure if that is still the case. Rummaging around the NewEgg reviews, shopping for storage, the situation seems to have improved greatly versus prior generations. That's not a proper judgement of the situation of course, but in lieu of better information... the assertion that current gen SSDs regularly blow up and are less reliable than HDDs doesn't seem to be as valid, at least, as it once was.
It seems to me attitudes to SSDs fall into roughly two camps: the big-iron or high-reliability folks who wouldn't touch them with a bargepole, and the enthusiasts with little to lose who insist they must be better regardless of the question, and cite manufacturers' claims as if they were solid, incontrovertible evidence.
4½ years ago I installed 12 32GB SSDs into regular desktop machines. Looking back at my posting history I see I cited those here back in January, when nine of them were dead. Now that figure has reached 100%. With mechanical hard drives you would have been unlucky to lose two in that time. The manufacturers were making just the same reliability claims back when those were installed as they are today. Another smaller experiment started around two years ago isn't looking much better at this point.
Like it or not, there is 50 years' experience of HDDs in the industry, and for those whose data matters, experimentation based on technology that is at best unproven and at worst highly dubious simply isn't an option. Bear in mind that even one strong advocate of SSDs has to admit to their piss-poor reliability. Personally I'm with the first respondent: in the long term, memristor-based storage is probably the better bet. Whatever ultimately replaces HDDs as the default storage option, it won't be flash; in twenty years' time flash will be akin to the Zip drive today, a stopgap product for some subset of users that ultimately led nowhere.
Re SSD camps: the two you mention are just the two most vocal. There are huge numbers of enterprise customers using SSDs just fine; they're simply not telling you (probably because the drives are working as expected, and there's nothing to tell).
SSDs 4.5 years ago were crap - they were using flash and controllers originally designed for USB flash drives, which were designed with very different use patterns and lifetimes than a desktop drive. Things have come a long way since then - I've got hundreds of consumer-grade SSDs with 30k hours of runtime on high-write ratio workloads, and while some have failed, the AFR is significantly lower than my enterprise-grade hard drives (which mostly sit idle) over the same time span. And those SSDs are many generations old - every generation the drives become significantly more durable, to the point where I haven't seen a single failure in the latest generation drives over a cumulative 600 years runtime.
You just have to know what brand (and model) to buy, which usually means ignoring the benchmark screamers.
The problem with SMART is that it is a series of numbers, not just a single yes/no value. It's much like nutritional information: you actually have to read and understand the whole thing. You can't just depend on a single NuVal-style score or some tool that tries to approximate this sort of thing.
I have never lost data to drive failures. This even includes a notorious batch of Seagate 1.5TB drives. However, I diligently pay attention to my SMART numbers and I'm aggressive about replacing drives when they start to show signs of trouble.
One day my SSD boot drive will just suddenly fail. THAT will be my first and only "warning".
At work, where I have monitored over 100 hard drives 24/7 for at least the last 5 years, I nearly always know a drive will fail far enough in advance to replace it, although this is not via the SMART PASS/FAIL status. Of the 75 or so drives we have RMA'd (I test them thoroughly before an RMA: 4 passes of badblocks minimum), only 1 single drive had indicated a SMART FAIL status, even drives where large portions of the drive were unreadable. Instead of the full-drive SMART PASS/FAIL, I look at specific SMART parameters like Reallocated_Sector_Ct, Current_Pending_Sector, Offline_Uncorrectable...
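Watching those specific attributes is easy to script. The sketch below parses the attribute table that `smartctl -A` prints; the sample output embedded in it is illustrative, and the attribute names are the standard ATA ones mentioned above:

```python
import re

def parse_smart_attributes(smartctl_output):
    """Extract {attribute_name: raw_value} from `smartctl -A` style output."""
    attrs = {}
    for line in smartctl_output.splitlines():
        # e.g. "  5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 12"
        m = re.match(r"\s*\d+\s+(\S+)\s+0x[0-9a-fA-F]+\s+\d+\s+\d+\s+\d+"
                     r"\s+\S+\s+\S+\s+\S+\s+(\d+)", line)
        if m:
            attrs[m.group(1)] = int(m.group(2))
    return attrs

sample = """
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       12
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       3
"""
watch = ("Reallocated_Sector_Ct", "Current_Pending_Sector",
         "Offline_Uncorrectable")
attrs = parse_smart_attributes(sample)
suspect = any(attrs.get(name, 0) > 0 for name in watch)
```

In real use you would feed it the output of `smartctl -A /dev/sdX` and flag any drive where those raw counters are non-zero or climbing.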
OK, I'm starting to get that "cold sweat" feeling here...
We have a Synology NAS box (a 1-bay model, in case you wondered), which we bought back in 2010. About a week ago the 2TB drive in it started making occasional "click" noises - it's a Samsung "green" disk, which is supposed to power down when it's not in use, but I haven't heard it do this in some time (should check the Synology OS settings).
I try my best to keep important files mirrored elsewhere, but if those "clicks" are a sign that the drive could be failing, I'd better start making sure that nothing I want to lose is only stored on the NAS...
That's exactly one thing on an HDD that can give you a warning, the noise.
I've shut down and mirrored two HDDs because of 'oddness' that you can hear (clicking, RPM winding up and down when it shouldn't really), and, wouldn't you know it... a few weeks later, a loud scrrrrech and a sudden BSOD, and the HDD refused to spin up after a reboot.
One tip: if you DO hear that sort of noise, DON'T DO A WINDOWS DISK CHECK. All that will do is bounce your heads all over the good bits of the platter. Take the disk out, put it in as a slave on another machine, and just mirror it to another HDD; then you can do your fault check, which will probably destroy everything anyway.
Luckily these (it happened twice over 10 years) were home PCs, with nothing but banal crap: re-installable games, personal e-mails containing nothing that I couldn't recover from the IMAP server, and personal docs and pics that were always backed up.
So, in a sense, HDDs can and do give more warning of a problem, but you can't rely on it, and you have to be physically present to hear said telltales. Hard drives in their death throes, failing in a server room with no one present, would need to be striped across multiple discs to prevent data loss... oh wait, that's exactly what happens...
SSD will never replace HDD RAIDS for secure ('secure' in this case meaning nonvolatile) storage.
And if you're at home, in front of your PC, you WILL have a better chance of physically hearing a problem before it becomes terminal / expensive.
But still, back yo' shit up. Ain't nobody needs that sort of data loss round these here parts..
Was actually just discussing this sort of thing with a colleague the other day.
I can recall the first 1GB hard drive coming into the uni labs where I was working at the time (~15 years ago, and OK, for a student definition of working). And now I have 16x that hanging off my keyring, let alone what's in all the other kit around the office.
I get that Apple nowadays is a hype machine, and it's a pretty sad state. But, be fair to Apple II, the hype was largely deserved and it was the best machine of its vintage as far as consumer targeted stuff went. Choice of monitors, massive expansion slots, colour "graphics". The engineering and thought on that machine was truly high art, and Woz kept Jobs in check so it was a function first machine (contrast Apple III).
I still have an original IBM PC 5MB full-height drive sitting in my desk drawer. I can't remember what it cost, but it was likely in excess of $1000. My biggest machine at home right now has over 12TB of internal+external storage. If I haven't messed up some 0's somewhere, then that amount of storage using these old 5MB drives would have cost in excess of $2 billion, and probably nearly as much just to house and power them up :)
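For what it's worth the zeroes roughly check out; a quick sanity check using the drive size and price stated above (decimal units, close enough for this purpose):

```python
old_drive_mb = 5          # original IBM PC full-height drive, MB
old_price_usd = 1000      # rough price per drive, as remembered
modern_storage_tb = 12    # storage in the big machine today

# 1 TB = 1,000,000 MB in decimal units
drives_needed = modern_storage_tb * 1_000_000 / old_drive_mb
total_cost = drives_needed * old_price_usd
print(f"{drives_needed:,.0f} drives at ~${total_cost / 1e9:.1f} billion")
```

Some 2.4 million drives at around $2.4 billion, so "in excess of $2 billion" is right on the money.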
My first one was a 60MB GVP Impact 2 SCSI drive that slotted into the side of my Amiga 500. I seem to recall that it had extra memory slots and cost more than the Amiga itself. It still worked when I dug it out a few years ago, which isn't bad considering it's about 20 years old and I used to hammer it for hours back in the day.
I also have a few old laptops from the mid '90s that still boot fine with their original HDs.
On the other hand, the SSD in my Acer Aspire One is dying rapidly after about 3 years. It always had awful write performance, but now it seems to randomly lose data and I fear it's not long for this earth.
I've just been handed a 1990 vintage neuronal stimulator/amplifier with a duff PSU in a 19" modular eurorack.
2 minutes with a screwdriver and a voltmeter and I'm 75% certain there's a partially blown bridge rectifier. 60p replacement part, 30 minutes with a screwdriver and a soldering iron, job done.
Last week, a 2007 model, with no sign of power. The PSU is in an epoxy sealed box seemingly fusion welded onto the motherboard which forms an integral part of the custom moulded 19" housing. Replacement cost? Around £1600.
So I agree about data being gone in a flash.
I'm just grabbing my lab coat to go tell the nice lady that her box is fixed.
In the *cough* *cough* 20-something years since I had my first hard-drive-capable PC (an Amstrad 1640 that smoked the competition with its 40MB drive), I've had exactly 2 personal hard drive failures and just 1 single drive fail in a server room environment (the RAID controllers in the server room, now there's another story!).
1 of those drives was a poxy portable hard drive that I might have been a teensy bit careless with and trodden on as well.
Yet solid state fails with alarming frequency: the CF card in my Apple Newton being the first, but SD cards for my cameras fail frequently, as did the 8GB solid-state drive in my Acer Aspire One; luckily I'd just bought a 60GB replacement (non-SSD) drive for it when it did. 60GB... not much I know, but with everything fully loaded onto it, I've still got a spare 30GB free.
What do you people do with your machines to consume so much space so quickly?
Three failures? Jesus, I've had a good half dozen over 15 years, out of maybe 30-odd I've had in use. Some 2.5", some 3.5".
Actually it's that sort of failure rate that pushed me towards SSD - I reasoned it can't be any worse, and because of it I'm already backing up religiously. My server is on RAID-1 SSD and my laptop has a beautiful, beautiful 640MB Intel SSD. My god, I could kiss it. Best IT purchase I've ever made.
... and to put it into context, where I am working, we lose at least 2-3 drives a month.
But I suppose that is because the systems I look after have more than 8000 drives in them, all either RAIDed or mirrored, and half of those will be spun down for the last time in the next month or so.
Mind you, you begin to get a bit worried about data integrity when you lose a second disk in an 8+2 RAID5 set within a few days (the organisation has a policy of no disk replacements at the weekend, and we have had two disks failing in the same set over one weekend on more than one occasion).
On the other hand, the first system for which I was a sysadmin had 32MB CDC SMD removable pack disk drives which were the size of a desk pedestal, and the first machine I had a login for had a couple of 2.5MB RK05 removable disk cartridge drives, and that served a community of about 30 people!
>> "What do you people do with your machines to consume so much space so quickly?"
> You seriously think we'll believe you haven't discovered porn yet?
Never mind porn. Just consider digital media. If you were actually a serious user of services like Amazon or iTunes you would eventually accumulate large amounts of stuff. If I were to "buy" something from iTunes, I would certainly want a local copy.
Plus there is digital photography and video both of which can generate large collections of files. Neither of these is a terribly "geeky" undertaking. They are both the domain of grannies.
So the idea that no one needs ample storage space is beyond absurd.
Even at home, in 20 years I've had half a dozen HD drive failures.
No problems yet with SSDs
It isn't a fair comparison to speak of SSDs and memory cards in the same breath - memory cards are not expected to be so durable and I wouldn't use them for what I intended to be permanent storage.
I find HDD lifetime had a lot to do with brands: Hitachi drives weren't nicknamed Deathstars just because someone liked Star Wars, y'know.
Why do we need so much space? First, as everyone says, if you have a terabyte of attached storage, it is in effect only a quarter or so of that by the time you have accounted for backups.
In the good old days text documents didn't take up much space - a floppy drive could hold a decent sized book.
Now you have graphics, images, font effects, etc, space is chewed up.
Then music, video, photos - what do you do with your computer that you don't even know what uses space?
All drives fail and usually fail in the most inconvenient (or even disastrous) way. The only solution is proper backup and, although I do take local backups to another HDD, there is a lot to be said for a reliable cloud backup solution.
I use Crashplan and I have been very happy with it. It's reasonably cheap but operates quietly in the background with no user intervention (which is particularly useful for the rest of my family who resolutely refused to back anything up no matter how many times I warned them). It would be a pain to try to restore 1.5TB over the net but I can prioritise and at least I know it's highly unlikely to go wrong at the same time as my HDD and local backup.
@lotus49 - Agreed. Crashplan for me has proven pretty good for backing up and restoration from local and remote backups. I back up the laptop both to my local NAS and to their server. The local datastore makes for really speedy restores, but should something catastrophic happen, having the offsite backups is good too. It's backing up changed files every 15 minutes, so it would be pretty difficult to lose very much data. I don't notice it running; it just gets the job done non-intrusively.
Had my laptop hard drive go suddenly *POOF* earlier this year. Replaced it, got the OS going, installed Crashplan, restored, and kept on going without too much fuss. It's also pretty cheap for the "family plan" to back up every device in the house regardless of OS (Win/Mac/Linux). Pretty happy that it does what it says on the tin, since not every product (particularly Cloudy ones) does that.
Based on my recent experience with SSD failure (father in law's went PHFFFFT with no warning at all), along with hearing horror stories from colleagues on the same, something like Crashplan that is doing "cloudy" backups throughout the day is pretty much mandatory with an SSD.
Well done Alistair! It's never cool to play golf, not even when you reach 50. I wouldn't be seen dead playing golf! Surely I can go hang-gliding or mountain biking when I'm in hell. Oh wait, will hell have compulsory golf? :-(
Of course you should still be spelling 'disc' with a 'c'. 'Disk' is a contraction of 'diskette' an IBM neologism for floppy discs. Unless you're writing about floppies, there's no justification for the 'k'.
I can usually scrape most of the bits off a "dead" HDD or indeed FDD with a bit of patience; wonky flash device, your data is probably toast, at least this side of getting it to a specialist lab and paying serious beer tokens.
Personally, I've had a few hard drives die on me by now - virtually no data lost, between a little bit of RAID and a lot of backing up across the LAN. If my laptop filesystem gets eaten by gremlins, a reformat, reinstall and restore just takes a few hours to be back where I was earlier that day, restoring over my LAN; if the backup drive there has gone too, I can pull it over the Internet from the remote backup overnight. Sadly, the habit doesn't seem to be spreading!
We've been selling these for a while now, especially in our video editing suites. I resisted SSDs for a long time, as we'd had run-ins with the early IDE/CF ones and PC/104 drives before: not really that fast, and they lasted months under moderate use even with logging etc. disabled. MTBF was too low, as was overall life expectancy.
We push out a fair number now and have done for over 18 months. We've had one failure of a cheapie Team drive that even Team were at a loss to explain (it decided it just wanted to be a 1GB drive rather than 120), and maybe a handful of controlled failures on our test rigs. So...
Failures don't seem to be any worse in terms of numbers than hard disks. Correcting for skew as we push out more HDs, they are on a par with, if not more reliable than, 2.5" drives (which do fail more often than 3.5" drives in our experience).
Failure modes are identical to HDs: these drives support SMART (not sure why people think they don't) and will give warning of an impending failure. Block remapping does the same as bad-block reallocation on a physical disk, so the warnings are there. The higher-end drives will also fail read-only long before they die totally. We've nuked a total of one test drive, and that was days old (a bad drive to start with). Yes, they don't clunk, click, screech and whirr, but if it's doing that it was too late anyway.
If you are putting all your eggs in one basket, then you'll lose data; in this respect that's poor planning, the same as with a hard drive. Using an SSD makes you no more or less of an idiot if you don't back up.
Some things will prematurely age a drive: using them as swap, cache or log drives would be the biggest culprits. Read-life-wise they should well outlast any mechanical disk. We can kill an SSD in about 3 days; I've managed to nuke hard drives in under a day, with one Samsung making its heads cease to exist and a Hitachi shattering a platter.
Cloud storage is great, until you need it. So your server is backed up to the cloud, OK, cool. So your office burns down: how long is that rebuild going to take in real terms? Assuming you can't get help and have to wait for a new broadband connection, that's a long wait; if you can get help, a few 100GB is going to take forever. A real backup? I'll have your server back in an hour or two at most. The cloud is still an answer looking for a question and a good place to store all your cat pics. It's not a business-grade backup solution, and won't be while most users connect to it by 'broadband over damp string' (tm).
Our standard setup in the editing systems is a properly sized SSD with NTFS junctions used to bring all the data to where Windows expects it. On top of this, the SSD is imaged to a separate partition on the hard drive every few days, meaning that in the event of an SSD failure the user gets told 'use F10, boot from the hard disk and we'll have another SSD out in a day'. We know this system works; it's just never been tested, as one has never copped out. The Team unit was an Asterisk box that freaked.
I design controllers for both SSDs and HDDs. Failure mechanisms are typically very different.
For SSDs what kills you is the NAND wearing out, and that's a big function of how much data you have on your SSD. The problem for SSDs is that sector-oriented writes in HDDs are still 512 bytes to 4K, while SSDs require different-sized writes that are typically much bigger, although the exact size depends on the NAND configuration. Since SSDs require full page erase-write cycles, a lot of small writes can cause page wear far beyond what you'd expect: even with wear-leveling controllers you'll be writing tons and tons of new pages if you're not careful.
That same wearing of writing small blocks causing big blocks to be written and wear out gets exponentially worse as your SSD fills up. While you can push an HDD to 80+% capacity without significant penalty (just usually seek time), pushing an SSD past 50% capacity causes the controller to have write factors well above 1.0 and your SSD will wear out significantly faster. This is a real issue in SSDs that use MLC NAND because of the lower lifetime.
I tend to agree with richard7 above: a smallish SSD for OS/apps backed by an HDD for data storage and redundancy is the right way to go. I hate trusting the Cloud as it's pitifully slow if you have a lot of data to recover, and flash tends to have too many catastrophic failure mechanisms that arise without warning. I've been doing this stuff since the start of the PC era and I've only had one HDD fail without warning, but I've seen lots of SSDs fail without warning.
As drives fill up you get a couple of different things going on. First, the wear leveling starts running out of blank pages and has to start doing garbage collection to try and make the file system more compact. Second, the more fragmented the file system, the more writes you have to do. Third, your overprovisioning starts to run out and gets less efficient. If you want the more detailed version, look up write amplification on Wikipedia; it's a tolerable introduction to the problem.
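The trend described above can be sketched with a back-of-the-envelope model. This is only an illustration, not any real controller's algorithm: it uses a common textbook approximation for a greedy garbage-collecting flash translation layer under random writes, and the 7% overprovisioning figure is an assumed example.

```python
# Rough sketch of how write amplification (WA) grows as an SSD fills up.
# Assumes a simplified log-structured FTL with greedy garbage collection
# and uniform random small writes; real controllers behave better, but
# the trend -- WA exploding as free space shrinks -- is the same.

def write_amplification(fill_fraction, overprovision=0.07):
    """Approximate WA using the rule of thumb WA ~= 1 / (2 * spare),
    where spare space is overprovisioning plus unused user capacity.
    Clamped at 1.0, since a drive can't amplify less than its own writes
    (absent compression)."""
    spare = (1.0 - fill_fraction) + overprovision
    return max(1.0, 1.0 / (2.0 * spare))

for fill in (0.30, 0.50, 0.80, 0.95):
    print(f"{int(fill * 100)}% full -> WA ~ {write_amplification(fill):.1f}")
```

The exact numbers depend heavily on workload and controller, but the shape matches the comment: nearly free at low fill levels, then rapidly worsening past 80%.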
Most of this article leaves me with a "stating the bleeding obvious" aftertaste, but one casual throw-away phrase struck a chord: "Future Proofing", and coupling that with "laptop" crystallized a desire I have.
I'm tired of having to migrate every bloody archived document I have from my old machine to my new machine for fear (realized, sadly) that the long-term storage device(s) will fail and will not be replaceable due to the mad dash into the future. I can get spares for a vintage foreign car easier than I can get a domestic computer component spec'ed 15 years ago.
How pleasant it would be if the major devices that go to make up my laptop had interfaces that were themselves guaranteed to provide connectivity despite the details of the device itself. Switch from backlit LCD to PBP (Psychotronic Brainwave Projection)? No problem, because the interfaces will cope. Hard drive to SSD to Atomic Differentiation Matrix? Ditto.
Yeah, I know it would be damn near impossible, but I would really like a portable Marshall Tucker Band device, where changing the individual parts due to the realities of time does not mandate a brand new system. Something like Lego, where new stuff works with the old because the peg/socket interface was sorted out at the start.
What are you talking about? SSDs have been using hard drive interfaces since day 1. You and the guy who wrote this article don't seem to be aware of this fact. But the vast majority of SSDs are in 2.5 inch hard drive enclosures and have hard drive interfaces. I bet you wouldn't be able to tell the difference between an SSD and a hard drive except that one is much faster and quieter and has less storage capacity.
I have three words for you: Windows Home Server. These three words make the backup process easy. What I do, in addition to the standard daily backup Windows Home Server does, is to use SyncToy to copy the most important files -- pictures and documents -- to Windows Home Server. It is my opinion that WHS is Microsoft's second best OS, just behind Windows 7.
You don't need a special product. Any modern OS installation can be a NAS. Backup software is cheap and plentiful. Appliances are cheap and plentiful. You don't need a special product from your OS vendor. You don't need to pretend that you live in some sort of Microsoft Walled Garden that doesn't really exist.
This is not a complicated thing but certain people like to perpetrate the mystique when it comes to this stuff.
SSD for the OS only seems like a good idea to me, though you have to be careful about the default locations of MySQL databases.
I have a host which dual-boots Windows (for games) and Linux (for everything else), and an SSD would be great for that. While some games could benefit from a disk speed increase, I suspect it's nothing that a decent RAID 5 disc array couldn't sort out for me.
Apart from conjuring up some quaint memories, this article is complete nonsense - not the usual Register standard. Common sense should tell you that a mechanical device spinning at up to 15,000 rpm is inherently much less reliable than a solid-state device - hence iPads, iPhones and the like are remarkably resilient. Hard drives fail all the time and no, they don't warn you - idiotic statement - we and our customers have experienced many catastrophic hard disc failures. Following many painful lessons I can also tell you that RAID is useless for protection as well - it gives a false sense of security: when one drive goes, another often does too. Backups are also quite often useless unless regularly tested with restores - most don't do this. I'm looking forward to the day hard drives become obsolete.
Common sense might tell you that a mechanical device spinning at umptytum RPM is less reliable than solid-state. If you're slinging that mechanical device around in your handbag, or clipping it on your arm to go for a jog, then sure.
But leave them both in an immobile PC, and bets are off. If you're not aware of manufacturers specifying expected write-cycle lifespans for flash, then you don't know about the main cause of failure of one of these. If in addition you are using the SSD as the main (or only) drive and you haven't got temporary files being put anywhere else, then you're not aware of the main reason you'll burn through those write-cycles. In short, if your only comparison is mechanical-vs-solid-state then you don't know enough to have an opinion on the subject.
Solid State DRIVE? Surely the word "drive" is derived from the fact that we had floppy disks and then a floppy disk drive that contained a motor which spun (i.e. drove) the disc? The Hard Disk Drive still had a motor which spun/drove magnetic discs.
Therefore, should it not just be called "solid state storage", or an individual unit a "storage module"? Any better suggestions?
I've swapped out the internal optical drive of my laptop for a caddy that holds a spinning disc. I have a (only 128GB!) SSD as the boot/applications/user folder/working files drive. The spinning disc is partitioned - with one partition being a target for a scheduled daily clone from the SSD, and the other being a place to store larger files (eg media).
I think I'm having my cake, and eating it - but I'll wait for the inevitable correction...
I thought about buying an SSD for a while, but I came across the same stuff said in the article and decided not to. Plus there are some advancements in hard drive technology, such as helium being put into the drive case, allowing ridiculous spindle speeds and increased reliability. I can't wait to see the next new tech in hard drive technology, because SSDs can't really evolve much beyond this other than their controllers and storage size.
I have this tried and tested setup for building my backup servers:
OS = 2 x 64GB SSD RAID 1 - currently OCZ Vertex 4 (5 Year Warranty)
Data = 4 x 2TB HDD RAID 10 - currently Seagate low power 5400 RPM
Using an LSI RAID card. Wouldn't touch Marvell with a barge pole, and their chipset is on the majority of motherboards these days. In fact, I hate them!
While SSDs had a nice drop in price this year, they are still way off what they should be, and what they need to be, in order to thrive.
64GB el cheapo SSD, installed in someone's laptop.
When it failed it took the laptop's HDD controller with it, yet for some reason the drive worked fine.
Put it in another machine and got the data back but can I trust it again?
Symptoms: laptop BSOD'd with no warning when used a lot, yet all tests passed OK; then it just failed to boot one day with "Disk Read Error"...!
Tried it with a Linux boot disk and the HDD wasn't showing up; put the old non-SSD drive back in and it also didn't show.
Tested the PATA port and nothing abnormal but simply didn't boot.
No sign of drive in BIOS, concluded that the board was shot.
Now have a nearly identical board with the same exact fault, yet the (SATA) optical interface seems fine.
Put me right off SSDs, as it cost me a lot of money to fix and the drive couldn't be returned due to it being 2 months old and off "insert unfavourable supplier here".
From the article:
"What's playing on my mind, though, is that the SSD unit in my notebook might not be easily replaceable, or even removable at all."
Uh, what notebook do you have that your SSD unit isn't easily replaceable?!?! They are even easily replaced in a MacBook Air if you have the right screwdriver to take the bottom of the case off.
"And Flash tech has a fixed lifetime, or at least decreasing performance over time, so buying lots of it doesn't necessarily prolong its usefulness if you use most of it very often."
Well, let's assume an absolute worst case scenario where your flash cells are crappy and give out after only 10,000 cycles. That means you can write 5000 terabytes to your laptop's 512 GB wear-leveled drive before it gives out. If you write 10 GB to it per day (seems like a lot), it will only last 1400 years. Are you worried about this for some other reason than uninformed drivel that people post on internet message boards?
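Spelling that arithmetic out (a quick sanity check of the commenter's figures; it assumes perfect wear leveling and a write amplification of 1.0, which real drives won't achieve):

```python
# Sanity-check the endurance estimate: a 512 GB drive whose cells
# survive a worst-case 10,000 program/erase cycles, written at
# 10 GB/day. Idealised -- write amplification would cut these
# numbers down in practice.

capacity_gb = 512
pe_cycles = 10_000
daily_writes_gb = 10

total_writable_tb = capacity_gb * pe_cycles / 1000           # in TB
lifetime_years = (capacity_gb * pe_cycles) / daily_writes_gb / 365

print(f"Total writable: {total_writable_tb:,.0f} TB")
print(f"Lifetime at {daily_writes_gb} GB/day: {lifetime_years:,.0f} years")
```

The exact figure comes out at 5,120 TB and roughly 1,400 years, matching the comment's rounded numbers.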
Also, SSDs no longer slow down after you've filled them up, thanks to TRIM and/or overprovisioning. That was a problem a few years ago but not anymore.
By far the biggest point of failure with SSDs is going to be the controllers, which have certainly had their problems in the past. Please worry about the controllers instead of propagating drivel about flash write cycles.
Nice assumptions, but not real. Write amplification is a real problem, especially with a drive that's even somewhat packed. Even the best SSD controllers can't keep the write amplification below 1 at ~30% capacity. By the time you hit 80% capacity you're talking monstrous amplification factors for even relatively sequential writes.
Example: I write a 512 byte sector. In a HDD, I write the sector. Done. In an SSD I have to read/erase/write the whole page (~64K or more). That's not including any remapping that has to take place for moving the other sectors on the page.
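The worst case in that example works out as follows (64K is the illustrative page size given above; real page and block geometry varies by NAND part, and this ignores the extra remapping traffic also mentioned):

```python
# Worst-case amplification for the scenario above: one 512-byte
# sector write forces a read/erase/program of a whole flash page.

sector_bytes = 512
page_bytes = 64 * 1024  # illustrative; larger pages make this worse

worst_case_amplification = page_bytes // sector_bytes
print(f"Worst-case write amplification: {worst_case_amplification}x")
```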
Then there are problems with longevity (cells need to be refreshed periodically, since flash really isn't a permanent storage mechanism and cell contents degrade over time), etc. There's garbage collection, all that junk that has to go on in the background on an SSD that doesn't go on in an HDD.
All told, flash isn't a technology to make a long lived drive. It's fast, and it's useful in some applications, but you have to be even more paranoid about it failing and give it a lot more margin than you'd give a HDD.
My understanding is that current SSD controllers use an intermediate "journaling" method of storing many small writes so that they are written sequentially and erase as few blocks as possible until the "journal" becomes full and the writes must be flushed. This means that frequent writes to the same block (common for log files, database files, etc.) only basically require one reflash instead of one reflash for every write, i.e., a value less than 1 for write amplification. There are other techniques that reduce write amplification like compression. SandForce claims that in best case scenarios, they get 0.14. Of course, if you are just going to write millions of random blocks to random locations, I'm sure it will decrease the longevity of the drive, but that's not how most software operates.
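The coalescing effect described above can be shown with a toy model. This is not any vendor's actual algorithm (and it ignores the compression that SandForce-style controllers also use); it just illustrates why repeated writes to the same logical block cost far fewer flash programs than one-flush-per-write:

```python
# Toy model of journaled write coalescing: writes accumulate as
# "dirty" logical blocks in a journal buffer, and only cost a flash
# flush when the journal fills (one flush per *distinct* dirty block),
# instead of one erase/program cycle per individual write.

def flushes_needed(writes, journal_capacity):
    """Count flash flushes for a stream of logical block IDs when the
    journal can hold journal_capacity distinct dirty blocks."""
    dirty = set()
    flushes = 0
    for block in writes:
        dirty.add(block)
        if len(dirty) >= journal_capacity:
            flushes += len(dirty)  # flush every distinct dirty block
            dirty.clear()
    return flushes + len(dirty)    # final partial flush

# 1,000 appends hammering the same log-file block: naive cost would be
# 1,000 flushes; the journal collapses them to a single one.
print(flushes_needed([42] * 1000, journal_capacity=64))
```

Conversely, a stream of writes to all-different blocks gets no benefit, which is the "millions of random blocks to random locations" caveat at the end of the comment.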
I think I've got into the habit of assuming that storage devices are more reliable than in reality - probably because I've been relatively lucky in twenty-odd years of owning PCs.
I can't remember experiencing any hard disk failures in that time, but I *have* had at least two or three flash drives suddenly fail to work one day (thankfully with no important data on them). In my book, I treat USB flash drives like floppy disks of old - don't keep anything of value on them for long, and if I must, I try and make sure there's at least one other copy of the file somewhere.
One SSD I haven't much given thought to, is the 8GB one in my Eee 701SD netbook. I bought that machine three years ago, and it wasn't new then ("refurbished"), so it's not impossible that one day I'll start up the Eee and D'OH! (to use the vernacular). Even a replacement SSD for the 701 would probably cost £60 or up - most likely more than the machine is worth...
And the elephant in the room: our 2TB NAS. How do you back up a machine possessing more disk space than all your other computers put together? (I'm thinking "distribute the folders across the other computers", but even that isn't ideal... my brain hurts :-( )
This is something I've known for years. My old Creative Muvo 64GB MP3 player was the first device to make me stop trusting flash chips. I ran a small server from it, leaving it plugged into the computer, and within three days it became as volatile as RAM, once I unplugged it from the computer the contents disappeared!
SSDs and flash disks are self-contained. The chips and controller are on the same board, so for any repair it is likely vacuum rework soldering is needed, for either controller replacement or flash chip recovery using a Willem-type chip programmer - that's if the chip hasn't completely disintegrated. Why do you think RAM in a computer goes bad so quickly? So many thousands of transactions (read and write operations) per minute wear it out in no time, and SSDs are the same.
Hard drives are much easier to repair, as 99% of the time it's either the control board that's playing up, which can be replaced if you match the firmware correctly since it is separate from the drive itself, or it's bad sector trouble due to the drive needing remagnetising using HDD Regenerator. At worst it'll be physical damage due to a bad knock where the heads have hit the platters; that's the only fault that isn't easily fixed. Physical damage to platters or heads will need the platters taking out and the sectors dumped, or the head stack replaced and recalibrated by a specialist.
A hard drive's platters thankfully can't do what a flash chip (like my old Muvo) can in catastrophic failure and turn into RAM, where all data perishes upon power loss. 99% of the time a hard disk's data is safe and can be recovered, even if it does cost £700 at a specialist to remove the platters and do a raw sector read.
The exception to that is of course if you put a big magnet over your hard disk, that will wipe out data instantly and wreck the track/cylinder patterns! I've done it, it's fun to listen to the drive clickety click after that! It was a shitty Seagate and I hated it anyway! Don't try it at home kids, I'm a data recovery guy, I know what I'm doing and what I'm talking about, as this post shows :)
I too once worked on a Purup Typesetting system designing custom business forms stored on 8" floppies before (EGADS) we got the 500MB mini - then it became the weekly process of archiving off all of the old completed jobs to make room for the next week of work. And we considered it a promotion to be responsible for this painfully boring task of swapping 8" floppies for backups.
I'd like to live in reality.
Having 256 GB in my laptop, a USB dock for 3.5" disks and a heap of cheap 2 TB disk drives, and doing a backup of my whole disk each day with rsync and my own backup script (normally takes 15 min and I don't even notice it), I feel quite comfortable. One does feel comfortable when one is a Linux user.
"if a hard disk starts going glitchy, I can usually back stuff off it before it's too late."
Bit like the old wooden pit props, which would start creaking before they snapped, giving miners time to get out of the way. They were replaced by steel props, which gave no warning and so weren't liked.
Biting the hand that feeds IT © 1998–2019