Re: As much as I
By the time you're down poring over the source code, the manufacturer will no longer be selling the device in question.
If they're found guilty, they're guilty. No amount of pleading how their service is oh-so-great will help them.
It's said that it's easier to ask for forgiveness than for permission. They're about to find out that's not the case here. They will be outlawed and fined, and it's guaranteed that le gouvernement will stonewall and never approve Uber or similar services.
They could have started by talking to the government, asking for regulation, and confirming that there was nothing illegal or illicit in their activities -- and then launched the service, to the astonishment and powerlessness of taxi drivers and local councils.
They could have gone to the EU Commish and asked for Uber to be regulated EU-wide; then they'd have had the backing of at least some directives that they could throw in the face of the French, and a claim to good will.
Now, they've thrown good will out the window: they barged in, flying in the face of regulations, and they expect a pat on the head? They'll be lucky if they avoid jail time and their drivers don't get fined [too much].
What will happen next? Uber will appeal to the EU, but since they've demonstrated they're not above breaking the law to get their point across, nobody will listen to them. That way, they really shot, nay, blew off, both their feet in this pointless exercise.
If you need to work on some files from home, isn't it easier to take a pile of work home on Monday and return with it on Friday? Or have a courier pick up a flash drive and deliver to the office?
At 19 Mbps, ten gigabytes still takes about 1.2 hours to upload. If you have a hundred, you're looking at nearly 12 hours. With a flash drive (or an external USB disk) at, say, 40 MB/s sequential read/write speed, those 100 GB take about 42 minutes to copy on or off (4.5 minutes at 400 MB/s); add 2-3 hours for the courier plus the two copies, and you're looking at about 4-5 hours (or 3 hours) for delivery of the complete content.
There are limitations, of course, and not all jobs will support that approach, but it's a solution for some at least.
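The sneakernet arithmetic above is easy to check (a sketch, assuming the same illustrative figures: a 19 Mbps uplink, 40 or 400 MB/s flash media, a copy at each end, and around 2.5 hours for the courier):

```python
def upload_hours(gigabytes, mbps):
    """Hours to push `gigabytes` over an uplink of `mbps` megabits per second."""
    return gigabytes * 8e9 / (mbps * 1e6) / 3600

def sneakernet_hours(gigabytes, mb_per_s, courier_hours=2.5):
    """Copy to media, courier it, copy off at the other end."""
    copy = gigabytes * 1e9 / (mb_per_s * 1e6) / 3600
    return 2 * copy + courier_hours

print(f"100 GB @ 19 Mbps upload:      {upload_hours(100, 19):.1f} h")
print(f"100 GB @ 40 MB/s + courier:   {sneakernet_hours(100, 40):.1f} h")
print(f"100 GB @ 400 MB/s + courier:  {sneakernet_hours(100, 400):.1f} h")
```

Even at the slower flash speed, the courier wins by a factor of about three for 100 GB.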
To be honest, not really. Core 2 Duos are at least 65 W TDP, so you'd need an expensive chassis for that (like an Akasa fanless case). The Core 2 Duo also has no integrated GPU and no modern media acceleration (some support hardware DVD decoding, but nothing HD), and none of its contemporary motherboards had a reasonable integrated GPU, so the CPU will be running at top frequency almost all the time. You'll be hitting that 65 W TDP constantly (add ca. 25 W for the motherboard, plus a few watts for memory -- it all adds up), and if you plan on watching anything in 1080p, expect some stuttering (plus a known Intel bug with frame skipping).
At some point, it's cheaper to start from scratch. I already mentioned AMD's AM1 platform. There's also AMD's A4-5000, integrated with some mini-ITX motherboards (including several fanless solutions), but it runs at 1.5 GHz, although it also has 15 W TDP, not 25 W like AM1 (although you can upgrade AM1 if there are new SoCs in the future).
Obviously, there are also Intel solutions. If you don't plan on playing games, there are some nice Atom and Celeron platforms that can also run fanless. To be honest, I don't know much about Intel's current lineup, so I can't recommend anything specific.
You will want something that runs from an external power brick, since you don't want to add extra heat sources to the inside of the chassis.
AMD's AM1 CPUs (all the way up to Athlon 5350) can run with passive cooling. In fact, they can run without a heatsink (they'll get hot, but not scalding), adding a cheap heatsink solves that.
@Esme: Oh, I'm not questioning the greenhouse effect at all. I'm questioning the contribution to greenhouse effect from carbon dioxide. Climate boffins are suggesting that if we keep pumping CO2 into the atmosphere, the temperature increase will stop supporting life on Earth and that we need to take action now.
Supposing that CO2 concentration will increase by 100 ppm over the next 50 years and that has terrible implications, it would mean that carbon dioxide is an extremely potent greenhouse gas.
However, the opposite would be true as well -- dropping CO2 concentration by the same 100 ppm would also have terrible effects. Yet we went from 300 to 400 ppm in the course of less than a century and almost nothing happened*.
What would happen at 0 concentration of CO2 in the atmosphere (other than plants dying, that is, which would actually happen at much more than 0)? If the potency of CO2 were so great, at zero concentration Earth would completely freeze over. It wouldn't? Then it means there are vastly more potent greenhouse gases and messing with CO2 is simply something that scientists know is perfectly safe since we can't screw it up.
*) Yes, I'm aware of climate inertia. At the same time, I see graphs showing a steady increase in temperature starting in mid-19th century and correlated with CO2 concentration, implying causation and very low climate inertia. You can't have it both ways. There either is inertia in which case we're doomed anyway, since the effects of current concentration are unavoidable and lowering CO2 concentration does sod all, or there's no inertia and our efforts then make sense in theory, but in practice, it's impossible to show any great impact from carbon dioxide.
Sure. A mass extinction. Something that anti-humanity movements have been advocating for a long time, assuming it would mean humans would be among the ones to go.
I saw data showing that carbon dioxide concentration went up from 300 ppm to 400 ppm since the Industrial Revolution. That's about 140 billion tonnes more carbon in the atmosphere, but we burned a lot more coal than that, not to even mention oil or gas, so the ecosystem is already compensating for the change (or possibly gaining, since such impressive compensation implies explosive growth?).
At the same time, summer temperatures didn't suddenly go from 300 K to 400 K, so it's obvious that even if the greenhouse effect exists, it's difficult to measure its magnitude, and it's possibly exaggerated.
Surely enough, scientists 25 years ago predicted that if we don't reduce our carbon dioxide emissions, the glaciers will be all melted by 2005, and average temperatures would reach 70 °C by 2015. By 2050, Earth would have the same atmosphere as Venus. It scared the shit out of me as a kid.
Not only have we not reduced emissions (if I'd realized that then, I'd have been scared senseless), but the emissions are constantly increasing (I'd probably have developed paranoia and depression if I'd known about it), and the glaciers are still there.
I'm sure I'm not the only one that heard those predictions. How do climate change alarmists expect people to listen to them if their lower-range predictions are not fulfilled, never mind the scary stuff?
"Throughout this period, levels of CO2 were four to six times higher than the levels we observe today, but the findings do indicate that if we continue our present course of human-caused climate change, similar conditions could develop and suppress equatorial ecosystems."
So, Earth can support 4-6 times higher levels of CO2 in the atmosphere and the temperatures will still not reach Venusian levels? Basically, after we burn all the coal, oil and natural gas, Earth's atmosphere will be like in the Paleozoic/Mesozoic era?
Really a far cry from the alarm that climate scientists are raising.
2 TB for 200 bucks? I'd take five in ordinary SATA form factor at SATA 3 Gbps read speeds, and I'm okay with writes at 100 MB/s.
"The debate, from anyone that understands the theory of climate science, should be the idea of climate inertia - any alternation in behaviors regarding the climate will take years, if not decades, to aggregate with enough effect to appear to have an measurable difference."
And yet CO2 concentration growth from the 80s to today is charted along with the average temperature increase over the same period (with axes specifically ranged for the two lines to match). Or CO2 concentration is charted from the mid-19th century alongside a steady temperature increase.
The article mentions the 1990s.
And yet, nobody managed to win that million bucks from Oracle for building a generic solution that would at least meet the capabilities of an Exadata. Yet? Maybe never. If you can get better performance for the same price with an appliance, why bother with building a solution yourself that you would have to maintain later, not to mention Oracle will simply reuse whatever unique optimizations you used on the next Exadata.
So that statement about performance tradeoffs is not really true. Oh, so you mean *an analyst* said that? Well, never mind then.
They were supposed to reduce the amount of hazardous substances in electronics, not replace them with others.
It's only because of a misunderstanding of how tape is written -- the tape inventory is read on every mount and written every time there's a write to tape, so this region of tape sees heavy use. Without the inventory, the data may be safe, but seek times will be dreadful.
Hence cartridge memory in an RFID chip, but it's not a magic bullet, since EEPROM also has its limits.
Quite a lot more than a few times, in fact. That alone probably kills the idea of an optical archive. Well, that, and the fact that a lot of people have had very bad experiences with optical media.
There are two small high-density rackmounted libraries on the market. BDT makes one for a number of OEMs, and Oracle makes SL150.
BDT's density is 8 cartridges in 1 rack unit, and then 24 cartridges per 2 rack units in 2, 4 and 8 unit form factors (top density of 12 carts/RU).
SL150 scales from 30 cartridges in 3 rack units to 300 in 21 units (top density of 14+ carts/RU).
At LTO-6 cartridge density, this adds up to 30-35.7 TB/RU.
This Sony contraption looks like 4 rack unit height, so it's only 12.5 TB/RU. And mind you, those high density bluray disks are not cheaper than tape per byte.
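For the record, the density figures are just division -- LTO-6 native capacity (2.5 TB per cartridge) times cartridges, over rack units:

```python
def tb_per_ru(cartridges, rack_units, tb_per_cart=2.5):
    """Storage density in TB per rack unit at LTO-6 native capacity."""
    return cartridges * tb_per_cart / rack_units

print(tb_per_ru(24, 2))    # BDT 2U form factor -> 30.0 TB/RU
print(tb_per_ru(300, 21))  # SL150 fully scaled  -> ~35.7 TB/RU
```

Which is where the 30-35.7 TB/RU range above comes from.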
Because it's hard for non-tech companies to grasp certain key concepts. Thanks to Hollywood, they have very flawed analogies and terrible understanding of security exploits.
They work on a safe wall analogy. Suppose it takes two weeks to drill a hole in a safe wall. Day after day, the safecracker drills away at it, and after 14 days, he succeeds. To them, a hacker does mostly the same thing. There is an attack, security becomes weaker, and after a certain number of attacks, they're exposed. So they "harden" their systems to sustain more such attacks, much like a bank might install thicker walls, electrify them, and so on.
They also imagine that their IT security team (if they have any) actively engages hackers to mitigate such attacks (again, thank you, Hollywood).
So for them, there's no concept of "next time". They don't understand that their systems have exploits that completely circumvent every safeguard they have in place. And it's completely acceptable to them that a hacker whittles away at their systems. After all (another set of flawed analogies):
- it's just one person;
- even if he succeeds, the damage will be limited;
- nobody else will be able to use his exploit.
I realize that hacking is not ethical. I realize there are no "victimless crimes". I can't say I wish they were hacked over and over until they learn. I won't even say they deserve being hacked.
However, pride goes before the fall. They leave themselves completely open for exploitation. There will be people who take advantage of this. The next hacker that comes along may not be a white hat, or even an off-white hat. And the inevitable next exploit may crush the company completely.
How thick is an average trash bag? What is the minimum and maximum thickness for trash bags? I know that below a certain thickness, things labeled and sold as trash bags are not fit for purpose (unless the purpose is to bag them in trash).
Old Terra can't support life anymore, I see?
Suppose a hacker gains access to the infusion pump. What can he do? Either increase the flow or stop it. Increasing the flow is only possible up to a certain level, which is probably not going to cause much more than some discomfort. Stopping the pump completely will be quite obvious, so if a patient is told to walk around with the pump for an hour or two and the medicine still isn't fully administered by then, the doctor is going to know what to do (I'm sure they deal with pump failures from time to time).
And what danger is there, anyway? All the contents of the bag were going to end up in the patient one way or another. It's not like the pump is plugged into a pipework of all the drugs available in the hospital, so it's not like an exploit is going to put arsenic or cyanide in the mix.
Until hackers figure out a way to deliver chemicals over WiFi, I think it's fair to say we're safe.
The vulnerability is certainly alarming. Not because of the potential risks, but because of the carelessness.
Why does the chart y-axis scale go all the way up to 120%? Any value on that chart can only be between 0 and 100%.
On the other hand, maybe I should be thankful that they nailed the bottom down, I've seen percentage charts go from -20% to 120%...
Eisenhower once remarked: In preparing for battle, I have always found that plans are useless but planning is indispensable.
Disclosure: I work for Oracle Tape.
You didn't search well enough, or you'd be aware that what you actually needed is a library supporting SDLT or SDLT320 drives (assuming the user had DLTtape IV; God help you if they had DLTtape III or earlier, but still doable).
In Oracle's case, that would be an SL500 or an L180/L700 -- both are end of life, but recent enough that you can actually find drives for them in good condition. Those first- and second-generation SDLT drives are usually in very good working order, and assuming you're migrating the data to disk or to new media, even tens of thousands of tapes isn't a scary prospect, since DLTtape IV held at most 40 GB natively per cartridge.
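To put "tens of thousands of tapes" into perspective, a back-of-envelope sketch (the 40 GB per cartridge is DLTtape IV native capacity; the 4-drive count and 16 MB/s per-drive read rate are illustrative assumptions, not specs):

```python
def migration_days(tapes, gb_per_tape=40, drives=4, mb_per_s=16):
    """Rough wall-clock days to read a DLT library dry.

    The per-drive throughput is an illustrative figure; real SDLT
    drives reading legacy DLTtape IV media will vary, and this
    ignores mount/seek overhead entirely.
    """
    total_mb = tapes * gb_per_tape * 1e3
    seconds = total_mb / (mb_per_s * drives)
    return seconds / 86400

print(f"{migration_days(20000):.0f} days with 4 drives running flat out")
```

Months of continuous streaming, in other words -- tedious, but entirely doable as a one-off migration project.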
I don't know which DLT generation your user had, but even if you found a DLT-based library, you'd probably have problems with finding HVD SCSI HBAs to attach to the drives. The real reason you could not find a library sporting DLT drives is because it's been end of life for so long that it's obsolete by all modern standards and 99% of customers moved on.
Even if it was a problem getting a Storagetek library for your case, I'm fairly sure Quantum would jump at this chance.
About the retention periods -- you seriously think that using disk drives is going to solve this? Suppose you put it on a MAID array today using state of the art 16 Gbps Fibre Channel, 40 Gbps Ethernet or 3rd generation SAS. Are you sure you're going to be able to access that array in 15 years?
- It's impossible to access first generation 1 Gbps Fibre Channel arrays with 8 Gbps HBAs and switches. That obsolescence came in just 12 years. It was impossible to find new disk drives to replace failing ones 7-10 years from introduction of these arrays.
- It's not possible to connect 10 Mbps Ethernet to some 1 Gbps switches, and to no 10 Gbps switches. Not to even mention coax standards. It's probably easier to find legacy consumer kit for this and step down through switches supporting lower speeds, but if you said that was your solution for future access to that array, you'd be laughed out of the data centre.
- Like Fibre Channel, SAS only supports negotiating a link down to two generations back. Next SAS generation will not negotiate a link with first generation SAS.
And now let me go over your points:
1. We still support 9840 tape drives, originally introduced in 1998, in new tape libraries (SL8500 and SL3000). Heck, we still support 9490 tape drives, introduced 20 years ago (although the libraries in which they are used are end of life). New T10000D drives still read cartridges written by the T10000A drives introduced in 2006.
2. That's completely irrelevant. How is that an issue with tape? It's exactly the same regardless of whether you use tape, disk, flash or anything else today.
3. That hasn't been a problem for ages. With 9840, you can access over 50% of blocks on tape within 8 seconds of mount, and any block on tape within 20 seconds. If you know which file mark you're looking for, it's stored in the media information region. The same applies to all modern tape formats, which take at most 90 seconds to spool the whole tape if the data you're looking for turns out to be at the end of media. Serpentine writing means the data is spread more evenly across the tape.
With LTFS, it's even easier, since the tape is presented to the OS as a file system -- there are two partitions, one holding the index (the file layout), the other the actual data.
True, it's still impossible to read data backwards, so if the file is stored over the entire length of tape, but starts at the end of it, it will still add 90 seconds overhead to reading the file.
4. It's called Storagetek Tape Analytics and it's meant to do exactly what you say here -- mount a tape at preset intervals, read the media information region and either do a full tape read or read random bits to verify it's not degrading too much.
Re-writing will occur if the margins are getting too thin.
And there's now Xcopy to seamlessly move data from one cartridge to another without host involvement. There's a lot of exciting stuff happening that you're completely unaware of.
How about efficient physical delete on a disk drive? Oh, not possible? Again, how is that a tape problem specifically?
Efficient physical delete on tape? A few seconds in a degausser does the trick. The tape is completely blank and unreadable, including the servo tracks, making it completely impossible to read from.
And with hardware-based encryption, there's really no reason you should worry about logical deletes.
5. Again, it's not a problem specific to tape. If employee attrition, changing priorities and laziness allow anything in your organization to get out of control and processes to be ignored, you have much bigger issues at hand than worrying about tape obsolescence.
6. So, disk drives don't deteriorate, huh? They do, and much faster than tape since magnetic domains are much smaller. Seriously, if you only write to tape once (as should happen in a proper archive), the retention period is way more than the guaranteed 30 years.
7. Disk drives don't dedupe, either. So what? There are three approaches to deduplication on tape:
- Don't dedupe. Retain integrity in every object/file you store. That prevents any problems with being unable to read from tape in the future.
- Write raw data from your deduplicating arrays to tape. It's the most efficient method, but only if your array supports that and you're sure the manufacturer will be around when you need to restore the data. It probably makes sense for short-term backups when you don't lose track of data and would need to restore specific portions of your storage, but definitely not for long-term archives.
- If you have a lot of similar files (that dedupe well), offload them to tape in a single compressed image -- or in multiple images, with the deduped blocks stored in line with the rest of the files. It's a compromise and it requires some capability to read the data back in the future, but it can work if your archive assumes you'd only ever restore most of the files at once, or when it's done well and no restore references more than one tape.
Anyway, deduplication is a foolish solution for a long-term archive. If you did dedupe, you'd quickly have a situation where restoring a single file from archive involved reading bits and pieces from a number of tapes ranging from one per file to one per deduped block. And if you somehow lost the unique copy of some particular block common to all files in your storage system (as happens in improperly configured deduplicating solutions), you'd lose all data.
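A toy model makes the multi-tape restore problem concrete (a sketch under simplified assumptions: files split into fixed-size blocks, duplicate blocks stored only once on whichever tape first saw them, and new blocks placed round-robin across the tape pool):

```python
import random

random.seed(1)
TAPES = 20
# Each file is 100 blocks: 50 drawn from a shared pool (good dedupe
# ratio) and 50 unique to the file.
shared_pool = [f"shared-{i}" for i in range(500)]
block_location = {}   # block id -> tape number holding its only copy
next_tape = 0

def store_file(file_id):
    global next_tape
    blocks = random.sample(shared_pool, 50) + [f"{file_id}-{i}" for i in range(50)]
    for b in blocks:
        if b not in block_location:              # new block: write it out
            block_location[b] = next_tape
            next_tape = (next_tape + 1) % TAPES  # round-robin placement
    return blocks

files = {f: store_file(f) for f in range(200)}
# Restoring one file means mounting every tape that holds one of its blocks:
tapes_needed = {f: len({block_location[b] for b in blocks})
                for f, blocks in files.items()}
print(f"restoring a single file touches {min(tapes_needed.values())}-"
      f"{max(tapes_needed.values())} of {TAPES} tapes")
```

Under these assumptions every single-file restore ends up mounting the whole pool -- exactly the fan-out the paragraph above warns about.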
8. Here's a news flash: disk drives are not cheap. Enterprise drives are still over 10 times more expensive than tape per byte. And for enterprise products (like Oracle's T10000D), the cost of storage per byte is lower than that of the cheapest consumer hard drives today. An 8.5 TB cartridge costs about the same as a 1 TB disk drive.
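The price comparison is again just division (the $100 street price below is an illustrative assumption for both the 8.5 TB cartridge and the 1 TB disk, since the point above is simply that they cost about the same):

```python
def usd_per_tb(price_usd, capacity_tb):
    """Cost of storage per terabyte."""
    return price_usd / capacity_tb

tape = usd_per_tb(100, 8.5)   # 8.5 TB cartridge
disk = usd_per_tb(100, 1.0)   # 1 TB consumer drive
print(f"tape: ${tape:.2f}/TB, disk: ${disk:.2f}/TB, ratio: {disk / tape:.1f}x")
```

At equal unit prices, the cartridge is 8.5 times cheaper per byte, before even counting power and cooling.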
Let me rephrase what you said: In a world of very cheap tape, putting EVERYTHING on disk is just plain STUPID.
And to rephrase your last paragraph: Any IT professional that doesn't examine the virtues of every available solution should be tarred and feathered. Horses are definitely nice animals, and they shouldn't be used to execute anyone.
webmail not google
Guess what, the results are only for Gmail. Which is obvious, since Google, in their infinite wisdom, removed boolean search operators (not to mention literal search).
Minerals are fungible. Services are not.
Plus, it forgets that Russia includes the Kaliningrad Oblast, Sakhalin and the northern islands, and it shades Portugal blue although Portugal is not mentioned anywhere in the document (which is puzzling in and of itself).
And a write hole and performance hit when writing small blocks.
If anything, Hamming coding at the block level or mirroring (RAID 1) would make more sense -- more expensive, but more reliable. It could still be cheaper than a very reliable cell.
It's year-on-year, or it's useless. The previous quarter appears to have been an anomaly, so frankly, anything will look lackluster compared to it.
Mojang knew it and didn't do anything about it? Could Mr. Persson have been too busy counting his billions to actually care?
He basically got paid for a defective product. He may well have willfully withheld this information from Microsoft to conclude the sale.
What now? What could Microsoft do? Sue them? Get some money back? I'm really curious, I don't think it's going to go down too well.
What's stopping registrars from creating .suck, .blows, .blow, .isapieceofshit and so on?
@dan1980: Apostle Paul reasserted this in Corinthians, so it's not just the Old Testament. And if you follow the New Testament, you would understand how the Old Law was reinterpreted in light of Jesus's ministry.
Note that this is about marriage. As such, the Bible says marriage is limited to heterosexual couples. And sex outside marriage is a sin regardless of whether it's homosexual or heterosexual. You can ignore this, but don't force Christians to ignore it.
Otherwise, in your pursuit of tolerance towards homosexuals, you are becoming intolerant of Christians. And nobody said they wouldn't cater to homosexuals specifically, but rather wouldn't cater a gay/lesbian/etc. wedding. This is a marked difference.
Suppose a Catholic is running a business (I don't want to meander around specific approaches to divorce in various Christian denominations). Not only should they have the right to decline catering a same-sex wedding; it goes further, including refusing to cater a wedding of a divorced couple (if one or both were earlier part of a Catholic marriage), because taking part in something they disagree with does make them appear to be supporters of the idea.
@Richard 12: Doesn't it? In that case, I could set up a religion which I would be the only member of and I could claim anything I want to derail any argument that starts with "No religion does x", or "Every religion is y".
@Suricou: I honestly don't know how to reply to your question. However, let me ask if you know any significant religion that is opposing interracial marriage on religious grounds and decrying it as sinful?
@thomas k.: you're saying that refusing to accept a particular job is not the same as refusing to provide any service to an entire class of people.
What about that "Sweet Cakes" case, where a baker from Oregon refused to provide a wedding cake for a lesbian wedding? They had no objections about serving gays or lesbians, I don't know, let's say cupcakes or coffee, but would not cater a wedding. Does that count as a particular job, or as refusing service to an entire class of people?
By the way, it wasn't the only bakery in town. That area had some 18 confectioners that the couple hadn't asked. I can understand that if all 18 refused service, the couple could claim collusion and would have solid grounds for legal action. But in this particular case? I don't think so.
I also hope AMD wins this, but even though the problems were well known, GlobalFoundries was only established in March 2009 -- up until then it was part of AMD, so the problems were AMD's own. Establishing GF did not solve them; it didn't even have the potential to.
Once AMD loses that suit and is forced to pay damages, its stock will likely plummet again. Then investors will start a new class action, accusing AMD of not informing investors of problems with its manufacturing and, as a result, losing a class-action lawsuit that caused its stock price to plummet.
Repeat ad infinitum -- AMD has loads of cash it can spend, after all.
This ties to Oracle, too -- who would IBM offload their tape business to? If they just up and discontinued it, it would leave Oracle as the only supplier of Enterprise tape storage, and HP as the only manufacturer of LTO drives. Quite uncomfortable for Spectra Logic which uses IBM drives in their libraries.
"The Falkland Islands will be a barely habitable wilderness, afterwards."
That would be an improvement, wouldn't it?
I didn't mean that. But, if he's dead, he can't influence the proceedings anymore, and the thing might fall apart.
I applaud what he's doing and I really hope he succeeds, but people have been killed for less...
So you're saying that transferring user data to the US National Security Agency (NSA) doesn't protect EU citizens' privacy? Whoduvthunkit?
I wish I could give you 100 downvotes, but I'm limited to just one.
"Really would your Nan really give less then Two about doing what ever it was on a PC or a Phablet?"
Yes. But she's a touch-typist. But for my great-grandma, the screen would probably go blank before she found the next letter on the on-screen keyboard. And she'd have to squint quite a lot to make out some of the letters, be it on a phablet (strange of you to even mention a phablet as a viable choice), or even a large-screen tablet. But then again, a large, 12-inch, tablet costs as much as a basic PC with a 24-inch screen attached.
If you have 'umpteen thousand' machines running XP and you can't afford to upgrade them to a newer OS (be it a hardware or just OS upgrade), then your organization definitely has bigger problems than those machines:
- being broken into;
- suddenly becoming unsupported platforms for critical software.
If your organization could afford 'umpteen thousand' machines some 10 years ago and it cannot afford new ones now, you definitely have bigger problems than just IT.
Ok, cool, there's a use case.
- Did your PC stop working when XP support was discontinued?
- Did your software stop working when XP support was discontinued?
No? So what's the problem? Disconnect the PC from the network, or at least from the Internet and put a separate PC running a newer OS as proxy for the embroidery jobs.
Why did you want to virtualize it? How would it help your use case? If you put it in a VM on a secure OS, it doesn't suddenly become secure (if the VM is connected to any network).
Besides, even if you were virtualizing, you were probably doing it wrong. Did you use IOMMU (any sort) to virtualize the hardware that actually connects to the machine?
Four words: correct horse battery staple
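That's a reference to xkcd #936. The entropy arithmetic behind it (a sketch, assuming words are picked uniformly at random from a wordlist; the 2048-word list size is a Diceware/BIP-39-style assumption):

```python
import math

def passphrase_bits(words, wordlist_size=2048):
    """Entropy in bits of `words` picked uniformly from a wordlist."""
    return words * math.log2(wordlist_size)

def password_bits(length, alphabet=72):
    """Entropy of a truly random password over a 72-character alphabet."""
    return length * math.log2(alphabet)

print(f"4 random words:    {passphrase_bits(4):.0f} bits")
print(f"8 random chars:    {password_bits(8):.1f} bits")
```

Four random words give ~44 bits -- in the same league as a fully random 8-character password, and vastly better than the mangled-dictionary-word passwords people actually pick, while being far easier to remember.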
Just be sure not to rely on one [walkthrough] utterly, as otherwise you’ll beat the game without any of those "Eureka!" moments of satisfaction.
Funny, I don't think people had "Eureka" moments in the E.T. game (yeah, that one) when they managed to get out of the pits.
And "Eureka" moments in adventure games in the olden days usually went like: "combine this broken metal rod with that plastic bottle in order to create a thingamajig that will combine with that torn piece of clothing which you then give to the king who will give you a cart full of dirt to take to the wizard".
Not in the least logical, and if you didn't buy the official guide (usually about the price of the boxed game itself!), you had to hope your games magazine would print tips for the game you were stuck on, and that they would cover that specific puzzle and not the ones you'd already worked out.
Eureka? Fat chance. More like sheer frustration.
This transportation system is point-to-point, and specifically between two population centers. They won't build loops, but two tubes. The train will be switched from one to the other at each end.
Indeed, it's costly. But Musk is betting the cost will be below the potential revenue, and why not? If it works, and if it turns out it's able to turn a profit, it can be extended (to multiple tubes, and to more connections).
Are these statistics even reliable? I live in Poland, and according to the first chart, the bottom 10% of population in Britain are at a worse socioeconomic status than the bottom 10% in Poland.
How the fuck is it possible that people would want to migrate to Britain, then (or Germany, or France, for that matter)? Assuming that it's the people at the bottom of the scale who migrate looking for a job, and that they can only get the lowest-paid jobs in the UK, it would mean they are worse off than before they moved. However, not only are they able to support themselves in the UK, they make enough to send money back home and support the family. Effectively, one paycheck is able to support two households.
Anyone care to comment?
"Each shrink in NAND geometry seems to require costlier manufacturing processes and more over-provisioning to keep endurance, expressed as drive writes/day for five years, up at acceptable levels."
And shingling, TDMR, HAMR cost nothing to implement?
Between 2000 and 2010, mainstream hard disks went from ~40 GB to 2 TB -- a 50-fold improvement -- without increasing the price too much (allowing for the overblown Thai flood hype). That's an average increase of nearly 50% per year. If this pace had kept up, mainstream disks would now be almost 10 TB in capacity, and yet mainstream is still at 2 TB, very slowly moving towards 3 TB, never mind 4.
The recent breakthroughs that will allow more than 4 TB are ridiculously expensive and it seems that disk cannot break through this ceiling.
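The "nearly 50% per year" figure follows directly from the 50-fold growth over ten years:

```python
# Compound annual growth rate implied by 40 GB -> 2 TB over 10 years
cagr = 50 ** (1 / 10) - 1
print(f"implied growth: {cagr:.1%} per year")   # -> 47.9% per year

# Projecting 4 more years at that pace from a 2 TB baseline:
print(f"projected capacity: {2 * (1 + cagr) ** 4:.1f} TB")  # -> ~9.6 TB
```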
Going by the same chart, I see that SSDs are predicted to grow by almost 2000%, while HDDs only by 123%.
However, assuming the data for two last years and estimate for this one are accurate, this is an interesting extrapolation.
It predicts that HDD growth will increase or hold steady, while SSD growth is supposed to taper off, astonishingly so -- it grew by 120% in 2013, then 85% this year, and they expect this trend to continue and growth to decline further, while HDDs are not affected at all? I call bullshit.
Oh, and they've got SSD endurance wrong. Taking a 480 GB SATA 3 drive at maximum speed (600 MB/s=52 TB/day) and 10000 writes/block, assuming you never stop and you never read this data, you get 92 days of useful life. That's still extremely high endurance. Lower this by a factor of 10, to 5.2 TB/day, and you get almost three years of useful life. However, 5 TB/day on a 480 GB drive? Who writes (and overwrites) this amount of data daily? If there's a usage pattern that fits this requirement, I suppose the user is getting paid well more than enough to cover disk replacement.
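The endurance arithmetic above, spelled out (assuming the same illustrative figures: a 480 GB drive, 10,000 program/erase cycles per block, perfect wear-levelling, and no write amplification):

```python
def useful_life_days(capacity_gb, pe_cycles, write_tb_per_day):
    """Days until every block has seen its rated P/E cycle count."""
    total_writable_tb = capacity_gb / 1000 * pe_cycles
    return total_writable_tb / write_tb_per_day

print(f"{useful_life_days(480, 10_000, 52):.0f} days at full SATA-3 speed")
print(f"{useful_life_days(480, 10_000, 5.2) / 365:.1f} years at 5.2 TB/day")
```

That's ~92 days at a physically unattainable 24/7 full-speed write load, and about 2.5 years at a tenth of that -- which is still more than ten complete drive overwrites per day.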
They did? Awesome. Point me to the NVMe drives and a motherboard with NVMe ports, thank you very much.
Depending on the implementation and controller, TRIM will destroy any meaningful way to restore the data from an SSD.
If the disk uses encryption and/or compression, TRIM will prevent any restore of the data since it also drops all pointers to how the data is arranged, how it is compressed or what encryption key is used.
And theoretical methods to restore data from magnetic drives are unusable on SSDs, the cells of which deteriorate/degrade much quicker than magnetic domains on a hard disk. Even if you were able to recreate the bits, you've no idea what they represent, if the data is encrypted or compressed and you have no way to rearrange it.
As for the Gutmann method, Wikipedia has an excellent article about it, and Gutmann himself says it best -- 35 passes was never needed for any drive. The first and last four passes are with random data, and there are RLL (two methods) and MFM-specific passes. MFM "needs" 18 passes at most, (1,7)RLL "needs" 26.
In case of modern PRML disks, these MFM- or RLL-specific passes do nothing special and are completely unnecessary.