No story here
That may well be the case, but this "product" has no name, no price, no availability, and no detailed IOPS and endurance information, making this Register story totally premature.
Samsung has started mass production of the world's first QLC (quad-level cell) consumer SSD. QLC is 4 bits/cell flash technology and a next step in cell bit capacity from the current TLC (3 bits/cell). Foundries belonging to Intel/Micron, SK Hynix and WD/Toshiba have started producing QLC chips and, in some cases, SSDs already …
Calling it a consumer device, then failing to release the pricing, is telling for me. I pretty much follow the saying "If you need to ask the price, then you can't afford it". And given Samsung's ambition to be a "premium" manufacturer, I reckon this is gonna come in at about £750-1000.
But wait, what would TheReg be without premature clickbait? It's the entire reason we come here instead of some reputable site, isn't it? Here I am, for example.
TheReg is partially right, even with its trademark suspect reasoning: hard drives are already mostly dead in PCs. Well, except for the really cheap ones, and that island is steadily shrinking. And except for media packrats who rely on hard drives to archive their porn collection.
Hard drives are alive and well in server farms. While SSD continues to nibble at the edges, the economics of mass storage that doesn't mind 10ms latency are compelling. That amounts to the vast majority of data storage in the world. With the likelihood of further improvements in areal density, it will be many years before hard drives are as dead as tape.
"Tape is dead?"
It is in the consumer sphere. When was the last time you saw a tape drive at the local Best Buy? At least when QIC drives were around, consumers with a bit of cash could use them. No such analogue exists today, much as I wish there was, as we could really use some reliable way to archive a few TB at a time of stuff. As of now, the closest solution out there is rotating external hard drives.
Each time you add another level, you add complexity to the drive circuitry plus you reduce endurance and so you need more and more complex error correction algorithms to safeguard the data. And all for less and less gain in capacity. From 3 levels to 4 only gives you a one third increase, and (heaven forbid) from 4 to 5 will only give you one quarter. Is it really worth it?
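To put rough numbers on that diminishing return, here's a quick Python sketch (my own illustration, not from the article):

```python
# Capacity gain from adding one more bit per cell: going from
# b bits/cell to b+1 bits/cell multiplies capacity by (b+1)/b,
# while the number of voltage levels to distinguish doubles.
for b in range(1, 5):
    gain = (b + 1) / b - 1
    print(f"{b} -> {b + 1} bits/cell: +{gain:.0%} capacity, "
          f"levels {2 ** b} -> {2 ** (b + 1)}")
```

Each extra bit costs a doubling of the level count but buys a shrinking fraction of extra capacity.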
" Is it really worth it?"
Not in my opinion. These cells store multiple bits by using multiple voltage levels instead of just on/off, high/low or +ve/-ve - i.e. it's an analogue system with all its inherent problems. There's a reason we switched from analogue to digital computers 70 years ago, and those reasons haven't gone away. One of those reasons is that as components age, the charge they can store drops. That's not a problem if it's binary, since generally it has to drop a long way before a 1 becomes a 0. However, if you have multiple analogue voltage levels it won't take much degradation to flip a 4 to a 3, or a 3 to a 2, etc.
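A toy model of that failure mode (my own sketch with made-up, evenly spaced thresholds; real controllers calibrate reference voltages dynamically): the same 10% charge loss that a binary cell shrugs off is enough to misread a TLC or QLC cell.

```python
def read_level(voltage, bits_per_cell, vmax=1.0):
    """Quantise a cell voltage back to a stored level (toy model)."""
    n_levels = 2 ** bits_per_cell
    step = vmax / n_levels
    return min(int(voltage / step), n_levels - 1)

for bits in (1, 2, 3, 4):
    top = 2 ** bits - 1                    # highest storable level
    v = (top + 0.5) / 2 ** bits            # nominal centre voltage
    read_back = read_level(v * 0.9, bits)  # after 10% charge loss
    print(f"{bits} bits/cell: stored {top}, read back {read_back}")
```

In this toy model the 1- and 2-bit cells still read back correctly, while the 3- and 4-bit cells come back one or two levels low.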
And yet 'digital' data transmission for cable Internet and digital TV gained massively from using QAM16, QAM64, QAM256 etc.
This works by using analog levels to cram more bits into the signal, which itself is carried on a carrier wave.
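For illustration, a minimal QAM16 mapper in Python (a sketch of the principle, not a real modem): 4 bits become two Gray-coded 2-bit amplitudes, one for each of the two carrier phases.

```python
# Gray-coded 2-bit value -> one of four amplitude levels.
# Gray coding means adjacent levels differ by one bit, so a small
# noise nudge to a neighbouring level corrupts only a single bit.
GRAY2 = {0b00: -3, 0b01: -1, 0b11: +1, 0b10: +3}

def qam16_modulate(nibble):
    """Map 4 bits to an (I, Q) constellation point."""
    return (GRAY2[(nibble >> 2) & 0b11], GRAY2[nibble & 0b11])

print(qam16_modulate(0b0000))  # (-3, -3)
print(qam16_modulate(0b1011))  # (3, 1)
```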
All we care about as consumers is the end result, price, reliability, performance etc. If they make it work and it keeps working, the magic required, be it error correction or redundancy will not matter.
After all, the head-to-platter distances in spinning rust and Giant Magnetoresistive heads had all the naysayers telling us it could never work/be reliable - I think modern hard drives are a miracle to this day.
"This works by using analog levels to cram more bits into the signal, which itself is carried on a carrier wave."
Everything is analogue if you go down far enough - the point is where the analogue values come from. The relative levels of amplitude, phasing or frequency compared to the carrier signal strength of a radio signal will always be the same as its actively generated. The hardware in an SSD however is fixed and so maximum voltage levels in the cells will decline as they age, but more to the point - not uniformly between them. So the firmware can't simply adjust its voltage level parameters to account for it.
> Everything is analogue if you go down far enough
...and if you go down further than that, it's all digital/quantised again! (^_^)
(Disclaimer: yes, I know some physicist will probably come along and point out that this is misleading, inaccurate or oversimplified).
>> Everything is analogue if you go down far enough
>...and if you go down further than that, it's all digital/quantised again! (^_^)
As a physicist, I know that once it gets small enough, everything gets uncertain.
How many electrons have to jump how far for the state to change in a QLC "gate" is the big question, and how long does it take. If the electrons take an average of 50 years to tunnel, that's going to lead to a high error rate across 4 trillion of them.
However, I suspect Samsung will have thought of that.
> The hardware in an SSD however is fixed
More or less, but it changes areas dynamically from QLC to SLC or somewhere in between as the drive decides it needs caching or not - and it's constantly moving things around to keep everything healthy, plus the level of error correction being applied is mind-blowing and adjacently addressed bits aren't necessarily stored physically adjacently.
> and so maximum voltage levels in the cells will decline as they age
These aren't electrolytic capacitors with leaky insulators. They're silicon electron wells - about the best insulated form of FET you can devise. It's about coulombs, not volts.
> but more to the point - not uniformly between them.
They already do.
> So the firmware can't simply adjust its voltage level parameters to account for it.
The firmware already does and is already dynamically recalibrating itself over areas of the die to account for ion drift, else large chunks would become unusable very quickly. That's the point of having all that processing power onboard to actively keep track of and manage the health of the NAND.
Samsung wouldn't be shipping QLC without a large level of confidence in their product - and whilst they put a 3 year warranty on their _consumer_ drives, WD and Seagate have so much confidence in their consumer devices that the best they'll offer is 2 years(*), but more usually 12 months - and I've had to replace far too many drives under warranty in that 2 year period for purchases made since 2011.
(*) One of their fabulous weasel antics is to refuse to honour warranties on anything sold via an OEM and point the customer back to that supplier - meaning if they gave you a 6 month warranty or went toes up, that's what you got. Samsung have zero quibbles about directly honouring warranties.
"Everything is analogue if you go down far enough - the point is where the analogue values come from. The relative levels of amplitude, phasing or frequency compared to the carrier signal strength of a radio signal will always be the same as its actively generated. The hardware in an SSD however is fixed and so maximum voltage levels in the cells will decline as they age, but more to the point - not uniformly between them. So the firmware can't simply adjust its voltage level parameters to account for it."
So maybe they'll do a really slow refresh of cells, like DRAM with a really long refresh cycle?
We were still seeing mainframe computers with analogue/digital architecture in the 1960s. Indeed, they were popular enough to have two flavours depending on which technology "drove" the beast.
I'm 63 and I remember them from when I was about 11. So the "70 years ago" figure isn't anywhere near right.
I believe it also mis-states, albeit contextually, what an analogue computer is and does. Analogue computers, which were still available from Heathkit and other suppliers in the 1970s, are spectacular for modelling continuous solutions to calculus problems. They don't do arithmetic, at least not well, and the ones I've seen and used are not programmed using a high-level computer language, but with a series of patch cables linking the various integrator circuits - rather like the way the old DX7 used patches (albeit digitally executed) to cross-connect the six operators that made the noises. The Analog Computer at Coventry Tech was used to model n-body motion issues and on open days was used to display a snooker game.
The analogue computer was thought to be important when digital computers had low clock rates and no memory to speak of. Now the discontinuous nature of the calculation can be hand-waved as too small to matter, and the results can be smoothed using mathematics anyway, now that there is memory available for the functions involved.
But years ago that wasn't the case.
I'm not sure why you feel the issue of bits flipping can't be mitigated the way it is for "traditional" storage techniques (which can also fail in this way) by use of a checksum. I believe SSD storage has other on-chip mitigation stuff too that deals silently with cell failures, though I'm not clear on the details.
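The detection half of that is easy to sketch in Python with a standard-library CRC (illustrative only; real SSD controllers use correcting codes such as BCH or LDPC, which can repair flipped bits rather than merely notice them):

```python
import zlib

data = bytearray(b"a block of user data as stored in flash")
checksum = zlib.crc32(data)   # stored alongside the data

data[5] ^= 0x04               # one bit silently flips in a worn cell
corrupted = zlib.crc32(data) != checksum
print("corruption detected:", corrupted)  # corruption detected: True
```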
I did some work on a huge machine that converted movie film into video. It was a complex digital and analogue computer. The machine was decades old but still in use because the job could not be handled digitally in 1998 (according to the machine's owner).
Digital is a huge waste of power and bandwidth. It takes about 7 parallel digital circuits to match the precision of one analog circuit. When it comes to mathematics, digital needs massive gate arrays and microcode to perform the same task as a handful of analog components. Propagation delay hits big digital circuits pretty hard, and workarounds further increase complexity. Analog computers are still alive and well any time speed and efficiency are more important than precision.
I suspect the AI singularity will happen when analog and digital processors are efficiently merged together. Last time I read about it, flash cells were going to be the parameter buffers between the two.
You are a bit confused about the distinction between analog and digital. At the nanoscale, it's all analog (and at femtoscale it's all digital again, but that's a bit deep for this post; just google "quantum number"). Those rather alarmingly analog-looking traces are coerced into representing digital values via thresholding and latching effects. Ever seen a drinking bird? It's a bit like that. A wind-up clock is like that too: the spring is analog but the tick is digital.
Multilevel cells likewise rely on thresholding and latching effects; there is just more than a single threshold value. Still digital by nature, just like single level cells. Maybe the threshold voltages or whatever get closer together in multi-level and risk more errors, or maybe not. Single level logic gets shrunk and packed together as tightly as possible, which also increases the risk of error. Given the same number of bits stored in the same area, it is far from clear that multi-level cells have the higher risk of error; it may be just the opposite.
Each time you add another level, you add complexity to the drive circuitry plus you reduce endurance and so you need more and more complex error correction algorithms to safeguard the data.
And conversely, HDD technology is doing fancy complex stuff like HAMR, shingling and increased track density. So what if the SSD needs advanced software to make it work - we're all pro-technology here, aren't we? And you're reliant on error correction to read this web site and post here, or in almost any form of digital audio operation, from making a phone call to listening to music.
The other thing is that all the studies I've seen (ignoring those from HDD and SSD makers and suppliers) suggests that notwithstanding the known SSD endurance limits, the service life of SSDs is comparable to enterprise HDDs, and the in-service failure rate is considerably lower for SSD than HDD. Whilst it is reasonable to remain sceptical about QLC as with any new technology, I would expect it to do what it says on the tin. I will cheer on the early adopters.
I would say it's less about familiarity and more about cost. HDD is still cheap enough (relatively) that you can have multiple copies for the price of one piece of replacement SSD media.
Tech continues to improve on both fronts. I can almost put my entire hoard on a single piece of spinning rust now. I don't even want to contemplate the cost of duplicating all of my data with SSD.
Other entities have WAY more data than I do.
re. "all the studies I've seen .. the service life of SSDs is comparable to enterprise HDDs"
Well those studies must be based on SSDs that have been in use for 2-3+ years, which probably means they are SLC or MLC (2 bits/cell), not TLC / QLC. Even the manufacturers of TLC / QLC admit their endurance is less, which is why they are targeted at consumers rather than production systems. The other worrying trend is that these short-lifespan SSDs are increasingly being integrated into devices, so when they fail the whole device is a write-off.
Using voltage levels to cram more data on a comms link doesn't justify doing the same for storage devices. A comms link only has to get the right data once, if an error is detected the data is re-sent. A storage device needs to store the right value indefinitely, if an error is detected, an algorithm must guess what the correct data was.
Really_adf correctly noted, "Adding a bit doubles the number of different values a cell can store."
It works the other way too.
One must unfortunately double the number of different voltage levels the cell measuring subsystem can reliably distinguish to add just one more bit.
If they're at 16 levels for 4 bits per cell, the next whole step would be 32 levels for 5 bits per cell.
I expect that they'll invent fractional (smeared) bits first. Maybe about 24 levels and about 4.5 bits per cell.
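The arithmetic in that comment, as a few lines of Python (the roughly-4.5-bit figure is just log2 of 24 levels):

```python
import math

# One extra bit per cell always doubles the number of voltage
# levels the read circuitry must tell apart.
for bits in range(1, 6):
    print(f"{bits} bits/cell -> {2 ** bits} distinguishable levels")

# Fractional bits: 24 levels would carry log2(24) bits per cell.
print(f"24 levels -> {math.log2(24):.2f} bits/cell")  # 4.58
```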
> Every level adds one bit of information per cell
Hmm is that so? To encode 4 bits you need 16 distinct voltage levels, no?
In that sense the L in QLC is not a "distinct voltage level" but more than that. If you only had 4 levels, say 0, 0.33, 0.66 and 1, you could encode 2 bits. Just like the old NeXT monochrome looked good with 2-bit graphics, showing black, white, light grey, and dark grey.
"To encode 4 bits you need 16 distinct voltage levels, no?"
Right, and so L)evel has always been a gross misnomer; it should be B)it. A multi-bit flash cell has 2**L voltage levels, using the highly misleading L)evel terminology.
Like this: https://www.cactus-tech.com/assets/images/e/MLC-NAND-Cell---4-States-of-Electrons-9a049c6e.png
"thus doubling the capacity".....
....and ironically, exponentially increasing the ignorance.
Kudos to the guy who explains correctly that it doubles the charge states, which doesn't double the bits per cell.
For people who have missed the point, I'm just off to practice my two times table according to LeoP:
1 (SLC), 2 (MLC), 3 (TLC), 4 (QLC),......
"QLC (quad-level cell)....is 4 bits/cell"
Then it's 16 Levels (not "Quad" = 4 Levels).
Sixteen levels can define 4 bits. And 4 bits can define 16 levels.
They should find the originator of this "Level" misnomer, and give them a good slap. It doesn't even reach the lofty heights of being a dumb error, it's worse than that.
Biting the hand that feeds IT © 1998–2019