What I want to know is - have the reliability problems of SandForce controllers been solved?
Another week, and yet another new range of SandForce-controlled drives has been pitched into battle in an already crowded marketplace. Still, here's hoping the increased competition will have an impact on prices. Arriving hard on the heels of Intel's new 520 range, reviewed recently, is the Extreme series from flash memory experts …
"Once formatted, the drive's capacity drops to 111GB"
I'm sure there's a very valid technical reason why you lose such a big chunk of storage space when formatting a drive, but I wonder why manufacturers tend not to provide an oversized drive, so that - in this case, for instance - a 128GB drive could be marketed as a 120GB drive and, once formatted, you'd actually get the storage the box was claiming. My first thought is "bigger is better, and therefore able to be sold for more dosh", but there's perhaps more to it than that.
I'm presuming, of course, that there's not some clever way to 'access' all 120GB on this drive, thereby justifying its 120GB description.
To my not-very-technical way of thinking, it feels a bit like buying a pint of beer, only to find that once it's placed into the glass, 10% of your pint will remain trapped at the bottom of the glass.
Icon for obvious reasons.
I'd quite agree, especially given -
"To get to its advertised 120GB capacity, the drive uses four of SanDisk’s own 05091032G, 32GB 24nm Toggle MLC NAND chips"
When I went to school, 4 x 32 = 128. So we're not losing 9GB here; it's actually 17GB gone AWOL somewhere - about 13% of capacity. Or am I missing something here (besides a chunk of flash)?
2 points to make:
1 : Yes, this has been a marketing ploy for a very long time: base 2 and base 10 being interchanged as required in order to "appear" to provide disks bigger than they really are.
2 : If the percentage that is not available is so important, then you really should be looking for larger disks.
Any extra cells used for wear levelling are invisible to the operating system, as they're purely internal to the device.
The real reason for the difference is twofold. Firstly, the definition of a GB is not 2^30 bytes but 10^9. The difference is significant (about 7.4%). Technically speaking, 2^30 bytes would be 1 GiB, not 1 GB. File systems tend to report KB, MB and GB using a bizarre mixture of units. Very often the base is a KiB (1,024 bytes) - a legacy of the physical sector size being 512 bytes. Sometimes a MB is reported as 1,000 x KiB and a GB as 1,000,000 x KiB; sometimes a MB is reported as 1,024 x KiB (i.e. a MiB) and a GB as 1,024 x 1,024 x KiB (i.e. a GiB). These hybrid units are a real pain if you are working on SAN allocations, logical volumes and so on, as it's easy to be tripped up.
The other reason is that when formatting a file system, a considerable portion is used for things like directory structures and other areas not available for actual data; file systems tend to report what is left after that. The same loss of apparent capacity happens with HDDs.
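The unit mix described above can be sketched in a few lines of Python. Only the 120GB drive size comes from the article; the rest is just the standard conversions:

```python
raw = 120 * 10**9                        # "120GB" as sold: 120 x 10^9 bytes

KiB = 1024          # kibibyte, 2^10 bytes
GiB = 1024**3       # gibibyte, 2^30 bytes

print(raw / 10**9)                        # 120.0  - decimal GB
print(round(raw / GiB, 2))                # 111.76 - GiB, which the OS labels "GB"
print(round(raw / (1_000_000 * KiB), 2))  # 117.19 - hybrid "GB" = 1,000,000 x KiB
```

Same number of bytes, three different "gigabyte" figures - which is exactly why the 120GB drive shows up as roughly 111GB once mounted.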
Also, the prefix giga- means "multiply by 10 to the power 9". Thus, 120 gigabytes (aka 120GB) is 120,000,000,000 bytes.
But I can see where you've made a mistake there: Windows erroneously labels gibibytes (binary gigabytes) as gigabytes.
The prefix gibi- means "multiply by 2 to the power 30". So 120GB is about 111.76GiB, which Windows will label as GB - and therein lies the capacity "lost by formatting"... :-D
I hope this helps someone.
First of all, flash drives are "over provisioned", so that when some of the cells wear out, they can be turned off, and other, fresh cells used as replacements.
Secondly, the raw capacity quoted is the *total* storage capacity of the device. Of course formatting reduces that. Where do you think the OS stores things like the MFT and directory entries? The OS is reporting the available capacity, which is total capacity - storage overhead.
Quite so: compression = smaller data = less time to shift. Lossless compression can be done by finding repeating patterns in the data and replacing each occurrence with a token; reverse the process to decompress. Incompressible data just doesn't have any useful level of patterning; I've seen some simplistic compressors actually produce a bigger file than they started with.
All the go-faster goodness (fast CPUs, dedicated hardware, efficient algorithms) means that these days the overheads for on-the-fly compression are relatively low, so it can be a performance enhancer rather than simply a capacity enhancer, as it used to be in the world of low-capacity storage devices.
Biting the hand that feeds IT © 1998–2019