To be fair to um,
It's exactly the same way all HDD sizes are advertised. Do Apple and other MP3 player manufacturers do the same?
Creative has been successfully brought to book over the way it previously calculated hard drive-based MP3 player storage capacities. The company was accused of misrepresenting the storage capacity of its players by two punters, Vibhu Talwar and Patrick Finkelstein, who formalised their complaint at the US District Court of …
Creative are not the only ones to do this. For example, the HD DVD and Blu-ray camps were happy to go with this interpretation for the sake of making their disc capacities sound larger than they were.
Unfortunately by introducing the term 'Gibibyte' this interpretation is almost vindicated.
Seriously, El Reg, this is your fault, together with every other news and reviews site on the net. Why do you toe the manufacturers' line and refer to these devices as having these fictitious storage capacities?
Lead the way and call a new 60GB MP3 player a 56GB player if that's what it really is...
This basically amounts to "we'll pay up if you buy more of our products".
EDS would be proud of Creative.
Don't you mean a 930MB Zen Stone music player?
MarmiteToast writes: "EDS would be proud of Creative."
What's the problem? They are being "creative" :-)
Would Flash memory manufacturers really make 1,000,000,000 byte modules just for Creative? Seems a bit odd to me...
I would have thought it more likely that they make a real 1GB module, but because it's formatted to whichever file system, it becomes less. For example, my Sony Ericsson W880i with a 1GB M2 card has a capacity of a little less than 950MB-ish (my 1GB Panasonic MP3 player is approximately the same)...
I suppose the difference here is that I know the real formatted capacity will be less than the 1GB touted and so expect it; someone less knowledgeable about formatted capacities, bits/bytes/binary may not. (I remember back in the good ole days when a 1.44MB HD floppy disk formatted to something closer to 1.36 or 1.38MB, was it?... cue comments regarding proper 8" or 5 1/4" floppy disks.)
Not strictly true. Back in the dark ages a kilobyte was well established as 1024 bytes. When storage manufacturers reached a megabyte there was no formal definition so some went for a definition of 1000 kilobytes to steal a few percent in their size claims, and others went for 1000000 bytes to steal even more percent.
So actually you sometimes find three definitions of GB. Sometimes it's a billion bytes, sometimes it's a million kilobytes, when actually in all but hard disk manufacturers' claims we all know it's 7.37% more than a billion bytes, or 4.86% more than a million kilobytes.
Whatever salesman thought this up in the 70s should be given a medal.
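For anyone who wants to check those figures, the arithmetic fits in a few lines of Python (a throwaway sketch, nothing more):

```python
gib = 2**30               # 1,073,741,824 bytes: the power-of-two gigabyte
gb_decimal = 10**9        # 1,000,000,000 bytes: the all-decimal gigabyte
gb_mixed = 10**6 * 2**10  # 1,024,000,000 bytes: a million 1024-byte kilobytes

# How much bigger the power-of-two gigabyte is than each decimal reading:
print(f"{(gib / gb_decimal - 1) * 100:.2f}%")  # 7.37%
print(f"{(gib / gb_mixed - 1) * 100:.2f}%")    # 4.86%
```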
That would have got rid of the ambiguity over something as fundamental as a k.
Still, if you want to decimalise bytes, doesn't that mean there should be 10 bits in a byte?
"Giga" is an SI unit meaning 1,000,000,000 - used in every walk of life.
"Gibi" is an IEC unit meaning 1,073,741,824 - used for calculating storage, and has been standard since 1999.
The judge of this case is an idiot and the writer of the article needs to apply a service pack to his brain.
Seriously, we need an El Reg scale for hard drive sizes to solve the confusion. What's the size of the original leak of the Paris Hilton tape?
Also, please don't call a gigabyte "a billion bytes". Some of us are actually British (and proud) and know that 10^9 is an AMERICAN billion; a BRITISH billion is actually 10^12 (a thousand million versus a million million).
Did you actually read the article? It specifically said HARD DISC mp3 players; not flash memory...
I don't see that Creative have done anything wrong.
Anyone remember the IEC standardising the prefixes kibibytes, mebibytes etc. to remove just this sort of problem?
1 KB should be 1000 bytes as per normal SI rules. I'm surprised Creative caved in really.
This is a long standing bugbear. It's been long established, rightly or wrongly, that fs storage is measured using M=10^6 and G=10^9 whereas RAM has always been K=2^10 and M=2^20 and G=2^30 (more due to their intrinsic design requirements). I guess it's debatable which one applies to flash memory. Clearly the marketroids are going to go for the one that makes it look bigger. Traditionally communications and bitrates have always been powers of 10.
We have long had the prefix K=1024 so as not to confuse it with the SI unit of k=1000. But after that all bets are off. The other prefixes, whether "G" or "giga", are strictly SI units. So Creative are correct about using powers of 10 to describe gigabytes. Some smart folk came up with Ki/Mi/Gi (kibi/mebi/gibi and so on) for the powers of two. While they are a bit ugly, perhaps it's time to start using them. Linux already uses them for its utilities. Maybe The Reg can champion their use?!
So, I have a 4GB UFD that is 4043MB or 3856MiB.
And before that (from 1985 forward), the megabyte was 1024 kilobytes and there was no MiB. Hard disk manufacturers initially labelled their disks in the proper byte notation, until some jerkoff thought of the scheme and invented some "Joe User" issue with the real byte.
Which is all irrelevant nowadays, but back in the day the megabyte was over $15, and getting stuffed for 7% was not a small deal.
Today, however, with 1024 megabytes coming in at pennies, it is really irrelevant to continue arguing about GBs and GiBs. I will remain true to the original GB, since gibbing is quite a different activity.
...and the capacity of a backup tape is 50GB... except it isn't, because it's typically only half that (or less) and the manufacturers just lie about the real capacity, hoping that we won't find or read the small print. Some are getting better at it, but not all of them.
Sorry, I missed 2 words, consider my argument null and void, I'll get back to my flaming tar pit now...
Yes I had heard this before, but funnily enough, my Oz dictionary says a Billion as 10^12 is "becoming obsolete". I'm sure they don't mean all British though ... surely ...
But I like the 3rd definition - "a large amount" - as in, my 60GB MP3 player holds 60 large amounts of music.
billion as 10^12 "becoming obsolete"
Unfortunately the crimes committed against our beautiful language by our Septic Cousins seem to stick - but only in the English speaking world. The rest of the world still calls 10^9 milliard and 10^12 billion.
I was going to post: "First one to mention the universally hated KIRBYBYTES and its evil offspring as a 'standard' shall die a fiery death" ... except someone already did.
Computers can't understand base-10 natively; using kilo-as-1024 has an actual reason for its existence. The IEC and the metric nazis think otherwise, so they began this stupid "-bibyte" thing which seems to only benefit dodgy HDD vendors. No one in their right mind uses base-1000 bytes; RAM manufacturers would never ever be able to get away with such a thing as a "512 million bytes RAM" as it would bring any PC crashing down.
Fortunately the -bibyte trash has not infected standard computer courses; I just hope some more sensible people go into the IEC and drop the stupid convention.
Hard drive manufacturers can, but Creative can't?
Anyway, a Gigabyte is 1,000,000,000 bytes. Read up on Gibibyte.
Are you all soft in the head? Have any of you any idea how computers have always historically addressed memory? Forget legal definitions and marketing-speak and go back and do Computing Part 1.
A Gigabyte IS 1024 Megabytes. End of story. There is no way for a digital machine to octally address (or store) this fantasy figure of 1,000,000,000.
1KB = 1024 bytes
1MB = 1024 KB
1GB = 1024 MB
This is THE empirical definition. Everything else is unforgivable pathetic ignorance.
Waaay back when computers came with a floppy drive and 640K *was* enough for everybody, marketing used to call HDDs "memory". They used powers-of-two megabytes.
Then a few called a gigabyte a thousand megabytes. Not all, and there was some outcry, but since it wasn't prevalent, it was not pursued legally. So more HDD marketers used a thousand meg (where a meg was a power-of-two megabyte) until they all used it.
Then a few used powers-of-two kilobytes and a gig was 1,000,000 of them.
And we progress on until now it's all powers-of-ten.
I guess the difference between a megabyte and a million bytes wasn't worth anything, but the difference gets bigger, and now we're looking at terabytes; that's a BIG difference and so there's more "need" to use the lower figure.
But remember that computer memory IS in powers-of-two, and that HDDs used to be called "memory" by HDD and PC manufacturers.
And so this suit isn't as silly as some would have it.
"A Gigabyte IS 1024 Megabytes. End of story. There is no way for a digital machine to octally address (or store) this fantasy figure of 1,000,000,000."
Oh thanks for putting the exact reason for using base-1024. I actually think that the base-1000 "standard" came to be because newer CompSci generations aren't literate enough on how memory is addressed inside a computer.
0x0400 bytes = 1 kilobyte.
0x03e8 bytes = 1000 bytes
Just try to map that into fixed-bitlength integers and you'll find your answer quite fast. Plus, all the argument of storage needing to be base-1000 is garbage, as in practice secondary storage is basically an extension of RAM: data is loaded from/stored to HDD via RAM, so the same principles apply. Case in point: 512-byte blocks/sectors, 1024-byte (1KB) blocks, 4096-byte blocks... No sane filesystem ever uses base-1000 blocks.
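The hex point above is easy to demonstrate, along with the practical reason power-of-two sizes stick: they turn division and modulo into shifts and masks. A throwaway Python sketch:

```python
# 1024 has a single bit set, 1000 does not - that's the addressing argument.
print(hex(1024), bin(1024))  # 0x400 0b10000000000
print(hex(1000), bin(1000))  # 0x3e8 0b1111101000

# With a power-of-two block size, block number and offset-within-block
# fall out of cheap bit operations instead of real division:
offset = 123_456_789
block = offset >> 10       # same as offset // 1024
within = offset & 0x3FF    # same as offset % 1024
assert block * 1024 + within == offset
```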
"but only in the English speaking world. The rest of the world still calls 10^9 milliard and 10^12 billion."
For some reason (don't ask me), the same happens in Brazil vs. Portugal; in Brazil, 1 billion is 10^9, while in Portugal it's 10^12. It seems like Spanish-speaking countries here in the Americas follow the European pattern though...
"No sane filesystem ever uses base-1000 blocks."
No, not in the real world, though it is true there are a few non-standard one-off custom filesystems which are designed to store sparse data in scientific applications and they do have some very weird block assignments specific to the data structure they're trying to store and read. Just because it makes sense to them at the time.
But meanwhile, back to the real world of hardware. Memory (RAM or magnetic) is assigned addresses in 8-bit blocks (or multiples thereof), and every real computing standard, operating system and piece of hardware is designed this way. It's the ultimate in backward compatibility.
Redefining 60 years of computer science is like re-specifying the wheel. There's really no point. So I wish they would start actually educating CompSci students these days instead of allowing them their backward little innumerate fantasies. If you can't do binary arithmetic or count in your head as well in base 6 or 8 as you can in base 10 then you really ought not to be doing "Computer Science". Just my own personal prejudice but one I find holds true.
I am aghast at the quality of CompSci grads these days. I met one who didn't know what a weighted average was or how to calculate it.
OK -- ye olde fashioned bloke rant over.
that would be fun, using a 1000 byte cluster on a 512 byte per sector storage device...
The problem here is that the general public do not understand the tech reasoning. We all learned in school that 1 kilogram = 1000 grams and 1 kilometer = 1000 meters. Everyone that watched the Battle of Britain knows "Angels 10" meant 10,000 feet.
Then we start in IT and learn that, for technical reasons, 1 KB is 1024 bytes (2^10) due to the binary nature of a computer.
Then, as mentioned above, storage manufacturers start selling drives as 40, 80, 120 gigabytes as a marketing ploy, having found out that it's better to give a nice full number than the real capacity (for John Q. Luser, for the same price, would you buy a 40GB drive or a 38GB drive?).
Some drives actually seemed to stick to the truth: I'm looking at a trio of old IBM Deathstars in my parts bin - 20.5, 46.1 and 61.5GB respectively - a Quantum Fireball at 20.5GB, and a guilty-looking Maxtor "DiamondMax Plus 9 200GB ATA / 133 HDD".
Changing from Base 2 to Base 10 allowed lower capacity drives to be "uprated" - and oversold.
The stinger is when you install 3 TB of disk storage, you actually find 2.79 TB available. Marketing moths have eaten a 210 GB hole in my storage...
Changing to processors, Cyrix invented the infamous "Pentium Rating" (fond memories of my first Cyrix 6x86 PR200+), followed by AMD. Intel stuck to their standard frequency ratings until they too fell to the marketing sirens and found it easier to market a T5500, Q6600 and god knows what rather than P4 3.06GHz HT.
At least with intel you knew what you were buying. Now it's all dumbed down.
Back to storage - you do not really know what you are getting: your real storage is calculated in base 10, your (possibly unknown) needs are set in base 2, and they are not compatible. It's time the storage vendors relabelled their capacity clearly (is your player 8GB or 7.45GB storage?).
RAM vendors do not have this "underselling" problem. You want 2 GB? God darn it, you'll get 2 GB (2^31 bytes). WTF can't storage vendors do the same?
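For what it's worth, the "missing" space above is just the two definitions diverging; the conversion is one line of Python each (plain arithmetic, using the sizes mentioned above as examples):

```python
# A drive sold as "3 TB" (3 x 10**12 bytes), expressed in power-of-two units:
print(f"{3 * 10**12 / 2**40:.2f} TiB")  # 2.73 TiB - roughly what the OS shows
# An "8 GB" player (8 x 10**9 bytes), the same way:
print(f"{8 * 10**9 / 2**30:.2f} GiB")   # 7.45 GiB
```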
... if what they are advertising is the hard disc capacity.
If they are advertising capacity as Windows would see it (Gibi), then that's another matter.
I wish Microsoft would go down the Linux road* and start quoting all their figures as MiB, GiB etc. The extra "i" would likely go unnoticed by many or educate others, and resolve complaints when manufacturers quote GB and the OS quotes GiB.
* - and no, I'm not going all Linux fanboy here. I run XP and much prefer it to Linux on the desktop, but it's just one of those things they do a little better in Penguin world.
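A toy version of the dual-unit display being wished for might look like this (the function name is made up; it mimics, rather than reproduces, what tools like GNU df do with and without --si):

```python
def dual_units(n_bytes):
    """Render a byte count in both SI (GB) and IEC (GiB) prefixes."""
    return f"{n_bytes / 10**9:.2f} GB ({n_bytes / 2**30:.2f} GiB)"

print(dual_units(60 * 10**9))  # a "60GB" player: 60.00 GB (55.88 GiB)
```

Printing both side by side would make the manufacturer/OS mismatch obvious at a glance.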
"Are you all soft in the head? Have any of you any idea how computers have always historically addressed memory?"
You're talking memory.
Storage has been long established to be quoted correctly as per the SI & IEEE definition of Mega, Giga, etc.
The same is also true of networking.
The fault is in software developers using the binary form to not just calculate memory but to represent this to the user, and the misuse of the standard scientific terms. i.e. "ah well, 1024 is near enough 1000, so kilo will do".
...I swear to god, anybody repeating this "gibibyte" bullshit needs a punch in the face. A real punch - a big, movie-style POW that lands them on the ground, face up, and silences the whole f*cking bar.
'Formatted capacity' is the fictional explanation provided by the marketing 'genius' who first thought of advertising storage sizes using a 1,000-byte kilobyte. If the process of formatting ate up space, the amount lost would differ from platform to platform.
"You're talking memory."
Storage IS memory, and is read and written in precisely the same way.
Magnetic storage is formatted in 8-bit blocks (cylinders, heads, etc) and goes 512, 1024, 2048 and so on.
Please don't tell me about how storage works: I have been writing software drivers for years and I know how it works. A Gigabyte is 1024 megabytes regardless of whether it's silicon or disk.
Go and read up on hardware interrupts.
A Gigabyte is a Gigabyte — it's a constant. It doesn't change on the whims of marketing wanks.
see : http://xkcd.com/394/
I think I nailed this back in 2000 when I wrote a comment on the subject in my country's main computing newsgroup.
We have to be _consistent_ about the use of kilo, mega, giga, etc.
kilo means 1000 (one thousand), whether we're talking bytes, hertz, grams, bits, metres, or Pascals(pressure).
Otherwise, we wind up in lala-land, where you don't know if someone means 1024 or 1000 when they say kilo-[some unit].
If you want to specify 1024 and powers thereof, use the kibi-prefix and relatives.
Let's say your internet connection speed is 1 megabit/s.
Is that 1000000 bits/s, or 1048576 bits/s?
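The gap in that megabit example is easy to put a number on (plain Python, nothing assumed):

```python
si_megabit = 10**6      # the SI reading: 1,000,000 bits
bin_megabit = 2**20     # the power-of-two reading: 1,048,576 bits

print(bin_megabit - si_megabit)                        # 48576 bits of ambiguity
print(f"{(bin_megabit / si_megabit - 1) * 100:.2f}%")  # 4.86% difference
```

(For connection speeds the SI reading is the correct one, as the datacom comments elsewhere in this thread point out.)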
Oddly enough IBM used to sell 1e6 byte Mbytes for years and years (their dictionary's definition of Kb and Mb for storage is metric (power of 10, not power of 2)) and AFAIK no-one ever sued them for that.
Datacom's K and M are metric by definition; that is, a DS0's 64kbps is 64,000 bps and not 65,536 bps.
"Intel stuck to their standard frequency ratings until they too fell to the marketing sirens and found it easier to market a T5500, Q6600 and god knows what rather than P4 3.06GHz HT."
Uhm, not quite. Intel was VERY happy to keep selling CPUs based on clock speed while their architectures required VERY high clock speeds to do ANYTHING (aka: P4s of all stripes).
Once they FINALLY realized they were getting their butts kicked by AMD and its impact on server sales (it only took management 3 freaking years to realize that!), they designed a new architecture that was MUCH more efficient, but it also ran at MUCH slower clock speeds.
After shoving 3.6GHz this and 3.6GHz that down our throats for years, they didn't feel like spending the money to explain that while a P4 at 3.6GHz was FAST!!!! (insert marketing hype here), the new chip at 2.0GHz was actually much FASTER!!!! It was just much easier to adopt AMD's idea and drop the clock speed entirely.
Of course, if you have looked at Intel's CPU line-up lately, things are definitely NOT any clearer now than they were.
And as for the topic at hand, the whole MB = 1000 kB thing was all done so a specific drive manufacturer could overstate their capacity by a few percent and gain an advantage over competitors. The other manufacturers quickly jumped on the bandwagon because the first liar wasn't quickly taken out and hung.
It's all like the way air compressors were rated for power in the US: initially, everyone was using the actual continuous HP rating of the motor to rate the air compressor's power. Then one manufacturer (Sears, if my memory is correct) decided they could make THEIR air compressors look better if they used a "different" hp rating... So they came up with the whole "peak hp" rating thing that is about 80% higher than the motor can actually put out (hell, it generally rates them higher than the breaker the compressor is plugged into can actually supply!).
People eventually got wind of it and got tired of it. Now you are starting to see most all compressors rated on an ISO-specified test (back to the motor rating, for all intents and purposes!) and the "peak hp" small print is disappearing across the board.
Oh, and as for consistency, I agree FULLY!!! In a base 10 system, we should be using base ten powers. But in a base 2 system (your computer), we should be using base 2 because your computer CAN'T use a base 10 system!