Y2K bug's descendants
So is this another example of Y2.01K bug or are we now up to Y2.02K bug?
A delayed Y2K bug has bitten hard at some 30 million holders of German debit and credit cards, making it impossible for them to use automatic teller machines and point-of-sale terminals since New Year's Day. Multiple news agencies said the outage stemmed from card chips that couldn't recognize the year 2010. The DSGV, an …
My guess is that the processor has the ability to use BCD (Binary Coded Decimal) representation, with each digit in 4 bits. This would mean that, for example, the decimal number 76 would be represented by the hex number 0x76 (decimal 118), with four bits for the 7 and four bits for the 6.
Now some poor programmer has used the instruction to store the year (in this case 10) as BCD, so it is being stored internally as 0x10 (16 in decimal). However, when other subroutines access the date they treat it as a normal binary number and so read it as the year 16 instead of the year 10. The years 0-9 would be unaffected, as they are the same whether encoded as BCD or not.
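That guess can be sketched in a few lines of Python (purely illustrative, of course; the real card firmware is assembler and unpublished):

```python
def to_bcd(n):
    """Pack a two-digit decimal number into one BCD byte (one digit per nibble)."""
    return ((n // 10) << 4) | (n % 10)

year = 10                 # the year 2010, as two digits
stored = to_bcd(year)     # a BCD store writes 0x10
print(hex(stored))        # 0x10

# A subroutine that forgets the byte is BCD and reads it as plain binary:
misread = stored
print(misread)            # 16 -- the card now thinks it's the year 16

# Years 0-9 have the same encoding either way, which is why 2000-2009 worked:
print(to_bcd(7) == 7)     # True
```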
How fast they can fix this will depend upon how much of their code treats it as BCD and how much doesn't. Since BCD is rarely used today, my guess is that the only code using BCD is the part that stores the date.
Now, since I'm wearing my deerstalker and smoking my pipe (my New Year's resolution regarding the opium still stands), I will speculate as to who is responsible. This is the type of task that is given to new interns fresh out of college/university, i.e. "Write a subroutine to get the date from there and store it over here." Unfortunately, new interns tend not to know much, so (s)he probably scanned the instruction set and used the first instruction (s)he came across that would do the job. Unfortunately for them, it happened to be a BCD instruction.
A definite fail for the programmer.
What is so face-palming about this is the use of BCD in an embedded environment. Not only is the storage of the date in BCD less optimal in size (1 byte in BCD == decimal value up to 99 stored, whereas 1 byte in binary == decimal value up to 255), the code to handle the value would be less optimal too, using more instructions.
Size matters guys :)
It takes the same amount of code to add/subtract/increment/decrement BCD as it does for normal numbers. This is because it is implemented at the processor level as single instructions.
Back in the "good old days" of 8-bit processors, BCD was more efficient when it came to displaying results. All processors had ADD and SUBTRACT instructions, a few might have had MULTIPLY, and I don't know of any that had a DIVIDE instruction (but no doubt I will be corrected). To divide two integers you had to use multiple instructions within a loop, which cost a lot of processor cycles.
A four-digit number takes 2 bytes to store whether it's BCD or a normal number, but when it came to outputting the result for a normal number you had to do the following:
Divide by 10 and use the remainder as the units
Divide the quotient by 10 and use the remainder as the 10's digit
Divide again by 10 and use the remainder as the 100's digit
Use what's left as the 1000's digit.
On the other hand to display a 4 digit BCD number you had to:
AND the LSB with 0x0f and use the result as the units
Shift the LSB right 4 times and use the result as the 10's
AND the MSB with 0x0f and use the result as the 100's
Shift the MSB right 4 times and use the result as the 1000's
Each of the above operations on the BCD number may well be implemented on the processor as a single instruction, so it was very efficient. The normal number, on the other hand, needs three divides, which is very inefficient.
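The two routines above can be sketched in Python (illustrative only; on a real 8-bit CPU each line of the BCD version maps to roughly one instruction):

```python
def digits_from_binary(n):
    """Extract decimal digits from a 4-digit binary number: three divides."""
    units = n % 10
    n //= 10
    tens = n % 10
    n //= 10
    hundreds = n % 10
    n //= 10
    thousands = n                      # what's left
    return thousands, hundreds, tens, units

def digits_from_bcd(msb, lsb):
    """Extract decimal digits from a 4-digit packed-BCD number: ANDs and shifts."""
    units = lsb & 0x0F
    tens = lsb >> 4
    hundreds = msb & 0x0F
    thousands = msb >> 4
    return thousands, hundreds, tens, units

print(digits_from_binary(1984))        # (1, 9, 8, 4)
print(digits_from_bcd(0x19, 0x84))     # (1, 9, 8, 4)
```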
Thanks for the correction :)
I was thinking of the DAA instruction on the Z80 (old but reliable 8-bit CPU), the extra instruction that adjusts the result of an addition back into valid BCD, handling any carry the operation may cause.
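Roughly what ADD-then-DAA achieves can be modelled in a few lines of Python (a simplified sketch of the decimal-adjust logic, ignoring the Z80's flag register):

```python
def bcd_add(a, b):
    """Add two packed-BCD bytes, mimicking a binary ADD followed by DAA."""
    s = a + b
    if (a & 0x0F) + (b & 0x0F) > 9:   # carry out of the low nibble: adjust by 6
        s += 0x06
    if s > 0x99:                       # carry out of the high nibble: adjust by 0x60
        s += 0x60
    return s & 0xFF                    # keep one byte; the carry flag would hold the rest

print(hex(bcd_add(0x19, 0x28)))   # 0x47  (19 + 28 = 47)
print(hex(bcd_add(0x90, 0x20)))   # 0x10  (90 + 20 = 110; the 100 is the carry)
```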
Wow, so I read this to mean that SpamAssassin has been blocking tons of mail containing 2010, maybe even in the body of the email, which might have included a BUNCH of legitimate mail, especially around Dec 2009.
Maybe a good time for both individuals and corporations to check their spam filters thoroughly.
Maybe the SpamAssassin group should also review those filter rules...
The SpamAssassin rules triggered on the Date HEADER.
It was a regex along the lines of /20[1-9][0-9]-[0-9]-[0-3][0-9]/
The official patch simply changes the first part to /20[2-9][0-9]-/, effectively putting the problem off for ten years, but there is already a proper fix in the works.
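You can see the behaviour in a couple of lines of Python (using the poster's approximation of the rule above, not the exact SpamAssassin source):

```python
import re

# The poster's approximation of the broken rule, and the quick official fix:
broken = re.compile(r"20[1-9][0-9]-[0-9]-[0-3][0-9]")
patched = re.compile(r"20[2-9][0-9]-[0-9]-[0-3][0-9]")

print(bool(broken.search("2010-1-04")))    # True  -- 2010 flagged as far-future
print(bool(patched.search("2010-1-04")))   # False -- fixed, until...
print(bool(patched.search("2020-1-04")))   # True  -- ...2020, when it breaks again
```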
Those of us who actually run mail servers are smart enough to notice this start triggering through the haze of new years day and fixed it ourselves.
because time is a concept, and it can be conceptualised in different ways.
Really we should all start using dates in the format 2010-12-31 to stop some confusion, but confusion happens all over, as most date functions are in libraries instead of in the core of the programming language or system. So you do end up with a lot of rogue code, that is often hand rolled and not specified when it comes to dates and times.
The number of people who cannot tell the time is staggering. "01-02-10" What date is that? Without context there's no way to know. The format should be YYYY-MM-DD. But that doesn't work in America as the Yanks are so staggeringly dumb they will assume it to be YYYY-DD-MM (I am not kidding, we tested it out).
Of course, even if you do that you still can't tell the time. Any time without a time zone is invalid. End of discussion. So the full format should be "YYYY-MM-DD hh:mm:ss Z". It's more data, but at least you can be bloody sure what the hell the time is.
When it comes to UI, display it any way you want; it is merely a view on to the internal data.
A datetime without a time zone is like Unicode data without a known encoding: utterly useless and probably going to lead to trouble.
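In Python, for instance, producing that unambiguous form is a one-liner (illustrative; any language with a zone-aware datetime type works the same way):

```python
from datetime import datetime, timezone

# A zone-aware timestamp in "YYYY-MM-DD hh:mm:ss Z" form -- no guessing required
t = datetime(2010, 1, 2, 13, 45, 0, tzinfo=timezone.utc)
print(t.strftime("%Y-%m-%d %H:%M:%S %z"))   # 2010-01-02 13:45:00 +0000
```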
>>>"01-02-10" What date is that? Without context there's no way to know. The format should be YYYY-MM-DD. But that doesn't work in America as the Yanks are so staggeringly dumb they will assume it to be YYYY-DD-MM (I am not kidding, we tested it out).
Not tested very well, apparently. Every Yank I work with here would read that as Jan 2 2010 - they work on the MM-DD-YYYY system.
And you just proved his point. First, the date he gave was in a format unknown to us; it could have been DD-MM-YY, MM-DD-YY, YY-MM-DD or YY-DD-MM. He then says the date format used by everyone should be YYYY-MM-DD, but due to the illogical way Americans write dates, they would assume it was actually YYYY-DD-MM. Note that you read the date he provided as MM-DD-YY, whereas if he had actually been writing a date it would most probably have been DD-MM-YY, since it wouldn't have been in the American format. And again, if it had been in the format he thinks should be used, you and everyone you asked would still have been wrong.
This date bug seems to be causing more problems than the original supposed millennium bug, which never really manifested...
"This date bug seems to be causing more problems than the original supposed millennium bug which ***never really manifested***..."
*** never really was reported to have manifested ***
People so easily forget that no company that wants to be respected would have reported any issues with the transition to year 2000 after all the time and money they spent trying to correct things.
An NDA and personal ethics prevent me from disclosing what broke with my then-employer, a Dow Jones-listed IT company.
Oh, or that anything broke
So the mainframe of the Berlin (Germany) fire department never really went down, leaving the department unable to respond to calls on New Year's Day 2000 -- to name only one of the more prominently publicized cases... I remember the flurry of genuinely necessary fixes at the company I worked for at the time, even to applications one would not normally think vulnerable to a date-related bug, like Adobe Photoshop. And yes, the one machine we overlooked (there always is one...) did indeed run into trouble when it was re-activated from storage halfway through the year.
It 'never really manifested' because it was expected and a huge amount of hard work went on in the background to fix it first (believe me, I know!). This one wasn't expected. Think what Y2K would have been like if no preemptive work had been done.
Ah, roll on 2038...
First thing I saw go titsup for Y2K issues was in October 1991(!)
The reason why just about everything did not go titsup on the glorious day was 'cos some of us had spent a shitload of effort making sure it bloody wouldn't.
The words "thankless task" spring to mind.
It also looks like some people took the lazy-man's approach to fixing the Y2K problem and didn't actually increase the year to 4 digits.
I doubt laziness had anything to do with it. As I recall from work I did prior to Y2K, management would have been told the cost of extending year fields from 2 bytes to 4 (database redefines, API changes, etc.) and would have said "Isn't there something we can do more quickly/cheaply?" Hence packed years relative to 1900 and variations on that theme: push the problem into the long grass!
Teach me to read El Reg before sitting down to work
Just fixed my SA box before reading the story that told me how to do it.
Oh well been meaning to learn to fiddle with the damn thing anyway.
So far we've got:
Banks (still waiting for feedback)
Symantec SEP (still waiting for feedback)
Kaspersky (still wibbling and wobbling about this one)
SpamAssassin (quick and easy to fix)
What will be next? Hopefully not airliners or some thermonuclear devices... or RotM stuff happening... is the Montana bunker standing by?
Time to bugger off somewhere else quick quick.
To reproduce the Y2K bug so soon after the original problem and fearmongering over 2000
And this is another reason for not obsoleting cash and cheques quite yet.
In house written software developed over years -> millions of lines of code.
Manager thinks "Let's re-develop with 4 digit dates."
Works out how many files involved (lots)
Realises *actual* size of code base
Works out how many staff hours to re-write.
Works out how many staff hours to re-test.
Realises this gives *very* little benefit.
Decides they can live with it after all.
Y2K was not a *major* problem because a *lot* of people busted their collective asses to make sure it wasn't.
If you assume the problem was with the card and not the servers certain wise assumptions on the part of the programmer/specification writer could have caused this.
As little as 12 years ago, smartcard developers writing banking OSs found themselves with ICs with a few KB of ROM, 2 KB of EEPROM and 256 bytes of RAM. Code had to be written in assembler to fit, and was often squeezed in with only 1 or 2 bytes to spare. It is easy to imagine that a programmer decided to avoid the Y2K bug by coding the year as the actual year minus 10 in one EEPROM byte. Due to the constantly increasing sophistication of attacks on ICs, both the hardware and software on banking cards are always being updated. Who would have thought they would still be using the same ROM or the same EEPROM data storage format 12-15 years later?
I don't really know the cause of the issue but I suspect that if you go back to the time the code was programmed the decisions made were perfectly reasonable.
Whether the last 2 digits of the year are encoded as BCD or a normal number it will still take up the same amount of storage - one byte.
The code must have been written since the year 2000, as the only years it works for are those ending in 00 to 09; it will not work for any other year.
It is over 15 years since I last looked at the specs of an EPOS protocol (it was tritel 3, used by the Swiss credit card companies), but they only used 2 digits for the year and provided date ranges that you used to determine whether it was 1900 + digits or 2000 + digits. The year 99 would obviously fall into the 1900+ range, so if this code had been in use then, it would have seen the date as 2053 (1900 + 153, since BCD 0x99 read as binary is 153) instead of 1999, and this bug would have been sorted then.
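That kind of pivot-year windowing is easy to sketch in Python (the cutoff below is hypothetical; the real protocol supplied the actual ranges):

```python
PIVOT = 50  # hypothetical cutoff: two-digit years below 50 are 20xx, the rest 19xx

def expand_year(yy):
    """Expand a two-digit year into a full year using a pivot window."""
    return 2000 + yy if yy < PIVOT else 1900 + yy

print(expand_year(99))    # 1999
print(expand_year(10))    # 2010
# A BCD byte 0x99 misread as binary gives 153, which no window can rescue:
print(expand_year(153))   # 2053 -- the bogus date described above
```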
Is this the same bug that has shot Bank of Queensland in the foot so recently?
If so, then booyah, take that, rest of the world - we Australians always come first - including in cocking up!
Wonderful, just what we needed. Millions of angry Germans trashing their cards and going back to paper money.
This problem is a lot more serious than you might think. Most Germans do not trust electronic cards at all. I know, I live next to the border and I can promise you that once you've driven 30 kilometers into that country you can forget about using credit cards - nobody takes them. It's paper money all the way.
There is a little supermarket not 5 kms from where I live that only started accepting credit cards last year.
I wonder how much longer it will take to be able to use my Visa in Freiburg for something else than getting money from an ATM.
I live in Germany and I actually find that my German friends use their cards more often than I do. Although I live in Northern Germany, which has always been that bit more progressive than the south, so maybe that's it...
When it comes to Visa/Mastercard, I agree, the Germans barely know what one is. For example, neither of the two main electronics stores, Media Markt and Saturn, where you buy plasma TVs etc., accepts a credit card! Try opening a shop in the UK or USA where you sell goods costing thousands of Euros/Pounds/Dollars but don't accept payment by credit card.