"Earth Orientation Centre of the International Earth Rotation Service"
Brilliant! Where do I send my CV?
Time Lords at the Earth Orientation Center of the International Earth Rotation Service have decided we need an extra second in 2015, thanks to slowing in the Earth's rotation. Notice of the extra second says it will be slotted in on June 30th, when the clock will tick over from 23:59:59 to 23:59:60 before ticking over again to …
Well if we are going to be pedantic about this, the tilt is actually about 23° 26′ 21.448″ − 4680.93″ T − 1.55″ T^2 + 1999.25″ T^3 − 51.38″ T^4 − 249.67″ T^5 − 39.05″ T^6 + 7.12″ T^7 + 27.87″ T^8 + 5.79″ T^9 + 2.45″ T^10 where here T is multiples of 10,000 Julian years from J2000.0, but that's still an approximation (apparently).
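For anyone who wants to check the pedantry, that series (Laskar's long-term expression for the mean obliquity) is easy enough to evaluate; a quick sketch in Python, with T in units of 10,000 Julian years from J2000.0:

```python
# Laskar's long-term series for the mean obliquity of the ecliptic.
# Coefficients are in arcseconds per T^n; T is in units of 10,000
# Julian years from J2000.0 (so T = 0 is the year 2000).

COEFFS = [0.0, -4680.93, -1.55, 1999.25, -51.38, -249.67,
          -39.05, 7.12, 27.87, 5.79, 2.45]

BASE_ARCSEC = 23 * 3600 + 26 * 60 + 21.448  # 23deg 26' 21.448" in arcseconds

def obliquity_deg(T):
    """Mean obliquity in degrees for T in 10,000 Julian years from J2000.0."""
    arcsec = BASE_ARCSEC + sum(c * T**n for n, c in enumerate(COEFFS))
    return arcsec / 3600.0

print(obliquity_deg(0.0))  # ~23.439 degrees at J2000.0
```

Good for beer-spillage precision and then some.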
For my personal requirements this level of precision is not necessary; a spill rate of less than three mouthfuls of beer per pint will be satisfactory.
The last time this happened, the Amadeus airline booking system, Hadoop, and Linux servers around the world struck trouble, probably because they weren't set up to cope with the extra second.
Rather, because nobody can be bothered to check their assumptions, read the specs, think things through (true?), then test before shitting code into production.
I'm not entirely clear on why this should be a problem for ordinary computers.
Some central time reference goes:
2015 June 30, 23h 59m 59s
2015 June 30, 23h 59m 60s
2015 July 1, 0h 0m 0s
My computer goes:
2015 June 30, 23h 59m 59s
2015 July 1, 0h 0m 0s
2015 July 1, 0h 0m 1s
Sometime later the NTP update on my system notices that the clock is running 1 second fast, and gradually adjusts, as normal. Unless it happens to be referencing the external service at exactly midnight (easy to avoid), where's the problem?
> where's the problem?
The problem is that NTP doesn't work like that. It proactively notifies the client of an upcoming leap second, and the client should attempt to deal with it by inserting or deleting an extra second, not by just discovering the clock is wrong by a second at some point afterward.
...but if the client doesn't deal with it like that, then the clock is simply wrong until NTP corrects it using the normal mechanism.
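For reference, the advance warning lives in the top two bits of the very first byte of an NTP packet: the leap indicator (0 = no warning, 1 = last minute of the day has 61 seconds, 2 = 59 seconds, 3 = clock unsynchronised). A minimal sketch of unpacking that byte (the example value is made up for illustration, not captured traffic):

```python
# The first byte of an NTPv4 packet packs three fields:
#   bits 7-6: leap indicator (LI)
#   bits 5-3: protocol version
#   bits 2-0: mode (4 = server response)

def unpack_ntp_first_byte(b):
    li = (b >> 6) & 0b11
    version = (b >> 3) & 0b111
    mode = b & 0b111
    return li, version, mode

# LI=1 (a second will be inserted), version 4, mode 4 (server):
li, version, mode = unpack_ntp_first_byte(0b01_100_100)
print(li, version, mode)  # 1 4 4
```

A client that ignores LI is exactly the "discover it afterward" case described above.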
In normal use computer clocks drift and will regularly get one-second corrections. Things like Amadeus may well be second-sensitive, with carefully synchronised real-time clocks everywhere, but for most systems corrections like this happen frequently, and probably by larger amounts.
Yes, if your software doesn't even try to handle leap seconds then it will just notice the discrepancy after midnight and correct it in the usual way.
However, if your software tries to do something clever with leap seconds, but was never properly tested, then it might segfault or lock up or anything. I think that's what happened to some Linux systems in 2012, when the kernel's own leap-second handling triggered lockups and spinning CPUs.
This is another reason why a good order to do things is: first, implement the tests; second, test the tests; third, implement the thing you're testing. That way you don't accidentally end up with untested code in a production system.
Instead of adding an extra second, why not make the seconds (at the internet reference clock) happen more slowly for a short period of time - say 1/30 of a second slower for 30 seconds before midnight - then the internet time will have been adjusted to the 'correct' time?
Not sure about this: if my machine has been off for a while and I turn it on, the clock corrects much faster than your statement would imply.
There's a threshold value (128 ms by default in ntpd, IIRC); if the clock is out by more than that, there'll be a step change to fix it rather than a gradual slew.
Similarly we get an hour change twice a year which also happens nice and quickly.
That's a presentation change, the actual clock doesn't change (well, shouldn't change), there's just a DST flag which causes an hour to be added, or not, to the returned value.
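Easy to demonstrate: take two UTC instants either side of a DST change and the underlying timestamps tick on uniformly while only the local presentation jumps. A quick sketch using Python's zoneinfo, with the 2015 UK spring change as the example:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

london = ZoneInfo("Europe/London")

# One UTC instant just before the 29 March 2015 change, one just after.
before = datetime(2015, 3, 29, 0, 30, tzinfo=timezone.utc)
after = before + timedelta(hours=1)

# The actual clock just advances 3600 uniform seconds...
print((after - before).total_seconds())       # 3600.0
# ...but the local presentation flips from GMT to BST:
print(before.astimezone(london).utcoffset())  # 0:00:00
print(after.astimezone(london).utcoffset())   # 1:00:00
```

So the "hour change" never touches the clock itself, only the conversion to wall time.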
That only works if you have the same number of people pulling the west side of the building at exactly the same moment. Otherwise you compress/stretch the building so that any servers lined up in an E/W orientation will have a variable time differential. Indeed, they might even 'bounce' back and forth across second boundaries causing great confusion.
However, really astute designers would have thought of this and put all the servers on an N/S line.
"Google, typically, does things its own way by stretching seconds rather than inserting an extra one, suggesting this approach works because it's hard to log events that take place on an inserted second"
So I presume in a leap year Google stretches days because it's hard to log events that take place on an inserted day.
When working on the banking systems a few years back, most of the dates were calculated on a Julian-esque fashion, so Christmas day 2015 will be 115359 (adding a digit at the beginning was the 'fix' for Y2K issues so year 115, day 359) whilst Christmas day 2016 will be 116360 due to the extra day earlier in the year.
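In case the layout isn't obvious: that's the old mainframe-style "Julian" date YYDDD (years since 1900, then day of year), with the extra leading digit coming free once the year count passed 99. A quick sketch of the conversion (the function name is mine, not the bank's):

```python
from datetime import date

def julianesque(d):
    """Mainframe-style date: years since 1900, then day-of-year (DDD)."""
    return (d.year - 1900) * 1000 + d.timetuple().tm_yday

print(julianesque(date(2015, 12, 25)))  # 115359
print(julianesque(date(2016, 12, 25)))  # 116360 (leap day pushes it one on)
```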
It really started to get complicated when they were trying to calculate daily interest from an annual rate and someone decided that the way the contracts were written meant that we couldn't work it out on 365 but had to work out if the period in consideration was in a leap year...
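That argument is the classic Actual/365 versus Actual/Actual day-count question: divide the annual rate by a fixed 365, or by the actual length of the year in question. A sketch of the difference (the 5% rate and the dates are invented for illustration):

```python
import calendar
from datetime import date

ANNUAL_RATE = 0.05  # 5% nominal annual rate (made-up figure)

def daily_rate_act365(d):
    """Actual/365: fixed denominator, leap year or not."""
    return ANNUAL_RATE / 365

def daily_rate_act_act(d):
    """Actual/Actual: denominator is the real length of that year."""
    days_in_year = 366 if calendar.isleap(d.year) else 365
    return ANNUAL_RATE / days_in_year

# Same nominal rate, slightly less interest per day in a leap year:
print(daily_rate_act365(date(2016, 2, 29)))   # 0.000136986...
print(daily_rate_act_act(date(2016, 2, 29)))  # 0.000136612...
```

Tiny per day, but across a big enough loan book it's the sort of gap that gets lawyers (and liquid refreshment) involved.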
Still, all good fun, and usually best sorted with the aid of liquid refreshment!
We're already off by several hours each year due to our orbit not being an exact number of days anyway. The point of the time servers was to prevent machines from bugging out when there is a difference in time between systems; futzing about to be pedantic like this only makes things worse.
I wouldn't mind if they added extra seconds each day to compensate enough to get rid of leap years, but this feels like re-arranging deck chairs on the Titanic: a pointless task that is only going to get in the way of people doing something useful.
Biting the hand that feeds IT © 1998–2019