Peter Whibberley, senior research scientist at the NPL
It's all Whibberley Whobberley, Timey Wimey...
Most people are over 2016 - although god knows what next year has in store. But unfortunately they'll have to endure it a bit longer: one second longer, to be precise. This year the National Physical Laboratory (NPL) will insert the leap second before midnight, in order to keep the timescale based on atomic clocks in sync with …
Not allowed to vote yet but an absolutely brilliant comment.
For those not aware, the statement is part of a line in the Dr Who episode "Blink" - one of my favourites, featuring the Weeping Angels - in which the Doctor (David Tennant) gets trapped in 1969. When trying to explain how time works to Carey Mulligan he struggles and is reduced to saying
"People don't understand time... it's not what you think it is - it's complicated, very complicated.
People assume that time is a straight progression from cause to effect, but actually, from a non linear, non subjective viewpoint, it's more like a big ball of wibbly-wobbly, timey-wimey stuff."
Absolutely brilliant - and another virtual up vote from me.
Leap seconds are not a big issue and should not be a problem for any system designer or developer who is paying attention to what they are doing.
The C tm struct, used when converting between UNIX time (seconds since 1/1/1970) and human-readable time, has defined its seconds field as holding a value in the range 0-61 since at least 1996, thus allowing for leap seconds - so nobody can complain that this is a new issue that they don't or won't understand. Similarly NTP, the Internet time protocol, has always handled leap seconds.
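As a quick illustration (Python here rather than C, but its struct_time mirrors the C tm struct field-for-field): the standard library will happily parse a seconds value of 60, precisely because leap seconds are part of the spec:

```python
import time

# strptime accepts seconds up to 61, mirroring the C tm struct's
# documented range, specifically so leap-second timestamps like
# 23:59:60 can be represented.
leap = time.strptime("2016-12-31 23:59:60", "%Y-%m-%d %H:%M:%S")
print(leap.tm_sec)  # 60
```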
Any OS worth the name will have its own implementation of UNIX's ntpd (the NTP daemon) and should also provide time-manipulation functions equivalent to the POSIX set as part of its supported languages' standard libraries.
All this means that there should be no need for applications code to handle leap seconds. Ever.
Hopefully no one rolls their own time functions these days. The problem is when you use the time. You might find things happening twice, or two events having the same timestamp or causality appearing to be broken (if time went backwards briefly). Depending on your system this might be "super bad".
I think the crux of it is you tend to assume time goes inexorably forward. It doesn't do that with a positive leap second (if you convert out of UTC).
This. We've got very good at handling these now. In fact a lot of installations don't bother to do the 61st second any more. Rather than risk bad application code screwing up with the unexpected second, the underlying system applies a smudge factor to all the seconds around the leap second, spreading the impact over minutes or even hours. No unexpected second for badly written software to screw up with but you stay in sync just as effectively. Quite a graceful solution, all told.
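For the curious, the smudge works roughly like this (a simplified sketch of a linear 24-hour smear, similar in spirit to what the big NTP installations do - the actual window length and smear shape vary by implementation):

```python
def smeared_offset(seconds_to_leap, window=86400.0):
    """Fraction of a positive leap second already applied, when the
    extra second is smeared linearly over `window` seconds ending at
    the leap. 0.0 well before the leap, 1.0 at and after it."""
    remaining = max(0.0, min(seconds_to_leap, window))
    return 1.0 - remaining / window

# Halfway through the window, clocks run half a second behind UTC:
print(smeared_offset(43200.0))  # 0.5
```

No client ever sees a 61-second minute; the whole second is absorbed as a tiny frequency error spread over the window.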
So the conversion routines will now need to know about and account for the new leap second? And all those old Unixes unaware of it will be off one second?
You are wrong on both counts. Firstly you have to understand the various concepts of "time" that are in use, and your question suggests you don't. We have:
Calendar time: this is what time_t and similar operate upon, and what most people think of. Here each day ALWAYS has 60*60*24 = 86400 seconds, and the date calculation is based upon the Gregorian calendar for leap years. The application of leap seconds has absolutely no impact upon such calculations; in effect it is just a step adjustment of time-of-day to keep it within 1 second of mean solar time as defined by the Earth's rotation and orbit. The difference between two points in time is computed ignoring leap seconds, so it is actually "wrong" if you need an accurate time difference across such an event.
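You can see the "every day has exactly 86400 seconds" rule directly (a Python sketch; calendar.timegm is the inverse of gmtime and does pure calendar arithmetic with no leap-second table):

```python
import calendar

# In POSIX calendar arithmetic the leap second 23:59:60 maps onto
# the very same time_t value as 00:00:00 the next day - the day is
# exactly 86400 seconds long no matter what UTC actually did.
leap = calendar.timegm((2016, 12, 31, 23, 59, 60, 0, 0, 0))
midnight = calendar.timegm((2017, 1, 1, 0, 0, 0, 0, 0, 0))
print(leap == midnight)  # True
```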
Ephemeris time and all of its variants (GPS time, TDT, etc), where you have some defined epoch and time is measured in fixed seconds based on atomic time from that point. Each such system has no leap seconds and no discontinuities, so time differences are always right. However, to equate such a linear time to calendar time you do need to know the history of leap seconds, and for that you need a table of data. For web-connected machines you can get it from here, along with the finer details of the Earth's orientation:
http://www.usno.navy.mil/USNO/earth-orientation/eo-products
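To make the "table of data" point concrete, here is a sketch of converting GPS time to UTC (Python; the two non-zero table entries shown are the real 2015 and 2017 leap events, but the full table must come from the USNO/IERS data above):

```python
import bisect
import datetime

GPS_EPOCH = datetime.datetime(1980, 1, 6)

# (GPS seconds at which the offset takes effect, GPS - UTC in seconds).
# Abbreviated: the real table holds every leap second since 1980.
LEAP_TABLE = [(0, 0), (1119744017, 17), (1167264018, 18)]

def gps_to_utc(gps_seconds):
    """Convert seconds on the GPS timescale (which has no leap
    seconds) to UTC by subtracting the accumulated GPS-UTC offset."""
    i = bisect.bisect_right([t for t, _ in LEAP_TABLE], gps_seconds) - 1
    offset = LEAP_TABLE[i][1]
    return GPS_EPOCH + datetime.timedelta(seconds=gps_seconds - offset)

# GPS second 1167264018 is the 2017 leap-second boundary:
print(gps_to_utc(1167264018))  # 2017-01-01 00:00:00
```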
Finally, this is exactly the same for any OS; it is just that historically UNIX has handled time in a sane and correct manner (e.g. system clock on UTC, NTP slewing time normally to avoid steps and to minimise the error w.r.t. several time servers, NTP signalling leap seconds before they occur, etc) even if code monkeys sometimes get it wrong. However, Windows has had pretty poor ways of doing things (e.g. CMOS clock keeping local daylight-adjusted time, time steps once per week by default based on just the MS time server to keep the clock within minutes of correct time, etc).
"Finally, this is exactly the same for any OS; it is just that historically UNIX has handled time in a sane and correct manner (e.g. system clock on UTC, NTP slewing time normally to avoid steps and to minimise the error w.r.t. several time servers, NTP signalling leap seconds before they occur, etc) even if code monkeys sometimes get it wrong. However, Windows has had pretty poor ways of doing things (e.g. CMOS clock keeping local daylight-adjusted time, time steps once per week by default based on just the MS time server to keep the clock within minutes of correct time, etc)."
That comes from the MS-DOS days, when computers were completely standalone and had nothing but internal reckoning to work with, so they used local time per the KISS principle. IIRC, NT-based OSes internally use UTC now. Windows just doesn't set the RTC to UTC because it doesn't really have to; the timezone is set during setup and it can keep track from there.
But still, there are applications that require BOTH calendar and ephemeris time to be accurate and precise, putting them in a bit of a bind.
"historically UNIX has handled time in a sane and correct manner ...However windows has had pretty poor ways of doing things"
What you forgot was that the A/C obviously comes from the Windows world where everything from Redmond, BSODs & all, is perfect so the Unix way must be wrong.
"I'll just slip in an extra second. What could possibly go wrpossibly go wrpossibly go wrpossibly go wrpossibly go wrpossibly go wrpossibly go wrpossibly go wrpossibly go wrpossibly go wrpossibly go wrpossibly go wrpossibly go wrpossibly go wrpossibly go wrong?
ong?
possibly go wr
ERR: 31 Dec 2016 23:59:60 - Temporal buffer underflow - Rebooting Universe...
Have you been at that Terry Pratchett again? I'm re-reading Hogfather, and I seem to remember that +++Divide By Cucumber Error. Please Reinstall Universe And Reboot+++ is from that, isn't it? Although I haven't got that far in the book this time round. (Ducks under table to avoid incoming flames for getting the book wrong.)
Here we are only allowed to set off firework(bomb)s between 18:00 CET 2016-12-31 and 02:00 CET 2017-01-01. Not that anybody is paying any attention to that, but the extra second may be quite handy.
BTW, scheduling the transition from 'wintertime' to 'summertime' and vice versa has been a problem in broadcasting ever since transmissions went 24/7.
I work in the field of TV and I can honestly say I've never had a problem with wintertime/summertime changes.
On a short day you just need to schedule 23 hours of programmes and have software that is aware of the short day.
On a long day you just need to schedule 25 hours of programmes and have software that is aware of the long day.
In the last 20+ years I've been on duty for most of the clock changes and I've had my feet up drinking tea.
Buy better software my friend, there is stuff out there that works...
A) I've seen what can happen with 8 year out of date BSD.
B) NTP, and "drift" is the solution where you have (sadly out of date OS)
For a couple of years we had a SWAT team floating around for leap second events, *just* in case. Lately, though, we know where the hiccups are and we have notifications designed to make people pay attention. There is one DB that (still) gets halted, since it *still* does stupid things when the slew starts underneath it. I'll not talk about certain networking OS companies that still haven't fixed their clustering software to cope with slew/skew and the inline adjustments they implemented in other parts of the OS...
"they have to be manually programmed into computers and getting them wrong can cause loss of synchronisation"
Shit, even I would have worked out the wrinkles in any automation code to do that - even if I had to do it at whichever midnight on New Year's Eve was appropriate. Surely in 2017/2016/2017 no-one runs anything serious that relies on a manual time change?
Shirley?
"Surely in 2017/2016/2017 no-one runs anything serious that relies on a manual time change?"
Well, there are a couple of ordinary analogue clocks here. I'm going to have to adjust those manually.
But being serious, it seems to me that inserting the extra second before midnight - hence your 2017/2016/2017 point - is not optimum, precisely because it could affect multiple points in the date and time in any software that reads the date and time.
Surely the more logical thing would be to insert the extra second 59 seconds earlier, so that the change only affects one component of the time when looked at as a string.
Well, there are a couple of ordinary analogue clocks here. I'm going to have to adjust those manually.
But being serious...
That's perfectly serious for me. I make a point of keeping my wristwatch within one second of the correct time so there is never any debate about if something is late etc. It's easy enough to maintain in practice since it naturally gains at a known rate of ~200ms per week. I do need to specifically account for leap seconds though.
No, they only need to be programmed into stand-alone computers; anything using NTP gets the update automatically, as NTP announces the pending leap second for 24 hours before it happens.
Similarly if you get time from GPS it has a field that tells you of the coming leap second for days, maybe months, before it is due. Assuming of course you don't have some shitty GPS receiver that hides the information from you because the firmware monkeys just don't understand it...
Google have released their smearing NTP servers to the public, for people who don't want to deal with 61-second minutes or repeated seconds.
Their blog has a nice write up of the problem and why they solved it that way.
https://cloudplatform.googleblog.com/2016/11/making-every-leap-second-count-with-our-new-public-NTP-servers.html
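If you want to try them, it's just a matter of pointing your NTP client at Google's pool - e.g. in ntp.conf (and, per Google's own advice, don't mix smeared and non-smeared servers in the same config, or your client will see them disagreeing around the leap):

```
server time1.google.com iburst
server time2.google.com iburst
server time3.google.com iburst
server time4.google.com iburst
```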
All very well if you have shit software to manage, but it means they have the wrong time for most of that day. Now they might not care, you might not care, but there are many cases when you need to know the right time to millisecond or better accuracy.
Ironically, folk who program for Windows have learned to be tolerant of time jumps, because typically their clocks are updated once per week or so by SNTP, which applies a time step. Whereas UNIX/Linux has an OS that handles it properly (except when someone changes code and does not test it), but many code monkeys never test/debug their code against time steps because they don't see them often.
The correction won't be "just before midnight" at local time here. And in China, it won't even be a public holiday.
Which leads me to wonder if there is any advantage to having a time correction at midnight on a public holiday, for anybody?
"Which leads me to wonder if there is any advantage to having a time correction at midnight on a public holiday, for anybody?"
Probably not. I think it's a historical assumption that much of the world won't be at work, their computers will be off, and it's less likely to affect them. Except those are the people least likely to be affected anyway. The sort of systems where precise time is important are likely to still be running and doing whatever job they're supposed to be doing, holiday or not. Also, when they chose the date, the countries with the most computers pretty much all used the standard western calendar.
'However, some experts want to end the practice of coupling of Universal Co-ordinated Time (UTC) with astronomical time – the practice that gave us the concept of leap seconds.'
Which seems to be adapting humans to computers' whims rather than the other way round. Which wasn't the future I was sold.
Okay it was the one in the Terminator films. Presumably Skynet couldn't cope with Leap Seconds and threw a fit.
My current place of work said "we pay you hourly, but don't account for the hour change from winter/summer time... but that's ok, it will all work out in the long run".
Well, no, not if you work one and not the other. I've kept honest and avoided both... but I could make a small profit booking holidays at the right time. :P
“Because leap seconds are only introduced sporadically, they have to be manually programmed into computers..."
It means that we still have one weapon against the Rise Of The Machines. Let's keep it that way folks!
(and anyway, with the replacement of analogue TV with digital, the New Year "Bongs" are delayed by a second or so because "codecs")
I think that's because, whilst the length of the second is fixed and the length of the day that we use in everyday life is a fixed number of seconds, the Earth now spins somewhat more slowly than when those terms were defined. So even if the Earth's rotation stopped getting any slower all of a sudden, we'd still need leap seconds every now and then - just not quite as often.
It's amazing to see how people think when it comes to time... and how they believe computers handle it. Even more, when I see people think that the only operating systems are Linux and Microsoft.
Just because a new minute or new hour goes by doesn't mean the extra second of time put into place goes away... it's there, forever. Computers don't see time in hours, minutes and seconds; they typically see everything in milliseconds, and then people write code to translate it so it's easier for users to understand.
Time is linear, not circular. You have to go back to the simple math days and think of one long (infinite) math line marked off in milliseconds. When you tell a computer to go back to a certain point in time, it doesn't automatically know where to go. It subtracts a certain amount of time from the current time.
Insert a full second into this time line and it's like having two 4s, two 12s, two 15s, etc. in there. Count back 8 ticks from 12 and, if one of the numbers in between occurs twice, you don't land at 4 - you land at 5. As far as the labels are concerned, 12 - 8 = 5. The logic breaks things.
It's not a simple thing to code, because the math only hiccups if it crosses the point where you inserted the extra numbers (time). And you can't just 'highlight' or point at this marker... computers don't work this way.
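Concretely (a Python sketch using timestamps either side of the 2016 leap second): two instants that are really two SI seconds apart come out one second apart, because POSIX time has no slot for 23:59:60:

```python
import calendar

# 23:59:59 UTC and the following midnight are really 2 SI seconds
# apart (23:59:60 sits between them), but POSIX arithmetic says 1,
# because the leap second shares a time_t value with midnight.
before = calendar.timegm((2016, 12, 31, 23, 59, 59, 0, 0, 0))
after = calendar.timegm((2017, 1, 1, 0, 0, 0, 0, 0, 0))
print(after - before)  # 1
```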
The point is exacerbated by those who think you can insert the second slowly, a millisecond at a time, into the time line. Think about it.
Those who think there isn't anything that marks time so precisely: I beg to differ. Many money transactions are stamped with the time the money is requested, transferred, processed etc., especially when these transactions are done across many different computer systems. They are marked by time precisely so they can be reconciled. This is done because it's possible you shop at one spot while your order at Amazon is processed and your spouse uses the same bank card/credit card at another location - all for different amounts, all at nearly the same time, at different locations, on different computer systems, yet using the same bank account.
Yeah, so... those who think they know computer systems... just realize why some people go to school for 4+ years to earn a compE and others pick up an MCSE class for 6 months.
You might be a master of one operating system, but in the larger world of computer systems, you've only graduated the 2nd grade.
"Yeah, so... those who think they know computer systems... just realize why some people go to school for 4+ years to earn a compE and others pick up an MCSE class for 6 months.
You might be a master of one operating system, but in the larger world of computer systems, you've only graduated the 2nd grade."
For systems that require BOTH accurate AND precise time measurements (a better example would be HFT, where competing offers can be microseconds apart but only one can close the trade), those will usually have their own internal time systems if they're THAT dependent on time. They'll also usually be positioned physically close to the exchanges, because at those speeds the speed of light/electricity starts to become a factor.