Google has to lie to computers in order not to upset them with the vagaries of earthly time. The text-ads colossus has just written a blog post on how it adjusts its computer systems to deal with leap seconds – by lying to them a bit with a technique it calls "leap smear", where the search giant adds a few milliseconds bit by …
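The smear itself is simple arithmetic: spread the extra second over a window so no single clock read ever jumps. A minimal sketch (the window length and cosine ramp here are illustrative choices, not necessarily Google's published parameters):

```python
import math

def smear_offset(t, window):
    """Fraction of the leap second already absorbed at time t
    (seconds since the start of the smear window): a cosine ramp
    that is 0 at the start, 1 at the end, and smooth in between."""
    if t <= 0:
        return 0.0
    if t >= window:
        return 1.0
    return (1.0 - math.cos(math.pi * t / window)) / 2.0

window = 20 * 3600  # hypothetical: smear over the 20 hours before the leap
print(smear_offset(window / 2, window))  # ~0.5: half the second absorbed
```

Each NTP response during the window reports true time plus `smear_offset(t, window)` seconds, so by midnight the whole leap second has been quietly paid out in sub-millisecond slices.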
"What can we do about that, Google? Eh? "
Please, please don't tempt them.
So much for SI units, which are supposed to avoid ghastly conversion factors.
I prefer astronomical time because I deal in time-of-day mostly. For me a second is a 1/86400 of a day however long that is. Sadly I don't know how long a second is till the day has already happened... :-) but it works OK for me so far. (Of course I'm lying, a second is really what my computer or telephone says that it is which is something more obscure but more useful).
If we think it is embarrassing when we mix up metric and imperial units on international engineering projects, wait till there is confusion between the CIPM 1967 second, the CIPM 1997 second, the CIPM 199 second and the CIPM 2036 second, etc. etc., each with its minuscule correction factors.
Get those mixed and you could end up with half your star-drive in the wrong parallel universe, which would be an embarrassing entrance in the trans-reality community! (But not unheard of).
"We can lie to the computer clock, but we can't lie to ourselves forever though. We're still little collections of carbon molecules spinning around on some rock in the abyss. What can we do about that, Google? Eh? Thought so."
I'm sure they are working on that.
But I'm the centre of my universe, and that's all that matters
It is a simple matter of allowing more than 60 seconds in a minute when there is a leap second in a year. Not that hard, when your time is an offset from a fixed date anyway.
The coding part is perhaps simple as long as you operate on "time from epoch" only. However, humans don't operate on time looking like "1313594349"; removing all conversions to seconds-in-a-minute, or fixing them to tolerate a 61st second, would be insanely hard, and convincing users that this is what UTC time actually looks like - next to impossible.
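The 61st-second point is easy to demonstrate: the standard conversion from seconds-since-epoch to a displayed time hard-codes 60-second minutes, so it can never emit 23:59:60. A minimal sketch:

```python
def epoch_to_hms(secs_since_midnight):
    """The conversion every library does: fixed 60-second minutes
    and 3600-second hours, no room for a leap second."""
    h, rem = divmod(secs_since_midnight, 3600)
    m, s = divmod(rem, 60)
    return h, m, s

print(epoch_to_hms(86399))  # (23, 59, 59) -- the last second this scheme can name
# a leap second would need (23, 59, 60), which divmod-by-60 cannot produce
```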
Besides, why would anyone bother if GMT and all local timezones do not have this extra second? Anonymous, since I don't want everyone to know I'm not sure about this part ;)
I saw evidence of this on my virtual machine server: it had a dodgy internal clock which needed moving about 0.2 seconds every few hours, and its logs showed it moved 1.2 seconds immediately after synchronising to an NTP server following a leap second.
The fact that doing something correctly is hard doesn't stop bodging it being a kludge. It is probably easiest, from a coding point of view, to design things which only need to synchronise occasionally to treat being out by plus or minus a second and a bit as not being a show-stopper.
If you do as suggested above, interpreting a set number of seconds since the zero-date as a human-readable time/date would then require you to reference a table of every leap-second prior to the time being requested from the system, updating the list every time a new leap-second is announced. This is because, unlike leap-days, there is no rigid formula for calculating when a leap-second will be required - it's chaotic.
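Such a lookup table is straightforward code but must be maintained by hand forever. A sketch using a deliberately partial table (a real implementation needs every IERS-announced leap second, and updates whenever a new one is declared):

```python
import bisect
from datetime import datetime, timezone

# Partial table: UTC instants after which the accumulated leap-second
# count grew by one (the three insertions effective 1 Jan 1999,
# 1 Jan 2006 and 1 Jan 2009). These dates cannot be computed in
# advance -- they come only from IERS announcements.
LEAP_DATES = [
    datetime(1999, 1, 1, tzinfo=timezone.utc),
    datetime(2006, 1, 1, tzinfo=timezone.utc),
    datetime(2009, 1, 1, tzinfo=timezone.utc),
]

def leaps_before(when):
    """Number of leap seconds from this table preceding `when`."""
    return bisect.bisect_right(LEAP_DATES, when)

print(leaps_before(datetime(2011, 8, 1, tzinfo=timezone.utc)))  # 3
```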
So Google "solve" the leap second problem by using clocks that are deliberately broken so that they aren't accurate enough to be affected. A 19th-century solution to a 21st-century problem.
But if they simply ignored the problem and all their servers found themselves running 1 second fast, wouldn't the mechanisms in the NTP protocol gradually eliminate the error anyway? Why do they feel the need to solve the problem before it exists?
(Oh, and "covering an orbit of 150 million kilometers (93 million miles) every year"? Radius, meet circumference.)
NTP works fine with this because it was designed by folk who know what they are doing.
The underlying problem is that computers (or more precisely their clocks and calendar libraries) *assume* time is always 24*60*60 seconds per day, so the time_t of UNIX (or the corresponding underlying linear system in Windows) has to be stepped by 1 sec when such an adjustment to UTC is made, and so calculations across the jump are wrong.
But the Earth day is not exactly this, and we have always had the convention that mid-day (for 0 longitude, etc) is mean solar crossing, so *something* has to be fudged.
It has been proposed not to correct UTC to be 'right' in an astronomical sense to get round this, because of the accumulated stupidity of computers. But why? Fix the computer's clocks or just get over it. No big deal, eh?
It's a shame we haven't got a link to the original blog in case it says that. But I got to thinking and wondered if maybe they are using timestamps to control the order of events in something like a loosely consistent database, and because of the sheer number of servers and events it really is critical to have microsecond accuracy of time... But this is sheer uninformed speculation.
It is an interesting point. Usually the two reasons are:
(1) You need to evaluate elapsed time across the discontinuity (or propagate a model's predictions) and can't deal with 1 sec error.
(2) Your code has time-wasting loops or other sequence-sensitive parts that are based on a clock update that is ASSUMED to be monotonic. This was one of the original reasons for NTP adjusting time by rate-compensating to avoid backwards clock time steps.
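The monotonic assumption in (2) is why elapsed-time measurements should come from a monotonic clock rather than wall-clock time; for example:

```python
import time

# Wall-clock time (time.time()) can be stepped backwards by NTP or a
# leap second, producing negative "elapsed" intervals. time.monotonic()
# is guaranteed never to go backwards, so it is the right source for
# measuring durations.
start = time.monotonic()
total = sum(range(100_000))  # stand-in for real work
elapsed = time.monotonic() - start
assert elapsed >= 0.0  # holds by definition for a monotonic clock
print(f"work took {elapsed:.6f} s")
```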
In Google's case it is more likely to be a database-type issue, which raises the question of why they did not use a GPS-like time base so it is accurate and consistent across such discontinuities.
Even then, time for DB ordering is questionable; why not just a transaction counter? If you have lots of transactions per second and rely on this for ordering on a distributed system, then the time delay/sync accuracy across the network will eventually be a limit on consistency.
Not that Google docs is consistent anyway... :(
"computers (or more precisely their clocks and calendar libraries) *assume* time is always 24*60*60 seconds per day"
... or more precisely their human masters. Computers deal just fine with seconds-from-epoch time, no need for "60 seconds per minute" thank you very much.
"We're still little collections of carbon molecules spinning around on some rock in the abyss. What can we do about that, Google? Eh? Thought so. ®"
I'm sure that at some point we'll all be stored in some sort of iMind where your personality lives faster and is eternal - the fleshy bits will just be sent to make Soylent Green for the Morlocks who tend the UPS arrays.
When that happens we won't need to worry about, or even be aware of things like sunrises or sunsets and we can have a proper universal time*
*where the universe is the size of the earth of course
If the Earth covered 150 million km in its attempt to orbit the sun, we'd be burnt to a crisp.
So basically it sounds like they use the Unix adjtime() function:
int adjtime(const struct timeval *delta, struct timeval *olddelta);
The adjtime() function gradually adjusts the system clock (as returned by gettimeofday()). The amount of time by which the clock is to be adjusted is specified in the structure pointed to by delta.
The only potential "novelty" seems to be, from the description, that rather than starting to adjust the time from the point where adjtime is executed, they instead specify a future time point at which the adjustment is to be completed.
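Working out the slew rate for a chosen completion deadline is just division. A sketch of that "finish by a fixed instant" variant (hypothetical helper, not part of the adjtime API):

```python
def slew_rate(offset, now, deadline):
    """Clock-rate correction (seconds of adjustment per elapsed second)
    so that an `offset`-second error is fully absorbed at `deadline`.
    Plain adjtime() starts slewing immediately at an OS-chosen rate;
    the described variant instead derives the rate from a target
    completion time."""
    remaining = deadline - now
    if remaining <= 0:
        raise ValueError("deadline already passed")
    return offset / remaining

# absorb one leap second over the final 10 hours of the day:
rate = slew_rate(1.0, now=0.0, deadline=10 * 3600)
print(rate)  # ~2.8e-05: each elapsed second the clock advances ~1.000028 s
```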
Have to agree with deshepherd - those clever computer box things have been doing this for ...well ....a long time.
NTP clients have been doing this forever, and according to my OpenBSD man page, adjtime was added to BSD 4.3, which dates it around June 1986. So the technique is at least 25 years old!!
"but as computer systems have become more complex, having a rogue extra second can cause a lot of trouble."
No, more accurate version is:
"as we become more time-dependent on computers, the fact programmers did not understand (or chose to ignore) the official time standards for the last 30 years becomes more apparent"
There are plenty of ways of dealing with this if it matters, either by working with an atomic time scale (as GPS uses, they have a UTC-GPS offset that steps so the underlying time is linear, not that different from the UNIX time-zone implementation) or by coding and testing stuff so an occasional jump of +/-1 second is no big deal.
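A sketch of that atomic-timescale approach: GPS time never leaps, so converting it to UTC is just a matter of applying the published offsets (TAI - GPS is fixed at 19 s; TAI - UTC steps at each leap second and stood at 34 s from January 2009):

```python
# Fixed relationships: GPS time matched UTC at its 1980 epoch and
# never leaps, so TAI - GPS = 19 s, always. TAI - UTC grows by one
# at each leap second.
TAI_MINUS_GPS = 19

def utc_from_gps(gps_seconds, tai_minus_utc):
    """Map a linear GPS-timescale count to UTC seconds by applying the
    published offsets; the GPS scale itself never jumps, so elapsed-time
    arithmetic done on it is always correct."""
    return gps_seconds + TAI_MINUS_GPS - tai_minus_utc

print(utc_from_gps(1_000_000, 34))  # 999985: GPS led UTC by 15 s in 2011
```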
Google's work-around is a reasonable band-aid to this, and it would make sense for NTP to have this as an option, however, it also needs to report the proper time for others to sync to so you don't get lied to by their machines.
At least, no more lied to than usual...
Some of us, need sub-second precision. Making the second longer or shorter breaks things. I'd say we need to fix the root of the problem, stuff that's already broken: software dealing with time written by people that don't know the difference between UTC, GMT and UT1, let alone leap seconds (Excel date anyone?).
NTP is SUPPOSED to be a constant. If a leap second needs to be added then it should be added at a specific point in time, not dicked around with during the day preceding the event. If Google's software can't cope with a leap second they should change their bloody software.
Now while we're on the subject of time..... We should change over fully to decimal time (and decimal degrees too).
What's the point in having 60 seconds in a minute, 60 minutes in an hour and 24 hours in a day? The answer is quite surprising... there is no point at all; it's just a unit system used by the Sumerians (or possibly someone else) because they used base 60 for currency, weights & time.
What's really annoying is that we have a seriously messy solution at the moment:-
We use 'Base 24' to measure a day, 'Base 60' to measure hours and minutes and 'Base 10' for centiseconds, milliseconds etc....
Before someone says 'but what about degrees & angles', well, what bloody idiot came up with that? The answer is no-one remembers!
rant over.... and google leave time the fuck alone.
Yes, proper option is for Google to fix their own software as it should be easy enough to cope with a 1 sec jump.
As for decimalising time, well Napoleon tried it and eventually gave up. The second is now fixed as an SI unit, so you have to choose from the prime factors of 86400 for your day's sub-divisions, and that limits what you can do.
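The prime-factor point is easy to verify with a quick factorisation:

```python
def prime_factors(n):
    """Trial-division factorisation; fine for small n."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

print(prime_factors(86400))  # {2: 7, 3: 3, 5: 2}
```

So 86400 = 2^7 x 3^3 x 5^2: any even sub-division of the day into SI seconds has to be built from those factors, which is why 24/60/60 works but a clean power of ten does not.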
Also what about the ~365.25 days per year? Ain't no way that can be rationalised base-10 while keeping a calendar that is in sync with the seasons.
We do know — you can blame it all on the Babylonians...
Why do we need to keep in sync?
Might be nice to have holidays that move around the seasons - then we can have a few years of skiing, then a few sunbathing....
I presume Farmers look at their fields and at the sky rather than the calendar when deciding what to do with their crops.
Who says we should have to put up with the SI second?
There is no good reason to sub divide a day into 86400 units. A mean sidereal day is actually ~86,164.09053 seconds long which is closer to a 'real' day.
What about the ~365.25 days per year? That's not even real unit of time (don't even get me started on leap years) it's just an approximation so people can have a 'birthday'.
The earth revolving around the sun is not a constant. Why should we forever have to change time to conform to something which isn't constant?
What happens when you're looking after a system that crosses timezones?
NTP only tells you the current UTC time.
However, there doesn't appear to be any protocol that can tell you the actual timezone you're currently in or announce a timezone change to a network.
So, how do you synchronise timezone changes on board a cruise liner?
A modern ship has a *lot* of systems on board that all need to know the current local time - there is at least one if not two in every cabin (Phone/alarm clock and TV), plus all the systems that run various aspects of the ship.
The current bastardisation that's being used for some IT systems is a pair of NTP servers - one server issuing actual UTC, the other issuing a 'UTC' that is actually the current local time.
Machines that don't need local time are tied to the 'real' NTP, machines that do need local time are tied to the 'fake' NTP.
It's hideous - it means that every timezone change takes approx. 2 hours to accomplish, during which the ship is a mess of some systems on one time and others on another. It also breaks a lot of systems because most clients do not expect time to go backwards by an hour on a regular basis, and some were designed around the assumption that time cannot ever go backwards by more than a leap-second.
And God help you if you need your alarm clock to go off between 2am and 4am.
If somebody can point me towards a proper solution I would be eternally grateful.
"Who says we should have to put up with the SI second?"
Well I would say just about everyone in the world with a clock or other time or frequency-keeping system marked or based on the SI unit.
(a) Keep the SI second, etc, and accept that things don't add up nicely so occasional corrections are needed, or
(b) Change the SI unit, break every time/frequency system and demand a re-write of all science and engineering text books to conform to the new system. Which STILL will not be correct as the Earth's rotation is randomly variable.
The argument is?
The short answer is use UNIX.
It keeps time internally as UTC (time_t variable, etc) and has a timezone value that can be changed as you see fit WITHOUT breaking anything, as all time calculations are based on UTC. Unless you are Apple and make an alarm clock feature that is...
Of course, for a moving system you need to know the timezone for your location. GPS gives time and location so it could be mapped to find the global zone you are in and thus update the TZ setting.
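The crude at-sea version of that position-to-zone mapping is just longitude divided by 15; anything near land needs a proper tz-database lookup, since political zone boundaries ignore meridians. A sketch:

```python
def nominal_zone(longitude_deg):
    """Crude nautical-style zone: 15 degrees of longitude per hour of
    offset from UTC. Real land timezones follow political boundaries,
    so a production system should look positions up in the tz database;
    this is only the open-ocean approximation."""
    return round(longitude_deg / 15.0)

print(nominal_zone(-74.0))   # -5  (roughly New York's standard offset)
print(nominal_zone(151.2))   # 10  (roughly Sydney's standard offset)
```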
I don't know if all applications recognise TZ updates after starting, but I imagine the normal libraries will notice a system-wide change, so it might not be a complete answer out of the box.
Time keeping on DOS/Windows has been spectacularly crap, but now attempts to follow the same model. Except for some stupid cases where file systems keep local time (FAT32, some CIFS implementations, etc), or coders have not understood how to do it right, etc.
Using your logic we'd still be using pounds & ounces and shillings & farthings.
Why should we have to get rid of the 'second' as an SI unit? Even though its a non 'base 10' system I don't envision it being completely banned if we changed over to base 10 time (well not unless you live in the EU :-)
It's true it's been tried before by the French; Henri Poincaré came up with the idea, not Napoleon, and he was a pretty darn good mathematician.
As with any new standard we'd have to 'give it some time' before it really took off; three years was nowhere near enough time for it to become accepted.
My standard day would equal 100,000 Deconds = 1,000 Dinutes = 10 Dours. Then we'd have decideconds, centideconds, millideconds & nanodeconds etc., etc.
The point is that current time uses multiple bases and is also randomly variable. I see no reason not to change over to decimal time (which is more readily calculable) even if leap seconds and leap years would still have to be added to make up for our lunar lunacy.
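For what it's worth, conversion to the proposed units is a single ratio (using the commenter's invented names, and assuming an 86,400-second day):

```python
def to_decimal_time(h, m, s):
    """Convert a conventional time of day to the proposed 'Deconds'
    (100,000 per day, so 1,000 Dinutes or 10 Dours)."""
    conventional = h * 3600 + m * 60 + s
    return conventional * 100_000 / 86_400

print(to_decimal_time(12, 0, 0))  # 50000.0 -- noon is exactly 5 Dours
print(to_decimal_time(18, 0, 0))  # 75000.0
```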
UTC is, as you are well aware, based upon international atomic time (TAI) which simply measures the unrelenting march of caesium fountain clock cycles aggregated from over two hundred atomic clocks distributed around the globe. The quaint affectation of sidereal time is disseminated as merely TAI with the appropriate offset applied.
Imsimil Berati Lahn,
Campaign for Real Time.
Sorry for misreading, but you are right, there is no protocol I know of specifically intended to push out TZ changes. Normally the TZ rules are fixed for long-ish periods and they are pushed out as a set of values with OS patches (today I got some for my desktop covering Russia's DST rules, etc).
On an open-source UNIX-like OS (Linux, BSD, Solaris, etc) it should be easy enough to implement something that replaces the static TZ rules with a dynamic, centrally distributed set for instant system-wide consistency (sort of "at 12:00 UTC change to TZ = +5 hours", computed from position & bearing and sent in advance, so all devices roll over at the same point). Even if the commercial justification is limited to your own business case.
Of course, you might have too much legacy stuff (specifically, outside of your control) to make that viable.
If you keep the SI second, which would be an issue for anything based on current time and frequency, then your definitions produce a 'day' that is not a cycle of light/dark by our Sun, which is how that was originally defined.
Similarly a 'year' is also based on the Earth's cyclic seasons.
For those of us who live on this planet, those are meaningful concepts. Of course, on another planets such as Mars, the Earth day is not so useful, but dealing with that issue is a LONG way off for humanity.
Science already has gone through the pains of CGS, and then MKS, unit revisions to make things more logical. You can bet the issue of time/date has been looked at by a lot of very smart minds and no compelling reason made to change when the benefits (simpler arithmetic) are weighed up against the costs of change.
Ah, Saturday Night Live w/ John Belushi, probably from the late '70's or early '80's. It sounded funny/stupid then. Today, I don't think I care. If the sun is overhead, the stores are probably open. If the sun isn't overhead, I should probably be sleeping.
"""If the sun isn't overhead, I should probably be sleeping."""
I would advise you not to travel too close to either of the poles with that theory : -)
You just lost your geek licence.
I can't remember the last time I had a schedule that was defined by the sun. My usual schedule is to sleep when I fall asleep and wake up later. Take today for example. I woke up at 7:30pm after having been awake for about 37 hours and going to bed at 8:45am this morning.
I don't schedule my 'day' around the sun, in fact, I rarely see the sun.
I said 'should', mostly because those users that I support are 8-5ers. I personally haven't worked 8-5 since 1985, however I do have to support those that do. This can be difficult.
The worst it got was 10 years ago when I had to support users in North America, Europe, and Asia...it was always 8am somewhere.
Maybe if Google aligned all the hard drives in their servers they could stabilize the Earth's rotation using the combined angular momentum so that leap seconds would be a thing of the past...
Of course Finance has been doing this since the 16th Century. Their solution was to say only 'return' (retrograde motion) mattered, and quadrature of the transcendental functions pushed all the leap days to the end of time. Since the actual doubling period for this 'interest' was far outside the lifetime of humans, bankers convinced us that value was fractional, interest was fractional, time was fractional, but currency was discrete. Pay up or we're taking your car. This was the original Leap-Year-Smear.
My advice, which unfortunately few will take, is to ignore Google's cute trick - smoothing Longitude - and to recognize the social damage done over the centuries by Latitude based exploitation of employees (Labor-For-Hire). It's clever to fool a Network to the tune of ~1000 milliseconds a year, but to fool by coercion generations of slaves is not so clever.
Here's how it works: "The Latitude Effect" http://tinyurl.com/white-nights-forever
If all your hardware has the same clock chips in, then this will work well. But if they don't, then won't all this over many cycles lead to drift between the different hardware clocks? Though, that said, given the life of hardware and the timeframes involved, I doubt this will ever get highlighted as an issue - ever.
Now, one area where they may have fun and games, I would have thought, is the geo-location aspect: if they have servers all over the world, then syncing time would in effect also fall foul of the same drift issues the GPS system has. Though, as said, they are to all intents and purposes inducing jitter to correct their problem; I wonder what other factors they take into account. For example, some earthquakes have an effect upon the Earth's rotation and hence on time differences.
Bottom line: it's not an elegant solution to one of the many problems with time/dates, and is more a hack in the true sense of the word 'hack'; but it is an elegant hack, and anything that can use jitter to a positive effect has to be applauded. I can't help wondering, in a world where we convert lovely decimal numbers into BCD for computers to understand and process, why we don't have a more elegant solution to the whole time/TZ/date issue that solves it at a lower level, instead of every piece of log/database/etc. software having to factor it in. Let's face it, application logs tend to just say what it is and tend not to have entries like "time changed here", so it can be fun down the line solving issues where the logs say one thing, the application does another, and you think it's a completely different time altogether.
584.3 million, I think you'll find.
Would it not be cleaner to have the computers track TAI internally, and only convert to/from TAI for output/input?
NTP does not neatly jump your entire system at exactly the same instant. It gradually brings things into sync, and then holds them there. Leap seconds don't leap when you are using NTP: they bounce around and propagate in fractions.
The other problem is that leap seconds can go backwards, so new actions happen before old actions by the clock. Clock systems can be (and sometimes are) adjusted to prevent this, by never allowing clock synchronisation to take the time backward, just running slow until the real time catches up.
You can combine the solutions and the problems, and adjust your NTP servers to smear time. Or you let your NTP time bounce, and let your clients smear time.
Or you let your NTP time bounce, and let your clients bounce with it.
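The "never step backwards" policy mentioned above can be sketched as a clamp; this simplified version holds at the high-water mark rather than merely running slow until real time catches up:

```python
class NoBackstepClock:
    """Wraps a possibly-stepping time source so the reported time never
    goes backwards: after a backward step it holds at the high-water
    mark until the source catches up again. (A real implementation
    would slew slowly instead of freezing outright.)"""

    def __init__(self):
        self._high = float("-inf")

    def now(self, source_time):
        self._high = max(self._high, source_time)
        return self._high

clk = NoBackstepClock()
print(clk.now(100.0))  # 100.0
print(clk.now(99.5))   # 100.0 -- the backward step is hidden
print(clk.now(100.5))  # 100.5 -- source has caught up
```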
The real problem is the lack of conceptual distinction between the measurement of a human-meaningful date and an interval between two dates.
For instance, 1 Jan 1990 is a point in time, so is 1 Jan 2000, but the interval between the dates is not 3652 x 24 x 60 x 60 seconds
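A quick check of that claim against the published leap-second history:

```python
# 1 Jan 1990 to 1 Jan 2000: ten years containing two leap years
# (1992 and 1996), hence 3652 calendar days.
naive = 3652 * 24 * 60 * 60

# Leap seconds actually inserted in that span: end of Dec 1990,
# Jun 1992, Jun 1993, Jun 1994, Dec 1995, Jun 1997 and Dec 1998
# -- seven in all.
leaps = 7

actual = naive + leaps
print(naive, actual)  # 315532800 315532807
```

So the naive day-count arithmetic is seven SI seconds short, and no formula could have predicted that without the leap-second table.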
...when we have all the fun twice a year of clocks going backwards or forwards by an hour.
Keep a sense of proportion, chaps!
I don't want a sense of proportion. I would prefer that daylight savings time be eliminated. Thirty years ago when I was in school, DST prevented me from having to get on the schoolbus in the dark. Today, well... tomorrow, the first schoolbus will stop in my neighborhood at 6:20 - well before sunrise.
The only people, farmers, that this benefits, don't need it any more. If you tell a farmer that he can only work from 7am to 7pm, he'll ignore you. He'll work when he can because he has to. Now that harvesting machines have A/C, flood lights, satellite radio and navigation, and adjustable seats, I really don't see the need for DST.