Gentlepeople, I’ve been sorting through a flurry of questions about leap seconds from colleagues (triggered by the recent announcement of an upcoming leap second). That sent me to the Theory file, and to some experiments. Those experiments leave me puzzled. I apologize for going a bit off topic for this list, but the expertise is clearly here.

The Theory file says that POSIX requires leap seconds to be ignored. And indeed, if I set my system timezone to a POSIX zone description and ask it to convert a time value that’s an integer multiple of 86400, I end up at an exact hour (or half hour) multiple, for example exactly midnight if UTC. And similarly, if I set my zone to a “right” one and do the conversion, I get a time that’s a few seconds shy of the exact multiple, as expected.

I can also see that my default timezone definitions on my various Unix machines are POSIX ones, again as expected.

So here is the puzzle. I would expect WWV, and www.time.gov, to reflect leap seconds. So why would they give me a time that matches, to the second, the POSIX time on my workstation? Does NTP send POSIX seconds since epoch rather than real ones?

	paul
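[The conversion experiment described above can be reproduced in a few lines of C. This is a minimal sketch, assuming the tz "right" zone files are installed; 1428451200 is an exact multiple of 86400 (it comes up again later in this thread) and converts to exact midnight under a POSIX zone but to 25 seconds before midnight under a leap-second-aware one.]

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Convert one time_t under the named zone and print the result. */
    static void show(const char *zone, time_t t)
    {
        char buf[32];
        setenv("TZ", zone, 1);
        tzset();
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", localtime(&t));
        printf("%-10s %s\n", zone, buf);
    }

    int main(void)
    {
        time_t t = 1428451200;     /* 16533 * 86400 */
        show("UTC", t);            /* 2015-04-08 00:00:00, exact midnight */
        show("right/UTC", t);      /* 2015-04-07 23:59:35, 25 seconds shy */
        return 0;
    }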
On Wed 2015-04-08T19:02:07 +0000, Paul_Koning@dell.com hath writ:
So here is the puzzle. I would expect WWV, and www.time.gov, to reflect leap seconds. So why would they give me a time that matches, to the second, the POSIX time on my workstation? Does NTP send POSIX seconds since epoch rather than real ones?
NTP effectively sends "mean solar seconds elapsed since epoch", which is another way of saying "elapsed seconds not counting leap seconds" as is demanded by POSIX. The value of the time in WWV and www.time.gov is conforming to ITU-R TF.460 where the leap seconds in the radio broadcasts were designated as 23:59:60 in order that the value of SI second markers of UTC closely matches the value of mean solar second markers of UT1. In the radio broadcasts those leap seconds are truly intercalary. There is no way for the value of the time in WWV to indicate the number of times that an intercalary second has been inserted since 1972, nor when they were inserted. Before 1972 there was no way for the value of the time in WWV to indicate the number of leaps of milliseconds, nor the number of times that the duration of the broadcast seconds had been changed, nor the current duration of the broadcast seconds. -- Steve Allen <sla@ucolick.org> WGS-84 (GPS) UCO/Lick Observatory--ISB Natural Sciences II, Room 165 Lat +36.99855 1156 High Street Voice: +1 831 459 3046 Lng -122.06015 Santa Cruz, CA 95064 http://www.ucolick.org/~sla/ Hgt +250 m
On Apr 8, 2015, at 3:17 PM, Steve Allen <sla@ucolick.org> wrote:
On Wed 2015-04-08T19:02:07 +0000, Paul_Koning@dell.com hath writ:
So here is the puzzle. I would expect WWV, and www.time.gov, to reflect leap seconds. So why would they give me a time that matches, to the second, the POSIX time on my workstation? Does NTP send POSIX seconds since epoch rather than real ones?
NTP effectively sends "mean solar seconds elapsed since epoch", which is another way of saying "elapsed seconds not counting leap seconds" as is demanded by POSIX.
Thanks much! That sure is not clear from the RFC. So that explains things: it means that a POSIX system that is an NTP client will track UTC, leap seconds and all, except for a short while just after a leap second occurrence because at that point the NTP client machinery will be adjusting to the one second phase shift.

	paul
On Wed, Apr 8, 2015, at 15:02, Paul_Koning@dell.com wrote:
The Theory file says that POSIX requires leap seconds to be ignored. And indeed, if I set my system timezone to a POSIX zone description and ask it to convert a time value that’s an integer multiple of 86400, I end up at an exact hour (or half hour) multiple, for example exactly midnight if UTC. And similarly, if I set my zone to a “right” one and do the conversion, I get a time that’s a few seconds shy of the exact multiple, as expected.
I can also see that my default timezone definitions on my various Unix machines are POSIX ones, again as expected.
So here is the puzzle. I would expect WWV, and www.time.gov, to reflect leap seconds.
What exactly do you think is meant by "reflect" or "ignore" leap seconds?

2015-04-08 00:00:00 UTC is the same real moment in time whether you use leap seconds or not. The only difference is in what the integer value of time_t corresponding to that moment is (1428451200 or 1428451225). There isn't a separate "no leap second" calendar that stays some number of seconds behind (well, there are in fact several, notably TAI and GPS, but there's no reason to expect WWV or time.gov to use these to report time, nor should anything about POSIX "ignoring leap seconds" be construed to mean POSIX systems use such a system.)

When people say POSIX ignores leap seconds, it means that time_t is a multiple of 86400 at midnight UTC, and that there's no way to identify what the actual UTC time is for a time_t taken around a leap second. Across a day that ends with an inserted leap second, time_t increments 86400 times in 86401 seconds. (Some systems may hold the same value for two seconds, with or without various tricks done with fractional-second reporting either before or during the multiple of 86400; others may have this hour or day composed of "seconds" that are longer than an SI second.)
So why would they give me a time that matches, to the second, the POSIX time on my workstation? Does NTP send POSIX seconds since epoch rather than real ones?
NTP has its own encoding for time values, and its own epoch, and does not use the POSIX format over the wire. WWV has its own encoding for time which includes broken-down hour/minute/second fields. www.time.gov, of course, displays the time in a human-readable format with broken-down fields (it is, AIUI, capable of displaying 60 in the seconds field). Everything has been displayed to you as UTC. The only difference is that POSIX is not capable of representing a UTC time whose seconds value is 60, and defines a linear scale where the difference between each successive day is exactly 86400.
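[For reference, here is a sketch of the NTP on-wire timestamp from RFC 5905: a 32-bit unsigned count of seconds since 1900-01-01 00:00:00 plus a 32-bit binary fraction. The era-0 conversion below ignores the 2036 rollover that real code must handle.]

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    #define NTP_UNIX_OFFSET 2208988800UL  /* seconds from 1900 to 1970 */

    /* Convert an era-0 NTP timestamp to POSIX seconds + nanoseconds. */
    static void ntp_to_posix(uint32_t ntp_sec, uint32_t ntp_frac,
                             time_t *sec, long *nsec)
    {
        *sec  = (time_t)(ntp_sec - NTP_UNIX_OFFSET);
        *nsec = (long)(((uint64_t)ntp_frac * 1000000000u) >> 32);
    }

    int main(void)
    {
        time_t sec;
        long nsec;

        /* 2015-04-08 00:00:00.5 UTC, using the value from this thread */
        ntp_to_posix(1428451200u + NTP_UNIX_OFFSET, 0x80000000u, &sec, &nsec);
        printf("%lld.%09ld\n", (long long)sec, nsec);  /* 1428451200.500000000 */
        return 0;
    }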
On Wed, Apr 8, 2015, at 15:46, Paul_Koning@dell.com wrote:
Thanks much! That sure is not clear from the RFC. So that explains things: it means that a POSIX system that is an NTP client will track UTC, leap seconds and all, except for a short while just after a leap second occurrence because at that point the NTP client machinery will be adjusting to the one second phase shift.
NTP is able to give the client advance warning of an upcoming leap second, and the client ntpd can do various more sophisticated things to "smooth out" the leap second, since having the clock stop for a second (or worse, having sub-second timestamps go backwards by a second) is undesirable. More undesirable, that is, than having the clock be deliberately inaccurate, by less than a second, for an extended period.

POSIX itself does not say what should happen to the clock at a leap second, nor does it mandate any particular level of accuracy for the system clock, nor that the system real-time clock be monotonic. But, yeah, a POSIX system will never report a time with a seconds value outside the range 00-59, and difftime will always return a multiple of 60 for timestamps that are a whole number of minutes apart (and these values will be the same mod 60 as the seconds component of their broken-down UTC time, etc.).
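[Concretely, that advance warning rides in the two-bit leap indicator at the top of the NTP packet header (RFC 5905): 0 means no warning, 1 means the last minute of the current UTC day has 61 seconds, 2 means it has 59, and 3 means unsynchronized. A minimal decode sketch:]

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Example first header byte: LI=1, version=4, mode=4 (server). */
        uint8_t first_byte = 0x64;
        unsigned li = first_byte >> 6;   /* top two bits */

        if (li == 1)
            printf("leap second will be inserted at the end of today\n");
        return 0;
    }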
On Wed, 08 Apr 2015, Paul_Koning@dell.com wrote:
So here is the puzzle. I would expect WWV, and www.time.gov, to reflect leap seconds. So why would they give me a time that matches, to the second, the POSIX time on my workstation? Does NTP send POSIX seconds since epoch rather than real ones?
If you represent time as a number of seconds since some epoch, then whether or not leap seconds are counted makes a difference to the integer part of the value. If you express time as year/month/day/hour/minute/second, then (provided you follow UTC) observers can't really tell whether or not leap seconds are counted, unless they watch really closely around the time of a leap second transition.

--apb (Alan Barrett)
If you don't run ntpd with the "-x" flag, or if you have a broken version of ntpd that does not properly honour the "-x" flag, then ntpd will tell the kernel (in advance) that a leap second will occur at midnight UTC. It is then up to the kernel to deal with the leap second, and different kernels may behave differently. My testing on recently patched versions of RHEL_6 and Solaris_10 shows that the clock does go backwards. This is the output of a script that is set to spit out the time every 500 ms:

  Jun 30 2015 23:59:57.398
  Jun 30 2015 23:59:57.898
  Jun 30 2015 23:59:58.398
  Jun 30 2015 23:59:58.898
  Jun 30 2015 23:59:59.398
  Jun 30 2015 23:59:59.898
  Jun 30 2015 23:59:59.399  <- clock jumps backwards here
  Jun 30 2015 23:59:59.899
  Jul 01 2015 00:00:00.399
  Jul 01 2015 00:00:00.899
  Jul 01 2015 00:00:01.399
  Jul 01 2015 00:00:01.899
  Jul 01 2015 00:00:02.399
  Jul 01 2015 00:00:02.899

If you run a non-broken version of ntpd with the "-x" flag, then ntpd will not warn the kernel of the upcoming leap second. In this case, ntpd will slew the clock to make up for the leap second. The maximum slew rate is 500 ppm, so absorbing one second takes a minimum of 1 s / 0.0005 = 2000 s, about 33 minutes, before the clock is back in sync with UTC.

-chris
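[A rough sketch of such a sampling loop (not Chris's actual script, just an illustration): print CLOCK_REALTIME every 500 ms, so a kernel step shows up as a backwards jump between samples.]

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec now, half = { 0, 500000000L };  /* 500 ms */
        struct tm tm;
        char buf[32];

        for (;;) {
            clock_gettime(CLOCK_REALTIME, &now);
            gmtime_r(&now.tv_sec, &tm);
            strftime(buf, sizeof buf, "%b %d %Y %H:%M:%S", &tm);
            printf("%s.%03ld\n", buf, now.tv_nsec / 1000000L);
            nanosleep(&half, NULL);
        }
    }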
On Apr 8, 2015, at 4:55 PM, random832@fastmail.us wrote:
On Wed, Apr 8, 2015, at 15:02, Paul_Koning@dell.com wrote:
The Theory file says that POSIX requires leap seconds to be ignored. And indeed, if I set my system timezone to a POSIX zone description and ask it to convert a time value that’s an integer multiple of 86400, I end up at an exact hour (or half hour) multiple, for example exactly midnight if UTC. And similarly, if I set my zone to a “right” one and do the conversion, I get a time that’s a few seconds shy of the exact multiple, as expected.
I can also see that my default timezone definitions on my various Unix machines are POSIX ones, again as expected.
So here is the puzzle. I would expect WWV, and www.time.gov, to reflect leap seconds.
What exactly do you think is meant by "reflect" or "ignore" leap seconds?
2015-04-08 00:00:00 UTC is the same real moment in time whether you use leap seconds or not. The only difference is in what the integer value of time_t corresponding to that moment is (1428451200 or 1428451225). There isn't a separate "no leap second" calendar that stays some number of seconds behind (well, there are in fact several, notably TAI and GPS, but there's no reason to expect WWV or time.gov to use these to report time, nor should anything about POSIX "ignoring leap seconds" be construed to mean POSIX systems use such a system.)
Yes, my question was quite a muddle. Fortunately, Dr. Allen decrypted it well enough to send me the information I was looking for. Now that I know the answer, I can restate the question so it actually makes sense.

I have a Unix system that keeps time internally by time_t, POSIX style, i.e., time is seconds since the epoch not counting leap seconds. And the NTP protocol encodes time as seconds since an epoch (a different one).

My Unix box is synchronized by NTP. So how does it manage to keep the exact UTC time that references such as WWV show, even though as a POSIX system it doesn’t count leap seconds?

The unstated assumption here is that NTP encodes time as elapsed seconds from its epoch including leap seconds. And while RFC 5905 is not explicit, it has enough hints to make you believe this is the case. But in fact that is not so: NTP, at least as used in practice, encodes time as seconds from its epoch NOT counting leap seconds. So, in essence, it sends the POSIX time_t value plus 2,208,988,800 (its epoch being 1900 rather than 1970). And that is why my workstation clock is correct.

	paul
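[As a sanity check on that constant: 1900-01-01 to 1970-01-01 is 70 years, 17 of them leap years (1904 through 1968; 1900 itself is not one), i.e. 25567 days:]

    #include <stdio.h>

    int main(void)
    {
        long long days = 70LL * 365 + 17;   /* 25567 days, 1900 to 1970 */
        printf("%lld\n", days * 86400);     /* prints 2208988800 */
        return 0;
    }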
On 4/8/2015 4:52 PM, Paul_Koning@dell.com wrote:
Yes, my question was quite a muddle. Fortunately, Dr. Allen decrypted it well enough to send me the information I was looking for. Now that I know the answer, I can restate the question so it actually makes sense.
I have a Unix system that keeps time internally by time_t, POSIX style, i.e., time is seconds since the epoch not counting leap seconds. And the NTP protocol encodes time as seconds since an epoch (a different one).
My Unix box is synchronized by NTP. So how does it manage to keep the exact UTC time that references such as WWV show, even though as a POSIX system it doesn’t count leap seconds?
The trick here is that while the NTP protocol doesn't include leap seconds, and neither does time_t, the NTP daemon does know about leap seconds, and will either slew the clock or run the clock back at the appropriate time to keep things in line. Thus, if you had some way of rewinding time right now, and chose to rewind time by the exact number of seconds in time_t, you'd arrive not at 00:00:00 UTC, 1 January 1970, but instead at 00:00:25 UTC, 1 January 1970, since 25 leap seconds have been inserted between then and now (the one just announced for June will be the 26th).

Because leap seconds are intercalary, time math using time_t makes sense to humans, because we don't notice the difference. We generally do time math in terms like "one year ago" or "days since September 11th, 2001", which work well with the time_t construction. Things only get out of whack when you are looking at time intervals that cross a leap second and accuracy at the second level matters.

Note, however, that time_t does properly handle February 29th, because that is a large enough difference that it does matter, and every program that needs to measure time intervals in days over years has to correct for it. (Sometimes correctly, and, as we'll see in 2100, sometimes not. We got lucky in 2000.)

--Ted
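[A small illustration of that second-level caveat, assuming the common but non-POSIX timegm() extension (glibc, BSD): POSIX time_t arithmetic across the 2015-06-30 leap second comes up one SI second short.]

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* 2015-06-30 23:59:59 UTC and 2015-07-01 00:00:00 UTC */
        struct tm before = { .tm_year = 115, .tm_mon = 5, .tm_mday = 30,
                             .tm_hour = 23, .tm_min = 59, .tm_sec = 59 };
        struct tm after  = { .tm_year = 115, .tm_mon = 6, .tm_mday = 1 };

        /* POSIX counts one second between these moments; a stopwatch
         * counts two, because 23:59:60 is inserted between them. */
        printf("%.0f\n", difftime(timegm(&after), timegm(&before)));
        return 0;
    }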
Ted Cabeen said:
Note, however, that time_t does properly handle February 29th, because that is a large enough difference that it does matter, and every program that needs to measure time intervals in days over years has to correct for it. (Sometimes correctly, and, as we'll see in 2100, sometimes not. We got lucky in 2000.)
I seem to recall some software thought it wasn't a leap year.

--
Clive D.W. Feather          | If you lie to the compiler,
Email: clive@davros.org     | it will get its revenge.
Web: http://www.davros.org  |   - Henry Spencer
Mobile: +44 7973 377646
On 09/04/15 01:18, Ted Cabeen wrote:
Because leap seconds are intercalary, time math using time_t makes sense to humans, because we don't notice the difference. We generally do time math in terms like "one year ago" or "days since September 11th, 2001", which work well with the time_t construction. Things only get out of whack when you are looking at time intervals that cross a leap second and accuracy at the second level matters.
The 'puzzle' is perhaps why the base IS seconds? ;)

But as others have pointed out it is only important for some calculations. Timestamp data for all of the databases I use work with a day base and return time as a fraction of a day. This is a much more practical base for genealogical data than 'seconds' for many reasons and I still feel that any overhaul of the time_t libraries would be better based on this, if only for its much cleaner handling of 32/64bit device interworking problems.

--
Lester Caine - G8HFL
-----------------------------
Contact - http://lsces.co.uk/wiki/?page=contact
L.S.Caine Electronic Services - http://lsces.co.uk
EnquirySolve - http://enquirysolve.com/
Model Engineers Digital Workshop - http://medw.co.uk
Rainbow Digital Media - http://rainbowdigitalmedia.co.uk
On 08/04/15 20:46, Paul_Koning@dell.com wrote:
On Apr 8, 2015, at 3:17 PM, Steve Allen <sla@ucolick.org> wrote:
On Wed 2015-04-08T19:02:07 +0000, Paul_Koning@dell.com hath writ:
So here is the puzzle. I would expect WWV, and www.time.gov, to reflect leap seconds. So why would they give me a time that matches, to the second, the POSIX time on my workstation? Does NTP send POSIX seconds since epoch rather than real ones?
NTP effectively sends "mean solar seconds elapsed since epoch", which is another way of saying "elapsed seconds not counting leap seconds" as is demanded by POSIX. Thanks much! That sure is not clear from the RFC. So that explains things: it means that a POSIX system that is an NTP client will track UTC, leap seconds and all, except for a short while just after a leap second occurrence because at that point the NTP client machinery will be adjusting to the one second phase shift.
I looked into this in some depth for Linux because the ntpd -x flag wasn't behaving as expected.

To get ntpd -x out of the way first: it stops ntpd from using the kernel time discipline, so ntpd looks after time adjustments itself, slewing any adjustment that would otherwise be a step. Once we (at least two of us, independently) had fixed ntpd, leap seconds were slewed as well. By slewing leap seconds you're trading accurate timestamps for a monotonic clock; the moral of which is: don't use gettimeofday() as an interval timer.

With that out of the way, it was interesting to see how time is handled within the Linux kernel time discipline. On the day of the leap second, ntpd sets a leap indicator to say that there will be a leap second inserted at midnight UTC (lunch time in New Zealand; fortunately I'm in the UK). During this time, adjtimex(2) returns TIME_INS as its status. At midnight, two things happen: the realtime clock steps from 1435708800 to 1435708799, and adjtimex() starts returning TIME_OOP as its status. When time comes round to 1435708800 again, adjtimex() starts returning TIME_OK as its status. This is very useful because it means that at 1435708799.5 you can unambiguously determine whether it's 23:59:59.5 or 23:59:60.5.

Of course, after the event a value in the interval [1435708799, 1435708800) is ambiguous, and Posix tacitly admits this. (What Posix actually says is that the time in seconds returned by various system calls is approximately the number of seconds since midnight at the beginning of 1970.)

This is a problem. The timestamp data type in a well-known RDBMS is stored as a Posix timestamp and a timezone indicator. The timezone indicator is useful for expressing things like "same time tomorrow", but it doesn't help at all with leap seconds: there is no way to express 23:59:60. (This is one of the very few things I know about that database, in case you were wondering.) The problem is made more serious by the fact that 1435708799 occurs on both Tuesday and Wednesday. If you want to store accurate timestamps, you can't use the result of gettimeofday().

Dealing with whole hours is much simpler :)

jch
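[A minimal, Linux-specific sketch of that disambiguation: query adjtimex(2) with no modes set, and when it reports TIME_OOP (leap second in progress), label the repeated time_t value as second 60.]

    #include <stdio.h>
    #include <sys/timex.h>
    #include <time.h>

    int main(void)
    {
        struct timex tx = { .modes = 0 };   /* read-only query */
        int state = adjtimex(&tx);
        time_t now = time(NULL);
        struct tm tm;

        gmtime_r(&now, &tm);
        if (state == TIME_OOP)              /* the inserted second */
            tm.tm_sec = 60;                 /* 23:59:59 again is really :60 */
        printf("%04d-%02d-%02d %02d:%02d:%02dZ (state %d)\n",
               tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
               tm.tm_hour, tm.tm_min, tm.tm_sec, state);
        return 0;
    }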
On 2015-04-09 02:24, Lester Caine wrote:
On 09/04/15 01:18, Ted Cabeen wrote:
Because leap seconds are intercalary, time math using time_t makes sense to humans, because we don't notice the difference. We generally do time math in terms like "one year ago" or "days since September 11th, 2001", which work well with the time_t construction. Things only get out of whack when you are looking at time intervals that cross a leap second and accuracy at the second level matters.
The 'puzzle' is perhaps why the base IS seconds? ;)
Systems then used power-line clocks with 50/60 Hz interrupts, so the common base was the second, which was good enough for file times; leap seconds did not start until 1972; OSes and apps used interrupt jiffies for interval timing. Even in 1980, PC DOS floppy file times were considered good enough with two-second resolution, and standardized 60 Hz jiffies. The first databases only provided times to the second; these were later extended to ms, then us, now ns, soon ps.
But as others have pointed out it is only important for some calculations. Timestamp data for all of the databases I use work with a day base and return time as a fraction of a day. This is a much more practical base for genealogical data than 'seconds' for many reasons and I still feel that any overhaul of the time_t libraries would be better based on this, if only for its much cleaner handling of 32/64bit device interworking problems.
Oracle server has always used datetimes with bytes for century, year, month, day, hour, minute, and second, each with various offsets to avoid problems on networks that did not support eight-bit transparency, always supporting a minimum value of JD 0 (1 January 4713 BC), with upper limits varying across versions.

--
Take care. Thanks, Brian Inglis