On 4/8/2015 4:52 PM, Paul_Koning@dell.com wrote:
> Yes, my question was quite a muddle. Fortunately, Dr. Allen decrypted it well enough to send me the information I was looking for. Now that I know the answer, I can restate the question so it actually makes sense.
> I have a Unix system that keeps time internally as a time_t, POSIX style, i.e., seconds since the epoch, not counting leap seconds. And the NTP protocol encodes time as seconds since an epoch (a different one).
> My Unix box is synchronized by NTP. So how does it manage to keep the exact UTC time that references such as WWV show, even though as a POSIX system it doesn't count leap seconds?
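As background, that "different" epoch is 1 January 1900, and since neither NTP's count nor POSIX's count includes leap seconds, converting between them is a fixed offset of 2,208,988,800 seconds (70 years, 17 of them leap years). A minimal C sketch, with the function name mine and assuming a timestamp in NTP era 0 (i.e., before the 32-bit seconds field wraps in 2036):

    #include <stdint.h>
    #include <time.h>

    /* Seconds from the NTP epoch (1900-01-01) to the POSIX epoch
     * (1970-01-01): (70 * 365 + 17) days * 86400 seconds.  Neither
     * count includes leap seconds, so a constant offset suffices. */
    #define NTP_TO_UNIX_OFFSET 2208988800UL

    /* Convert the seconds field of an NTP timestamp to a time_t.
     * Valid only for era-0 timestamps that fall after 1970. */
    static time_t ntp_seconds_to_time_t(uint32_t ntp_seconds)
    {
        return (time_t)(ntp_seconds - NTP_TO_UNIX_OFFSET);
    }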
The trick here is that while neither the NTP protocol nor the time_t representation counts leap seconds, the NTP daemon does know about leap seconds, and will either slew the clock or step it back at the appropriate time to keep things in line. Thus, if you had some way of rewinding time right now, and chose to rewind by the exact number of seconds in the current time_t value, you'd arrive not at 00:00:00 UTC, 1 January 1970, but at 00:00:25 UTC, 1 January 1970, since 25 leap seconds have been inserted between then and now (with a 26th already scheduled for this June 30th).

Because leap seconds are intercalary, time math using time_t still makes sense to humans; we don't notice the difference. We generally do time math of the form "one year ago" or "days since September 11th, 2001", and those all work well with the time_t construction. The only time things get out of whack is when you are looking at a time interval that crosses a leap second and accuracy at the one-second level matters.

Note, however, that the calendar conversion for time_t does properly handle February 29th, because that is a large enough difference that it does matter, and every program that needs to measure time intervals in days over years has to correct for it. (Sometimes correctly, and, as we'll see in 2100, sometimes not. We got lucky in 2000.)
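To make the leap-second blind spot concrete, here's a sketch assuming a system that provides the common but nonstandard timegm(). A leap second (23:59:60 UTC) was inserted at the end of 30 June 2012, so two real seconds elapsed between the two instants below, yet time_t arithmetic reports one:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* 2012-06-30 23:59:59 UTC, the last ordinary second before
         * the leap second (tm_year is years since 1900, tm_mon 0-based). */
        struct tm before = { .tm_year = 112, .tm_mon = 5, .tm_mday = 30,
                             .tm_hour = 23, .tm_min = 59, .tm_sec = 59 };
        /* 2012-07-01 00:00:00 UTC, the first second after it. */
        struct tm after = { .tm_year = 112, .tm_mon = 6, .tm_mday = 1 };

        /* Prints 1, even though the inserted 23:59:60 made the real
         * gap 2 seconds; time_t pretends the leap second never happened. */
        printf("%.0f\n", difftime(timegm(&after), timegm(&before)));
        return 0;
    }

And February 29th is exactly where the 2000/2100 trap lives: a program that only checks divisibility by 4 happened to get 2000 right, since years divisible by 400 are leap years anyway, but it will get 2100 wrong, since years divisible by 100 but not by 400 are not. The full Gregorian rule:

    #include <stdbool.h>

    bool is_leap_year(int year)
    {
        /* Every 4th year, except century years, except every 4th century:
         * 2000 -> true (divisible by 400), 2100 -> false (by 100, not 400). */
        return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
    }

--Ted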