
1. I have code that synchronizes a system to TAI-10 using an NTP=UTC source. See http://pobox.com/~djb/clockspeed.html.

2. My specific proposal for /etc/leapsecs.dat is a list of 8-byte leap seconds in TAI64 format. See http://pobox.com/~djb/proto/tai64.txt. For sample code, see leapsecs* in the clockspeed package.

3. The fact that some operating systems ship with crippled leap-second support doesn't change the need for easy leap-second updates on other systems. The fact that a few operating systems make it easy to run zic doesn't change the need for binary-only installations on other systems.

4. I don't see why anyone should waste time or energy on unused ideas such as negative leap seconds or rolling leap seconds. There's ample time to add these features if they are ever necessary. Arguments about ``getting the code right'' are silly: the code will have to be upgraded anyway in a few thousand years to support some post-Gregorian calendar.

---Dan
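For concreteness, here is a minimal sketch (in C, not taken from the clockspeed package) of reading a table in the proposed format: a flat sequence of 8-byte big-endian TAI64 labels, one per leap second. The file name follows the proposal above; the function name and the fixed-size array are illustrative only.

    #include <stdio.h>
    #include <stdint.h>

    #define MAX_LEAPS 256

    static uint64_t leapsecs[MAX_LEAPS];   /* TAI64 labels, ascending */
    static int leapcount;

    /* Read /etc/leapsecs.dat; returns the number of entries, or -1. */
    int read_leapsecs(const char *path)
    {
        unsigned char buf[8];
        FILE *f = fopen(path, "rb");
        if (!f) return -1;
        leapcount = 0;
        while (leapcount < MAX_LEAPS && fread(buf, 1, 8, f) == 8) {
            uint64_t t = 0;
            for (int i = 0; i < 8; i++)
                t = (t << 8) | buf[i];     /* big-endian TAI64 label */
            leapsecs[leapcount++] = t;
        }
        fclose(f);
        return leapcount;
    }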

"D. J. Bernstein" wrote on 1998-05-29 01:48 UTC:
On most systems, using TAI is quite problematic, because most time services publish exclusively UTC and not TAI (GPS being the notable exception). This means a TAI clock is doomed to go wrong without periodic manual intervention (leap seconds can be missed during downtimes). Therefore, I favour UTC as a timescale in computer applications.

There are two possible ways of representing leap seconds in a UTC second count:

(a) if you have a POSIX.1b-style 32-bit nanosecond register that indicates the nanoseconds that have passed since the start of the last second, then just keep the second counter at 23:59:59 during the leap second and run the nanosecond counter from 1_000_000_000 up to 1_999_999_999; or

(b) define your integer time scale such that every day has 24*3600+1 seconds, i.e. you reserve a code for a potential leap second at the end of every day (or month).

I prefer (a). It keeps us within the normally used UTC timebase and doesn't make things unnecessarily complicated on non-leap-second-aware systems. If you check out comp.std.unix, you'll see my posting about introducing in POSIX a CLOCK_UTC clock that is only present if the kernel has had a recent time service update and that doubles the nanosecond range during leap seconds.

TAI and a correct difftime are practically never needed in normal computer applications. A correct difftime implementation on POSIX systems, where the time_t scale explicitly does not provide codes for leap seconds, is a joke of inconsistency. Leap seconds are a concern in distributed systems where strictly monotonic precision timestamps are necessary (e.g., banking databases), but here the two-gigananosecond (2 Gns) approach (a) works nicely. TAI is only of concern in very special purpose systems such as navigation and astronomical/geological observations. It's nice to have TAI available, but it should not be used as the primary timescale. It is just an auxiliary timescale that happens to be accessible on systems with a built-in GPS receiver or with some future TAI-enhanced NTP version.

My preferred timestamp format would be the UTC96/2000 format, i.e. a signed big-endian 64-bit second counter starting with 0 at 2000-01-01 00:00:00Z, followed by a 32-bit big-endian nanosecond counter. Alternative nice epoch start dates could be the year 0 (also known as 1 B.C.) of the Gregorian (!) calendar, or the year 1875 (when the Metric Convention was signed in Paris and the Gregorian calendar was already widely implemented). 0 simplifies the implementation of conversion routines slightly but might cause historic confusion.

Attosecond timescales are practically useless for the foreseeable future. The best cesium clocks in the world (currently the CS1 and CS2 operated by the PTB in Braunschweig) do not reach 1 ns precision, nor does the USNO's clock-array-regulated hydrogen maser in Washington DC. Electric impulses in wires travel only around 20 cm per nanosecond, and relativistic effects really start to make things confusing if you want to build clocks with better than 1 ns precision. So your TAI64NA sounds to me much like an overkill specification.

Markus

--
Markus G. Kuhn, Security Group, Computer Lab, Cambridge University, UK
email: mkuhn at acm.org, home page: <http://www.cl.cam.ac.uk/~mgk25/>
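A minimal sketch of representation (a) above, assuming the (tv_sec, tv_nsec) pair comes from a CLOCK_UTC-style source as described; format_utc() is a hypothetical helper, not an existing interface.

    #include <stdio.h>
    #include <time.h>

    /* During an inserted leap second the second counter stays at
       23:59:59 and tv_nsec runs from 1_000_000_000 to 1_999_999_999;
       this prints such a reading with the seconds field showing :60. */
    void format_utc(time_t sec, long nsec, char *out, size_t outlen)
    {
        struct tm tm;
        int leap = 0;
        if (nsec >= 1000000000L) {      /* inside an inserted leap second */
            nsec -= 1000000000L;
            leap = 1;
        }
        gmtime_r(&sec, &tm);
        snprintf(out, outlen, "%04d-%02d-%02d %02d:%02d:%02d.%09ld",
                 tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
                 tm.tm_hour, tm.tm_min, tm.tm_sec + leap, nsec);
    }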

(See http://pobox.com/~djb/proto/utctai.html for a quick introduction to leap-second issues.) Markus Kuhn writes:
This means a TAI clock is doomed to go wrong without periodic manual intervention
You have the situation precisely backwards. For computers without any outside input, ticking UTC is impossible by definition. Ticking TAI requires nothing more than an internal clock. For computers with outside input, the inherent cost of an occasional leap-second-table update is minuscule. Leap seconds are announced several months in advance.
Therefore, I favour UTC as a timescale in computer applications.
You have been outvoted by thousands of programmers who subtract UNIX times to compute real-time differences.
TAI is only of concern in very special purpose systems such as navigation and astronomical/geological observations.
Nonsense. One of the most basic code optimization techniques is to try several code alternatives on an unloaded system and time each one to see what's fastest. Many packages do this automatically during installation. What happens if someone installs such a package during a leap second? Saying ``well, they should use RDTSC or gethrtime() or CLOCK_RIGHT'' is missing the point. They _don't_. Telling all of them to change, for the sake of a minor simplification in xntpd, is poor engineering.
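This is roughly what such installation-time timing code looks like; a minimal sketch, with the routine under test passed in as a function pointer. If the system clock is stepped or slewed for a leap second between the two readings, the measurement is silently wrong.

    #include <sys/time.h>

    /* Time 1000 calls of a candidate routine by subtracting two
       readings of the wall clock, the pattern described above. */
    double time_candidate(void (*candidate)(void))
    {
        struct timeval t0, t1;
        gettimeofday(&t0, NULL);
        for (int i = 0; i < 1000; i++)
            candidate();
        gettimeofday(&t1, NULL);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    }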
Attosecond timescales are practically useless for the foreseeable future.
Nanosecond timescales are woefully inadequate for certain applications. Anyway, you should learn to read more carefully; /etc/leapsecs.dat uses TAI64, which is an 8-byte scale providing 1-second precision. ---Dan

"D. J. Bernstein" wrote on 1998-05-30 19:44 UTC:
Markus Kuhn writes:
This means a TAI clock is doomed to go wrong without periodic manual intervention
You have the situation precisely backwards.
For computers without any outside input, ticking UTC is impossible by definition.
Yes, but your next sentence shows that you misunderstood the reason why:
Ticking TAI requires nothing more than an internal clock.
No. The rate at which UTC and TAI drift apart is two orders of magnitude smaller than the frequency error of the majority of computer clocks out there. Your "computers without any outside input" are after a couple of months *far* away from both TAI and UTC. These computers are therefore not of any concern here, because their operators obviously are not concerned about the accuracy of their time.

OK, so now we are only talking about computers *with* automatic outside time input. These usually receive UTC today, because UTC is, as specified in the various ITU-R TF.* recommendations, the time scale used for international time and frequency broadcast signals. UTC (but not TAI) is broadcast by WWV, DCF77, DVB-SI, DAB, various teletext carriers, NTP, and many more. Only navigation systems such as Omega and GPS provide you with TAI; in the case of GPS, both scales are provided.
Leap seconds are announced several months in advance.
Yes. In a circular letter and on a web page by the IERS and USNO. Feeding this information into computers then requires manual intervention unless we establish some leap-second history update protocol for all computers on this planet. I can't believe that in the foreseeable future more than a small minority of installed systems will get this information in time. Therefore, any practically usable timescale must today be derived from UTC and not from TAI. This is what NTP does and this is what POSIX/Unix does, for very good reasons.
Therefore, I favour UTC as a timescale in computer applications.
You have been outvoted by thousands of programmers who subtract UNIX times to compute real-time differences.
You are either mixing up concepts here completely, or you are quite inexperienced in timing issues. In order to understand what "thousands of Unix programmers" are doing, you should have a look at section 2.2.2.113 of POSIX.1 (ISO/IEC 9945-1:1996 or ANSI/IEEE Std 1003.1-1996): the POSIX time_t scale is a count of seconds since the epoch, where the term "seconds since the epoch" is strictly defined in a way that confuses people who haven't read the spec carefully: we do not mean the real number of seconds, but the seconds without counting inserted leap seconds.

time_t is a UTC scale encoding according to the following algorithm: let a struct tm contain what a UTC clock displays. Then the corresponding time_t value is determined by

    tm_sec + tm_min*60 + tm_hour*3600 + tm_yday*86400
    + (tm_year-70)*31_536_000 + ((tm_year - 69)/4)*86400

This algorithm allows one to convert time_t into UTC YYYY-MM-DD hh:mm:ss without any additional information (leap second table); therefore time_t is equivalent to a UTC time and certainly not to a TAI time. To convert time_t into TAI, you need a leap second table, which practically no system on this planet has (systems operated by members of the tz mailing list excluded, of course ;-).
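Written out as C, the quoted POSIX.1 expression is simply the following; a direct transcription of the formula above, which counts every day as 86400 seconds and treats every fourth year as a leap year.

    #include <time.h>

    /* "Seconds since the Epoch" as defined by POSIX.1 (1996 edition),
       computed from a broken-down UTC time. */
    long posix_seconds_since_epoch(const struct tm *tm)
    {
        return tm->tm_sec
             + tm->tm_min  * 60L
             + tm->tm_hour * 3600L
             + tm->tm_yday * 86400L
             + (tm->tm_year - 70) * 31536000L
             + ((tm->tm_year - 69) / 4) * 86400L;
    }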
TAI is only of concern in very special purpose systems such as navigation and astronomical/geological observations.
Nonsense. One of the most basic code optimization techniques is to try several code alternatives on an unloaded system and time each one to see what's fastest. Many packages do this automatically during installation. What happens if someone installs such a package during a leap second?
OK, slowly I fully understand your confusion. You should differentiate more carefully between TAI and "some monotonic, leap-second-free second count". These two are by no means the same. A monotonic second count is trivial to implement. TAI is difficult to implement, because to get TAI from the easily available UTC time, you need an up-to-date leap-second table.

I guess what you really want is something like CLOCK_MONOTONIC as specified in one of the more recent POSIX drafts (see the current discussion in comp.std.unix) and *not* a TAI clock. These are two very different functions. POSIX's CLOCK_MONOTONIC (previously called CLOCK_FUNDAMENTAL) is guaranteed to have no jumps and to be available right after system startup. CLOCK_TAI is guaranteed to represent TAI within some reasonable absolute accuracy (say 100 ms).
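A minimal sketch of that distinction, assuming a system that provides the POSIX.1b clock_gettime() interface with CLOCK_MONOTONIC: interval measurements read the monotonic clock, and conversion to TAI or UTC is a separate problem that needs the leap-second table.

    #include <time.h>

    static double elapsed(const struct timespec *t0, const struct timespec *t1)
    {
        return (t1->tv_sec - t0->tv_sec) + (t1->tv_nsec - t0->tv_nsec) / 1e9;
    }

    /* Measure a routine with the jump-free monotonic clock; no notion
       of TAI or UTC is involved in the measurement itself. */
    double measure(void (*work)(void))
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        work();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return elapsed(&t0, &t1);
    }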
Saying ``well, they should use RDTSC or gethrtime() or CLOCK_RIGHT'' is missing the point. They _don't_.
Bad training of software engineers leads to bad products. So what?
Telling all of them to change, for the sake of a minor simplification in xntpd, is poor engineering.
Who says that the gettimeofday() or CLOCK_REALTIME clock should follow UTC precisely? I fully favour adding to POSIX a CLOCK_UTC that represents leap seconds by counting in tv_nsec from 1_000_000_000 to 1_999_999_999 while keeping the code for 23:59:59Z in tv_sec.

CLOCK_MONOTONIC will just have a value that is one higher than the value it had a second ago, and nobody guarantees *anything* else about CLOCK_MONOTONIC. It can typically be a second counter since the last boot. It has nothing to do with TAI.

For CLOCK_MONOTONIC (and the equivalent gettimeofday) we unfortunately do not have any definition of what the value near a leap second should be. A reasonable hack is, for instance, what electricity companies do after leap seconds: they reduce the frequency from (say) 60 Hz to 59 Hz for one minute, and this way all UTC clocks that get their reference frequency from the power network follow the UTC timescale smoothly in phase. Many Unix systems have an adjtime() call that performs phase adjustments to the kernel clock smoothly by reducing or increasing the clock's reference frequency by 1% until the phases match again. If you reduce the allowed skew to 500 ppm (a reasonable upper limit for the worst crystal you will find in not completely broken computer clock circuits), then you will need 2000 seconds to get back in synchronization with UTC.
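The slew arithmetic in that last sentence is easy to check; a tiny illustration using only the numbers quoted above (1 s phase error, 500 ppm or 1% maximum rate change).

    #include <stdio.h>

    /* Seconds needed to slew out a phase error at a bounded rate offset. */
    double slew_seconds(double phase_error_s, double max_skew_ppm)
    {
        return phase_error_s / (max_skew_ppm * 1e-6);
    }

    int main(void)
    {
        printf("%g s\n", slew_seconds(1.0, 500.0));    /* 2000 s */
        printf("%g s\n", slew_seconds(1.0, 10000.0));  /* 100 s at 1% */
        return 0;
    }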
Attosecond timescales are practically useless for the foreseeable future.
Nanosecond timescales are woefully inadequate for certain applications.
They are a nice lower limit for a range of useful resolutions and they are convenient to implement on 32-bit machines.
Anyway, you should learn to read more carefully; /etc/leapsecs.dat uses TAI64, which is an 8-byte scale providing 1-second precision.
You should learn to read more carefully: my posting was carefully written not to contain any references to /etc/leapsecs.dat. I was generally discussing your Web page, which as I understood it advocates using TAI as a generally preferable integer timestamping scale (such as time_t), which I am convinced is a fatally bad engineering decision (except in atomic-clock-driven navigation systems).

Maybe we are in violent agreement and are actually arguing in the same direction, and you just used TAI as a poor expression for "rate-monotonic clock"; but if your vision really is to implement a kernel clock in a way that allows conversion to/from TAI (or GPS or ET) without a leap second table, then I feel that this is not a good design decision, for the reasons pointed out above.

Markus

--
Markus G. Kuhn, Security Group, Computer Lab, Cambridge University, UK
email: mkuhn at acm.org, home page: <http://www.cl.cam.ac.uk/~mgk25/>

I'm interested in what works, not in religious arguments.

There's a huge amount of code that subtracts UNIX times to compute real-time differences. There's a much, much, much smaller amount of code that converts UNIX times to local times.

I want accurate local-time displays. I need accurate real-time differences. I solve both problems by setting my UNIX clock to TAI-10:

  * http://pobox.com/~djb/clockspeed.html converts NTP to TAI-10;
  * the tz library converts TAI-10 to right/US/Central local time.

Unlike Kuhn's pie-in-the-sky suggestions, this all works _right now_. I don't have to convince thousands of programmers to abandon ``t1-t0''.

Of course, when a new leap second is announced, I have to regenerate several hundred zone files, using source that wasn't shipped with my OS. I can deal with this. Most users can't. What I'm suggesting is that tz read leap-second data from a single file, such as /etc/leapsecs.dat.
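A minimal sketch of the local-time half of this setup, assuming the tz library's "right" zone data is installed and the system clock already runs on TAI-10 as described above; the zone name is the one from the message, everything else is illustrative.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        time_t now = time(NULL);        /* TAI-10 on a clock set up as above */
        setenv("TZ", "right/US/Central", 1);
        tzset();
        struct tm *lt = localtime(&now);    /* "right" zones include leap seconds */
        char buf[64];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %Z", lt);
        printf("%s\n", buf);
        return 0;
    }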
Your "computers without any outside input" are after a couple of months *far* away from both TAI and UTC.
Actually, with most clocks, it's easy to keep the error below 1 second for the entire lifetime of the computer. The real issue is instability, not inaccuracy; any serious clock-handling program will compensate for inaccuracy given two time measurements.
These computers are therefore not of any concern here,
False. Accurate time differences are often crucial whether or not the local-time display is accurate.
Feeding this information into computers then requires manual intervention unless we establish some leap second history update protocol for all computers on this planet.
False. A new protocol is not necessary, since NTP is able to transmit leap-second warnings. (However, a new protocol would be a good idea for several obvious reasons.)
You have been outvoted by thousands of programmers who subtract UNIX times to compute real-time differences.

You are either mixing up concepts here completely, or you are quite inexperienced in timing issues. [ ... ]

    tm_sec + tm_min*60 + tm_hour*3600 + tm_yday*86400
    + (tm_year-70)*31_536_000 + ((tm_year - 69)/4)*86400
Don't be an idiot. 2100 is not going to be a leap year. Anyway, I'm perfectly aware that POSIX has documented the half-assed behavior of some obsolete vendor libraries. This has no relevance to the current discussion; I'm not using those libraries.
To convert time_t into TAI, you need a leap second table, which practically no system on this planet has (systems operated by members of the tz mailing list excluded of course ;-).
False. Several vendors ship the tz library with the right time zones. ---Dan

"D. J. Bernstein" wrote on 1998-05-31 03:44 UTC:
I'm interested in what works, not in religious arguments.
Same here! 8)
There's a huge amount of code that subtracts UNIX times to compute real-time differences. There's a much, much, much smaller amount of code that converts UNIX times to local times.
I am not sure this is really true: The code that subtracts time usually semantically does things like t = t + "one day". If one day is represented by (time_t)86400, then with the POSIX definition of time_t, what you get is the same time on the next day, which is what users usually expect, no matter whether there was a leap second in between or not.
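A small illustration of that point, assuming a POSIX time_t (every day counted as exactly 86400 seconds), so adding 86400 always lands on the same UTC time of day.

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t t = time(NULL);
        time_t tomorrow = t + 86400;    /* "one day" in POSIX time_t terms */
        char a[32], b[32];
        strftime(a, sizeof a, "%H:%M:%S", gmtime(&t));
        strftime(b, sizeof b, "%H:%M:%S", gmtime(&tomorrow));
        printf("today    %s UTC\ntomorrow %s UTC\n", a, b);  /* same time of day */
        return 0;
    }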
Your "computers without any outside input" are after a couple of months *far* away from both TAI and UTC.
Actually, with most clocks, it's easy to keep the error below 1 second for the entire lifetime of the computer.
In my experience, this works only for computers in well air-conditioned rooms that are rarely switched off. Opening a window in winter near my PC causes the clock frequency to change by 20 ppm due to the temperature drop, i.e. for a crystal calibrated on a hot summer day your 1 s error has accumulated in less than a day (20 ppm of 86400 s is about 1.7 s). I could even see the server room's air-conditioning duty cycle in the NTP log files of the stratum-1 server at the University of Erlangen. For a 1 s error over a computer's lifetime (say 5 years), you had better invest in a small oven that keeps the crystal temperature controlled at one of the extrema of the crystal's frequency-versus-temperature curve.
False. Accurate time differences are often crucial whether or not the local-time display is accurate.
Accuracy of local time display is probably a cultural thing. When I was in the U.S., I rarely saw anywhere a clock with an accurate time display. Most were at least 3-8 minutes wrong. The media doesn't broadcast any time signals, and if they do, they are often over a minute wrong (as seen several times on CNN HN). The only sources of accurate time I have seen in the U.S. were NTP computers, GPS receivers, and my shortwave radio when tuned to WWV.

In Europe, it is customary that most radio and TV stations send beeps as a precise hour marker at the beginning of the news, and people set their watches accordingly. In Central Europe, modern low-cost radio alarm clocks now typically contain a DCF77 receiver, since the hardware cost of such a time receiver is just around 10 USD. Railway stations also have radio clocks with the precise time, as do many church bell towers. Since local time with subsecond accuracy is available so widely, users also like their computers to be as accurate as the BBC and their radio clock with regard to the time display. In the U.S., on the other hand, GPS servers that displayed local time with the GPS-UTC offset added were sold for quite some time without anybody even noticing that the displayed time was off by several seconds.

Please be aware that your personal opinion about the desirability of accurate local-time display might be seriously regionally biased. In Europe, many people will immediately call support if their supposedly synchronized clocks are five seconds off compared to the radio news beeps.
Feeding this information into computers then requires manual intervention unless we establish some leap second history update protocol for all computers on this planet.
False. A new protocol is not necessary, since NTP is able to transmit leap-second warnings. (However, a new protocol would be a good idea for several obvious reasons.)
Last time I looked at NTP, it only announced the next leap second and did not say what the current TAI-UTC difference is. So NTP does not provide the information to convert reliably between UTC and TAI. I guess this can and should be changed, of course, in the next NTP revision. Once this is done, you will be able to select between at least four clocks on an NTP host:

CLOCK_UTC shows UTC, with leap seconds counting tv_nsec from 1e9 to 2e9-1, and is unavailable if the synchronization source has been interrupted for some time. This is for highly reliable timestamps (e.g., financial transaction systems, timestamping services, etc.).

CLOCK_TAI shows TAI without any leap seconds and is unavailable if the synchronization source has been interrupted for some time. I don't expect many systems to have CLOCK_TAI available. This is for navigation systems, astronomers, geologists, etc.

CLOCK_MONOTONIC shows an always-available second counter that never jumps and that is not guaranteed to be related to any absolute time scale. This is what t1-t2 programmers should use. It can be identical to CLOCK_TAI if CLOCK_TAI was available at boot time, but it does not need to be. A typical PC implementation will probably read the CMOS clock at boot, interpret the time in there as UTC, set CLOCK_MONOTONIC accordingly once, and never correct its phase later. The system is allowed to adjust the frequency of CLOCK_MONOTONIC by up to 200 ppm once the frequency error of the clock has been determined when external synchronization becomes available.

CLOCK_REALTIME is a best-effort estimation of UTC that also takes t1-t2 usage in existing systems into account. This value is also returned by gettimeofday(). It is low-pass filtered to smooth out leap-second phase jumps over a couple of minutes, and it continues to run freely even when CLOCK_UTC is unavailable. It is adjusted to CLOCK_UTC, once available, by changing its frequency by up to 1%. If the system discovers that CLOCK_REALTIME - CLOCK_UTC is more than 1000 seconds off, then a syslog warning is issued and CLOCK_REALTIME jumps brutally to CLOCK_UTC without smoothing. This brutal jump should happen only once, when the system is first installed. CLOCK_REALTIME is a compromise for backwards compatibility with the existing practice of t1-t2 gettimeofday() usage. New applications should use CLOCK_MONOTONIC instead where available, because the rate of CLOCK_REALTIME can be up to 10000 ppm off while the rate of CLOCK_MONOTONIC is typically much better than 200 ppm.

What is wrong with this clock API? POSIX.1b already provides the interface for accessing several flavours of clocks, so why not just use it?
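A sketch of how an application might select among these clocks. CLOCK_MONOTONIC and CLOCK_REALTIME are in the POSIX.1b drafts; CLOCK_UTC is only the proposal above, so it is guarded here because no shipping system defines it.

    #include <time.h>

    /* Timestamps: prefer a leap-second-aware UTC clock if the system
       offers one, otherwise fall back to the best-effort CLOCK_REALTIME. */
    int get_timestamp(struct timespec *ts)
    {
    #ifdef CLOCK_UTC
        if (clock_gettime(CLOCK_UTC, ts) == 0)
            return 0;
    #endif
        return clock_gettime(CLOCK_REALTIME, ts);
    }

    /* Interval measurements: always use the jump-free monotonic clock. */
    int get_interval_reading(struct timespec *ts)
    {
        return clock_gettime(CLOCK_MONOTONIC, ts);
    }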
tm_sec + tm_min*60 + tm_hour*3600 + tm_yday*86_400 + (tm_year-70)*31_536_000 + ((tm_year - 69)/4)*86_400
Don't be an idiot. 2100 is not going to be a leap year.
Only an idiot would feel better after implementing the correct leap-year formula for a 32-bit time_t. The above formula, as quoted from POSIX.1, was only designed to work in the tm_year range 1970..2038, since time_t is on most systems a signed 32-bit integer. Implementing the correct leap-year formula when using a 32-bit time_t would just demonstrate the programmer's ignorance of the int overflow. But don't worry, a whole army of S2G consultants is already waiting to set up the next generation of panic web pages once we have survived Y2K. :)

Markus

--
Markus G. Kuhn, Security Group, Computer Lab, Cambridge University, UK
email: mkuhn at acm.org, home page: <http://www.cl.cam.ac.uk/~mgk25/>

Markus Kuhn writes:
code that subtracts time usually does
False. Almost all time-handling code (1) computes a real-time difference (a ``timing'') as the difference of two current-time measurements or (2) converts a current-time measurement to a local-time display by calling the appropriate system library routine. Why do you persist in ignoring #1? (Answer: Because that code is not compatible with your religious views.)
t = t + "one day". [ ... ] what you get is the same time on the next day,
Nonsense. What's the ``same time on the next day'' after 23:59:60 UTC? Anyway, code of that type generally does not mean what you say it means. See http://pobox.com/~djb/docs/time/01.txt for an example.
So NTP does not provide the information to convert reliably between UTC and TAI.
False. The client can keep its leap-second table continuously up to date, provided that the server uses the leap-second warnings properly. (The fragility of this system is one of the obvious reasons that a new protocol would be a good idea.)
gettimeofday(). It is low-pass filtered to smooth out leap second phase jumps over a couple of minutes, [ ... ] What is wrong with this clock API?
It's shoddy engineering. You're producing occasional 0.8% errors in timings (1 second smoothed out over a couple of minutes is a rate error of roughly 0.8%) and occasional 1-second errors in local-time displays.
Implementing the correct leap year formula when using a 32-bit time_t would just demonstrate the programmer's ignorance of the int overflow.
Using the wrong formula is incredibly shortsighted. Many systems will switch to a 64-bit time_t within the next few years. ---Dan