
Date: Wed, 07 Oct 1998 10:02:04 +0100
From: Markus Kuhn <Markus.Kuhn@cl.cam.ac.uk>

> If you have a TAI timestamp and want to print it as a TAI timestamp,
> then you just pass to strfxtime the TAI timestamp and specify NULL as
> the time zone.

First, this doesn't always work under the current struct xtime proposal.
E.g. strfxtime with a NULL time zone expands "%Z" to something like
"UTC", which isn't correct for TAI timestamps.

Second, even assuming the user avoids buggy formats like "%Z", the
resulting information is useless in practice.  Who wants broken-down TAI
times?  At bottom, TAI is an integer, not a Gregorian calendar date-time
indication.  It is much more useful to convert the TAI timestamps to
broken-down civil times.

Third, I agree that for TIME_LOCAL, strfxtime etc. return useful
information in most cases, but the buggy formats like %Z are still
buggy.  This should get fixed.

Fourth, for TIME_MONOTONIC, TIME_PROCESS, or TIME_THREAD, strfxtime
etc. return garbage, no matter how you slice it.

So more work is needed here; this area is too confusing and error-prone.

> I think this is a much nicer embedding of both UTC and TAI into the
> scheme of things than having to pass on to xtime_breakup(),
> xtime_make(), and strfxtime() a separate parameter that tells whether
> we are using a UTC or a TAI timestamp.

You don't need to pass a clock type to those functions to get things to
work correctly; all you need to do is pass a clock type to tz_prep.
tz_prep can store the clock type information somewhere in the
timezone_t value.
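To make the tz_prep idea concrete, here is a minimal sketch of a
timezone_t that records which clock it applies to.  All type names,
fields, and function signatures below are hypothetical illustrations,
not definitions from the actual proposal:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch: tz_prep records whether the zone applies to a
 * UTC or a TAI clock, so xtime_breakup, xtime_make, and strfxtime need
 * no separate clock-type parameter. */

enum clock_kind { CLOCK_KIND_UTC, CLOCK_KIND_TAI };

typedef struct timezone_t {
    enum clock_kind kind;   /* stored by tz_prep, consulted later */
    char tzname[32];        /* what %Z could expand to */
} timezone_t;

/* tz_prep: build a timezone_t; tzstring parsing is elided here. */
static timezone_t *tz_prep(const char *tzstring, enum clock_kind kind)
{
    timezone_t *tz = malloc(sizeof *tz);
    if (!tz)
        return NULL;
    tz->kind = kind;
    strcpy(tz->tzname, kind == CLOCK_KIND_TAI ? "TAI" : "UTC");
    (void) tzstring;  /* a real implementation would parse this */
    return tz;
}

/* What a %Z expansion could consult, given the stored clock kind. */
static const char *tz_zone_name(const timezone_t *tz)
{
    return tz->tzname;
}
```

With this arrangement, a formatting routine handed a TAI zone would
expand %Z to "TAI" rather than wrongly claiming "UTC".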
> > Suppose an application constructs a leap second timestamp (i.e. a
> > struct xtime value with 1000000000<=nsec<2000000000), and passes
> > this timestamp to an xtime primitive that expects a TIME_UTC
> > timestamp.
> I don't see why an application would want to construct a bogus leap
> second.  I see no reason to do this.

One scenario is that the application is importing textual data that
ends in :60.  The textual data might well be bogus, and the application
should be able to check it.  This sort of thing is quite common in
portable applications that exchange timestamps.

> If the sec field is 59 mod 60, then there exists at least a way to
> display the leap second, and if the sec field is 86399 mod 86400,
> then the leap second smells already genuine since it comes at the end
> of a UTC day.  Both tests are trivial.

Yes, it's easy to detect _many_ bogus timestamps, but that's not
sufficient.  You need a way to detect _all_ bogus timestamps.  And if
the application doesn't have a full leap second table, then you need a
way to determine whether the timestamp is known to be bogus, known to
be valid, or neither.
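The two "trivial tests" above can be sketched as follows, assuming the
struct xtime encoding in which an inserted leap second carries
1000000000 <= nsec < 2000000000; the struct layout and helper names are
illustrative:

```c
/* Illustrative struct xtime layout (sec counts non-leap seconds). */
struct xtime { long long sec; long nsec; };

/* Displayable: the purported leap second falls at the end of some
 * minute (sec = 59 mod 60). */
static int leap_ends_minute(const struct xtime *t)
{
    return t->nsec >= 1000000000L && t->sec % 60 == 59;
}

/* Plausible: it also falls at the end of a UTC day (sec = 86399 mod
 * 86400), as genuine inserted leap seconds do. */
static int leap_ends_day(const struct xtime *t)
{
    return t->nsec >= 1000000000L && t->sec % 86400 == 86399;
}
```

Both checks are cheap, but as noted above they only weed out many bogus
values, not all of them.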
> > Then the behavior is undefined unless the timestamp is not bogus
> > (i.e. it is within a true inserted leap second) _and_ the timestamp
> > is known (i.e. the implementation knows about the leap second).
> > (The current spec doesn't say all this explicitly, but it certainly
> > implies it.)
> Just to make sure: We are only talking about xtime_conv and tz_jump
> here, right?  The other functions do not care about bogus leap
> seconds as long as they come at the end of a minute (sec = 59 (mod
> 60)).

No, I'm talking about all the functions.  You write ``the other
functions do not care'' because you have a particular implementation in
mind that allows bogus leap seconds without checking them.  But it's
unwise for the spec to _require_ such implementations.  The spec should
allow higher-quality implementations that do not allow bogus leap
seconds at all.  Such implementations currently include the Olson code
(in "right" mode) and Bernstein's libtai.
> > So the application needs a way to find out whether a purported leap
> > second is a known leap second....
> My suggestion is to use xtime_cmp() in order to define a total order
> on all known leap seconds and the parameter timestamp, so as to
> locate the next higher/lower known leap second in a sorted list of
> leap seconds.

Sorry, I don't understand this suggestion.  Are you proposing that the
program initially create a table of known leap seconds by iterating
through tz_jump?  This sounds awkward, and anyway it won't work
reliably in long-running programs, as new leap seconds may become known
to the implementation during execution.
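For concreteness, one possible reading of the xtime_cmp suggestion is a
binary search over a sorted table of known leap seconds.  The struct
layout, helper names, and table entries below are all illustrative
assumptions, not the proposal's definitions:

```c
struct xtime { long long sec; long nsec; };

/* Total order on struct xtime values, in the spirit of xtime_cmp. */
static int xtime_cmp(const struct xtime *a, const struct xtime *b)
{
    if (a->sec != b->sec)
        return a->sec < b->sec ? -1 : 1;
    if (a->nsec != b->nsec)
        return a->nsec < b->nsec ? -1 : 1;
    return 0;
}

/* Binary-search a sorted table of known leap seconds (each entry
 * stored with nsec = 1000000000, the start of the inserted second). */
static int leap_is_known(const struct xtime *t,
                         const struct xtime table[], int n)
{
    /* Normalize: any instant inside the leap second matches its
     * table entry. */
    struct xtime key = { t->sec, 1000000000L };
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        int c = xtime_cmp(&key, &table[mid]);
        if (c == 0)
            return 1;
        if (c < 0)
            hi = mid - 1;
        else
            lo = mid + 1;
    }
    return 0;
}
```

Even granting this reading, the objection above stands: the table must
be rebuilt whenever the implementation learns of a new leap second.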
> > One way to fix this particular problem would be to change the spec
> > so that every primitive reports an error when given a bogus or
> > unknown leap second.
> No, this is not at all a good idea.

I tend to agree; I was proposing it as a possible fix, but suggested a
better fix later (the generalized TIME_TAI and modified TIME_UTC
discussed below).  However, regardless of whether the fix below is
adopted, I think the implementation should be allowed to report an
error when it detects a bogus or unknown leap second.

> Just use xtime_cmp() to compare timestamps, and build tz_jump
> completely on top of xtime_cmp.

Sorry, I don't follow this suggestion.  You can implement UTC tz_jump
by iterating through all potential leap seconds starting from 1972 and
ending one year from now; you don't need xtime_cmp.  Implementing
local-time tz_jump is much harder; in principle you need to iterate
through every second (though in practice 8-day iterations suffice).
But I don't see why the implementer would build tz_jump that way,
unless he was layered above another level that didn't disclose its leap
seconds directly.  So I don't see where your suggestion is headed.
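Once the implementation's leap second table is in hand, a UTC tz_jump
reduces to a simple scan for the next known discontinuity.  A minimal
sketch, with an invented signature and made-up table entries (the real
tz_jump operates on struct xtime values and a timezone_t):

```c
/* Sketch: return the earliest known UTC discontinuity (inserted leap
 * second, as a seconds count) at or after threshold t, or -1 if none
 * is known.  The table is assumed sorted ascending. */
static long long tz_jump_utc(long long t,
                             const long long leaps[], int n)
{
    for (int i = 0; i < n; i++)
        if (leaps[i] >= t)
            return leaps[i];
    return -1;  /* no known discontinuity at or after t */
}
```

No xtime_cmp is needed for this; a local-time tz_jump is the hard case,
since zone offset changes are not confined to a small table of
candidate instants.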
> > Suppose implementation X internally uses a clock with TAI seconds,
> > but it doesn't know the TAI epoch;
> OK, I assume your "internal clock with TAI seconds" could be one that
> I could pass directly to the user via TIME_MONOTONIC,

Yes.

> and the epoch could for instance be a "few months ago" (say system
> installation time).

Only if the installation time is at a TAI second boundary, which is
unlikely.
> > First, implementation X can't support struct xtime's TIME_TAI, as
> > this requires knowledge of the TAI epoch.
> If we don't know the current TIME_UTC-TIME_TAI and also not the
> current TIME_MONOTONIC-TIME_TAI, then there is very little we can do
> to provide TAI at all to the user,

Yes you can!  You can provide a clock that ticks TAI seconds with an
unknown epoch.  You can also convert this clock to UTC, so long as the
requested times are within the window of known leap seconds.  All this
is useful information.
> > This seems backwards to me; fundamentally, the implementation is
> > based on TAI not UTC, and it should have some way to report this to
> > the application.
> How can the implementation be based fundamentally on TAI but not know
> the TAI epoch?  Because it gets its timestamps from GPS?

Please make sure you have fully understood the difference between
TIME_MONOTONIC and TIME_TAI.

> Both are very similar in that they do not have leap seconds, and the
> only difference is that TIME_TAI has a known epoch, while
> TIME_MONOTONIC has not.

That's not the only difference!  There are two others:

(1) TIME_MONOTONIC is monotonically nondecreasing (hence its name),
whereas TIME_TAI can go backwards when the clock is reset.

(2) TIME_MONOTONIC's seconds may start right in the middle of a TAI
second with no problem, whereas TIME_TAI's seconds are supposed to be
synchronous with true TAI seconds (perhaps with some small error).

> I think you are thinking about a (quite reasonable) implementation,
> that runs its internal clocks on a TIME_MONOTONIC that had its epoch
> when someone inserted the batteries into the device and that was
> occasionally frequency calibrated when UTC became available over a
> longer period of time.

No, I'm assuming something stronger than that for TIME_TAI.  I'm
assuming not only that it became frequency calibrated, but also that it
became ``phase-calibrated'' (sorry, I don't know the lingo, but what I
mean is that its second ticks should be very close to the true TAI
second ticks), and also that for a certain window around the current
time, the relationship between TIME_TAI and UTC is known.
From what you've written, these are both reasonable assumptions for the devices that you're talking about.
> > Second, implementation X can't convert timestamps outside its
> > window to internal format, so it will reject attempts to invoke
> > primitives involving TIME_UTC timestamps outside its window, as the
> > internal format of those timestamps would be undefined.
> Please tell me exactly which operations in my API you think would
> fail.

All operations that take out-of-window struct xtime values would fail,
because implementation X converts struct xtime to its internal format
(TAI within a window) in order to do all operations.  This is a
reasonable implementation.

> Remember that only tz_jump and xtime_conv depend on the leap second
> history, all other functions do not care about the leap second
> history.

This is true of the implementation that you're thinking of, but I don't
think the C standard should require such an implementation; it should
also allow implementations that are fundamentally TAI based as
described above.

> If you know only all leap seconds since you inserted the batteries
> and TIME_MONOTONIC started, then xtime_conv will not allow you to
> convert a TIME_UTC value from before you inserted the batteries to a
> negative TIME_MONOTONIC value.  But why would you want to do this,
> since you never received a corresponding clock value from the clock
> anyway?

Because you imported a leap-second timestamp from some other source,
e.g. from a correspondent over the network using the draft IETF
calendar spec.
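Implementation X's behavior amounts to a conversion guard at the front
of every primitive.  A minimal sketch, where the window bounds, offset,
and names are invented for illustration:

```c
/* Window of UTC seconds for which the UTC-TAI relationship is known
 * to this implementation (values are arbitrary examples). */
static const long long window_start = 600000000;
static const long long window_end   = 1000000000;

/* Convert a UTC seconds count to the internal TAI-based count.
 * Returns 0 and stores the result on success, -1 if the timestamp
 * lies outside the window (its internal form would be undefined).
 * utc_tai_offset models the known offset inside the window. */
static int to_internal(long long utc_sec, long long utc_tai_offset,
                       long long *internal)
{
    if (utc_sec < window_start || utc_sec > window_end)
        return -1;
    *internal = utc_sec + utc_tai_offset;
    return 0;
}
```

Every primitive of implementation X would funnel its struct xtime
arguments through a guard of this shape, which is why all of them fail
on out-of-window values.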
> > 3. The struct xtime spec does not support applications that do not
> > want to know about leap seconds and do not ever want to see them...
> First of all, the existing POSIX functions as well as BSD's
> gettimeofday are continuing to exist.

Not all applications run on POSIX or BSD.  And it's messy and
error-prone for implementations to combine BSD gettimeofday or
POSIX.1-1996 clock_gettime with struct xtime.  We should give people a
simple way to do common things.

> You can always call xtime_delay yourself if you are inside a leap
> second and call xtime_get again afterwards.

This is the sort of hack that most users are unlikely to think of on
their own -- and also it will impose unacceptable delays on some apps.
We need a simple way to get the desired values, without delays.

> If you want to get a special filter (e.g., the US 60 Hz power grid
> leap rule or the BSD 10000 ppm adjtimex() rule), then you can easily
> implement this specific rule using tz_jump (just do: if the last leap
> second is less than 60 s away, add a linear compensation ramp
> (delta_t / 60.0) to the time).

I thought that tz_jump was supposed to report only discontinuities.
Aren't these filters continuous?  Now that you mention it, though, it'd
be nice if tz_jump (or some new primitive) also reported continuous
changes to the TIME_TAI and TIME_UTC clocks as well, if that
information is available.  This is a good idea, and will better support
apps that don't want to know about leap seconds.  It is a reasonable
extension, though offhand I don't think it needs to be in the standard.
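The compensation-ramp rule quoted above can be sketched as follows.
The function name and the exact ramp convention (start one second
behind, catch up linearly over 60 s) are illustrative assumptions:

```c
/* Smooth over an inserted leap second: for the 60 s following the
 * insertion, report a clock that starts one second behind true time
 * and catches up linearly, so no discontinuity is ever visible.
 *
 * t:         current reading, in seconds (leap seconds not counted)
 * last_leap: time of the most recent inserted leap second */
static double filtered_time(double t, double last_leap)
{
    double delta = t - last_leap;
    if (delta >= 0.0 && delta < 60.0)
        return t - (1.0 - delta / 60.0);  /* linear compensation ramp */
    return t;
}
```

At the moment of insertion the filtered clock reads one second slow;
30 s later it is half a second slow; after a full minute it has
rejoined the unfiltered clock.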
> > If you combine solutions (2) and (3), then you don't need the
> > current TIME_UTC any more.  It's enough to have the generalization
> > of TIME_TAI with implementation-defined epoch, along with the
> > variant of TIME_UTC that never reports leap seconds.
> So I can't get the current UTC any more directly from the source?

Yes you can.  I'm assuming that the actual source is a TAI clock with
an unknown epoch, with a known relationship to UTC within a particular
window.  Within the window, one can say that ``the source'' is TAI or
UTC -- it doesn't really matter, and it's a mere notational convenience
to say one or the other.  Outside the window, you can't convert clock
values to broken-down UTC times no matter which notational convention
you use.

The differences between my proposal and yours are mostly ones of
notational convenience; they aren't fundamental to the implementation.
My basic argument is that it's much more convenient to base the clock
on an integer than to base it on a structure containing a
somewhat-broken-down time, with all its foibles.

> And all conversion functions suddenly depend on the leap second table
> and on the implementation defined epoch, including all the
> calculation overhead and reliability concerns involved with this?

The (partial) leap second table and implementation-defined epoch is
inherent to this sort of implementation.  You can't properly support
UTC leap seconds without it.  In this respect, the ``calculation
overhead'' and ``reliability concerns'' of integer-based clocks apply
with equal force to struct xtime.

> Do you really want to send every time stamp on its way from the
> external UTC reference to the output formatting routine several times
> through a leap second table?

Going through a leap second table won't cost much, compared to all the
other things that need to be done to convert.  This is particularly
true for implementations that know at most one leap second -- that
table is pretty small.  :-)  If an app is really concerned about this
minuscule overhead, it can avoid the overhead entirely by using
TIME_UTC timestamps instead of TIME_TAI timestamps.
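For concreteness, a pass through a leap second table is just a count of
the known insertions at or before the given instant.  This sketch uses
invented names and made-up table data:

```c
/* Convert an integer TAI-style count to a UTC-style (non-leap) count
 * by subtracting the number of known inserted leap seconds at or
 * before it.  leaps_tai[] holds the insertion instants on the TAI
 * scale, sorted ascending. */
static long long tai_to_utc(long long tai,
                            const long long leaps_tai[], int n)
{
    int i, k = 0;
    for (i = 0; i < n; i++)
        if (leaps_tai[i] <= tai)
            k++;
    return tai - k;
}
```

With at most a handful of entries, the loop is a negligible fraction of
the work a full broken-down-time conversion performs anyway.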