
Paul Eggert wrote on 1998-10-07 08:45 UTC:
It would also completely mess up the elegance of my API in that suddenly strfxtime and xtime_make/breakup would have to know from what type of clock this timestamp came.
strfxtime and the other primitives currently return garbage unless you pass them a TIME_UTC timestamp. I suppose one could argue that they return well-defined garbage; but this well-defined garbage is useless for practical applications.
No, they return useful values if you pass them TIME_TAI or TIME_LOCAL timestamps (both of which also have a 1970 epoch), and specify (timezone_t *) NULL as the other parameter. Only if you pass a TIME_MONOTONIC value might you get real garbage (except on those systems where TIME_MONOTONIC is a best-effort TAI approximation, which is certainly allowed though not required). See my last message for more details.
This is a defect in the current proposal, and I plan to propose a fix soon.
No, I think you misunderstood the proposal here. What you inappropriately call "well-defined garbage" is fully intended to be useful for TIME_TAI or TIME_LOCAL.
Yes, but I see that I wasn't clear. Let me try to explain things better. I see three related problems with struct xtime in this area.
1. The struct xtime spec makes it difficult for applications to detect bogus or unknown leap seconds.
Suppose an application constructs a leap second timestamp (i.e. a struct xtime value with 1000000000<=nsec<2000000000), and passes this timestamp to an xtime primitive that expects a TIME_UTC timestamp.
I don't see why an application would want to construct a bogus leap second, and I feel comfortable leaving the behaviour undefined here (although I do propose a clear and safe semantic below using xtime_cmp()). Creating bogus leap seconds is like dividing by zero: whatever happens is your fault anyway. Garbage in, garbage out. If the sec field is 59 mod 60, then there exists at least a way to display the leap second, and if the sec field is 86399 mod 86400, then the leap second already smells genuine, since it comes at the end of a UTC day. Both tests are trivial.
Then the behavior is undefined unless the timestamp is genuine (i.e. it falls within a true inserted leap second) _and_ the timestamp is known (i.e. the implementation knows about that leap second). (The current spec doesn't say all this explicitly, but it certainly implies it.)
Just to make sure: We are only talking about xtime_conv and tz_jump here, right? The other functions do not care about bogus leap seconds as long as they come at the end of a minute (sec = 59 (mod 60)).
Therefore, when importing a textual leap-second timestamp from the outside world, a struct xtime application can't blindly apply xtime_make and use the resulting value, as xtime_make might generate a bogus or unknown leap-second timestamp, which will cause problems when given to other primitives.
No, a textual representation can only create a sec = 59 (mod 60) bogus leap second, and such values can be adequately displayed, so no information will be lost.
So the application needs a way to find out whether a purported leap second is a known leap second. It can't pass the purported leap second directly to tz_jump, because if it's a bogus leap second tz_jump has undefined behavior. I can think of hacky workarounds to this problem, but this is getting way too complicated; there ought to be a simpler way to attack this problem.
If you are concerned about the precise definition of tz_jump() for pathological input: my suggestion is to use xtime_cmp() to define a total order on all known leap seconds plus the parameter timestamp, so as to locate the next higher/lower known leap second in a sorted list of leap seconds. Voilà: you immediately have clean semantics for where the next one is, even for completely arbitrary xtime values (not only bogus leap seconds, but also out-of-range nsec values). Would this be acceptable to you?
One way to fix this particular problem would be to change the spec so that every primitive reports an error when given a bogus or unknown leap second.
No, this is not at all a good idea. It adds a lot of unnecessary checking machinery, and it breaks the operation of systems that are informed on the fly by external clocks that a leap second is now coming up, but then quickly forget about it and do not store any leap-second history (because they are, for instance, embedded controllers with no non-volatile memory). Just use xtime_cmp() to compare timestamps, and build tz_jump entirely on top of xtime_cmp.
2. The struct xtime spec is an awkward interface to implementations based on TAI that do not have full leap second tables.
Suppose implementation X internally uses a clock with TAI seconds, but it doesn't know the TAI epoch;
OK, I assume your "internal clock with TAI seconds" could be one that I could pass directly to the user via TIME_MONOTONIC, and the epoch could for instance be a "few months ago" (say system installation time).
that is, it knows the leap second or seconds within a window of a few months around the current time, and it knows the relationship between its internal clock and UTC within this window, but it doesn't know all the leap seconds back to 1972. This seems to be the kind of implementation that is motivating your leap-second design, but I see a couple of problems with how struct xtime would interface to implementation X.
First, implementation X can't support struct xtime's TIME_TAI, as this requires knowledge of the TAI epoch.
If we know neither the current TIME_UTC-TIME_TAI offset nor the current TIME_MONOTONIC-TIME_TAI offset, then there is very little we can do to provide TAI to the user at all, since we simply do not have it available.
This seems backwards to me; fundamentally, the implementation is based on TAI not UTC, and it should have some way to report this to the application.
How can the implementation be based fundamentally on TAI but not know the TAI epoch? Please make sure you have fully understood the difference between TIME_MONOTONIC and TIME_TAI. Both are very similar in that they do not have leap seconds; the only difference is that TIME_TAI has a known epoch, while TIME_MONOTONIC does not. I think you are thinking about a (quite reasonable) implementation that runs its internal clocks on a TIME_MONOTONIC that had its epoch when someone inserted the batteries into the device, and that was occasionally frequency-calibrated once UTC became available over a longer period of time. I have carefully thought about the application scenario you refer to, and you just haven't understood yet that what you really want here is TIME_MONOTONIC (TIME_TAI without a known epoch) and not TIME_TAI.
Second, implementation X can't convert timestamps outside its window to internal format, so it will reject attempts to invoke primitives involving TIME_UTC timestamps outside its window, as the internal format of those timestamps would be undefined.
Please tell me exactly which operations in my API you think would fail. Remember that only tz_jump and xtime_conv depend on the leap second history; all other functions do not care about the leap second history and will never fail because of a lack of leap second information. Sure, if you only know the leap seconds since you inserted the batteries and TIME_MONOTONIC started, then xtime_conv will not allow you to convert a TIME_UTC value from before you inserted the batteries into a negative TIME_MONOTONIC value. But why would you want to do this, since you never received a corresponding clock value from the clock anyway? And as far as calendar and timezone calculations are concerned: just stay in TIME_UTC. There is no reason to ever convert to your internal TIME_MONOTONIC timescale as long as your internal clocks are not involved in any way in the calculations. If your internal clocks do get involved, use xtime_conv() to convert from TIME_MONOTONIC to TIME_UTC and everything is all right again. I don't think you have found a problem. Actually, this discussion convinces me more and more of the flexibility and completeness of the xtime API.
This seems counterintuitive to me. From the user's point of view, we'd have a system that claims to support TIME_UTC but can't convert the out-of-window value 1970-01-01 00:00:00 UTC to a TIME_UTC timestamp, even though the result is perfectly well-defined to be zero (the problem being that the result's _internal form_ is unknown).
Please be more precise and explain which function you think would fail.
A more natural way to support such an implementation is to have the struct xtime interface supply a generalization of TIME_TAI, one with an implementation-defined epoch.
That is why TIME_MONOTONIC exists. But for UTC-related display, it only becomes useful once the TIME_MONOTONIC to TIME_UTC conversion has become possible.
This generalization would provide a more natural interface to this class of implementations, and would also support systems with complete knowledge of past leap seconds.
3. The struct xtime spec does not support applications that do not want to know about leap seconds and do not ever want to see them. This is a large class of applications, and POSIX.1 (as well as many important non-POSIX OSes) support such applications, but the struct xtime spec doesn't support them at all.
This can be fixed by adding a variant of TIME_UTC that never reports leap seconds.
First of all, the existing POSIX functions as well as BSD's gettimeofday continue to exist. There are several methods of handling leap seconds in them. One is to block until the leap second is over; another is to send the time through a low-pass filter, i.e. to slow down the clock by say 1% until UTC has caught up with this leap-second-filtered time. (The latter is exactly what clocks that get their time base from the power grid do.) I am not sure we should offer this in a new interface, especially since there are so many ways of doing it. You can always call xtime_delay yourself if you are inside a leap second and call xtime_get again afterwards. Then you at least know precisely what you get and do not depend on probably not fully specified filtering algorithms. If you want a specific filter (e.g., the US 60 Hz power grid leap rule or the BSD 10000 ppm adjtimex() rule), then you can easily implement this specific rule using tz_jump (just do: if the last leap second is less than 60 s away, add a linear compensation ramp (delta_t / 60.0) to the time).
If you combine solutions (2) and (3), then you don't need the current TIME_UTC any more. It's enough to have the generalization of TIME_TAI with implementation-defined epoch, along with the variant of TIME_UTC that never reports leap seconds.
So I can't get the current UTC directly from the source any more? And all conversion functions suddenly depend on the leap second table and on the implementation-defined epoch, including all the calculation overhead and reliability concerns involved with this? I have yet to read your final proposal to make a judgement, but this does not sound very promising and seems to reintroduce all the problems that I was so proud of having gotten rid of (if it is the completely leap-second-table-dependent system that I fear it is). Do you really want to send every time stamp on its way from the external UTC reference to the output formatting routine several times through a leap second table, just to make the SI-second difftime implementation equivalent to a subtraction?

Markus

--
Markus G. Kuhn, Security Group, Computer Lab, Cambridge University, UK
email: mkuhn at acm.org, home page: <http://www.cl.cam.ac.uk/~mgk25/>