
> Date: Sun, 04 Oct 1998 11:16:52 +0100
> From: Markus Kuhn <Markus.Kuhn@cl.cam.ac.uk>
>
> One of the fundamental concepts behind my API is exactly to make sure
> that there are only two functions which would access a leap second
> table, and both are optional, that is, they are always allowed to
> return that they have no information available.  These two functions
> are xtime_conv() and tz_jump().

Sorry, I don't follow you here.  In your spec, if xtime_get with
TIME_UTC returns a time with nsec >= 1000000000, then the application
has obtained a leap second; so xtime_get's implementation must have
accessed a leap second table of _some_ sort.  (A small illustration of
this, from the application's point of view, is appended at the end of
this message.)

Normally, one would think that an implementation has only two choices:

(1) Support leap seconds (e.g. the Olson code in "right" mode, or
    Bernstein's library); or

(2) Omit leap seconds (e.g. POSIX.1).

It sounds like you're trying to allow for another possibility:

(3) Support just some leap seconds, without having a complete leap
    second table.

For example, it sounds like you want to cater to implementations that
know only the leap second nearest to the present, or something like
that.

The leap second table is _always_ incomplete, of course, since we
don't know all future leap seconds; so this problem of incomplete
knowledge of leap seconds is inherent to any high-quality interface.
So I suggest that we give the programmer a way to access the entire
leap second table known to the implementation.  E.g. if the
implementation knows only the next leap second, then it would return
``I don't know'' for questions about later (or earlier) leap seconds.
This seems to me to be the most natural way to model implementations
of types (1), (2), and (3).  (A rough sketch of the kind of query I
have in mind is also appended below.)

By the way, I'm not familiar with type (3) C implementations in
practice -- which ones are you thinking of?  It would be helpful to
have a bit more familiarity with the real-world issues here.  (Also,
this issue needs to be discussed better in the rationale!)

> leap second tables are a very dubious and extremely dangerous
> concept.

An implementation must have a leap second table of _some_ sort, if
only a partial one, if it wants to support leap seconds; otherwise,
xtime_get with TIME_UTC can't return leap seconds.

> I found the nsec overflow to be slightly simpler to implement and
> slightly more intuitive.  In addition, it does not waste any bits if
> only non-UTC clocks are used.

No, struct xtime normally wastes more than two bits, since it puts a
nanosecond value (< 1000000000) into a member that can store numbers
more than twice as large (< 2147483648), and also wastes a sign bit.
(log2(2147483648/1000000000) is about 1.1 bits, plus the unused sign
bit.)  Even with TIME_UTC, struct xtime wastes more than one bit.

Here's an encoding that uses about .00001 bit per timestamp, assuming
an integer counter is used for timestamps: if T is a TIME_UTC
timestamp, it identifies the point of time that is T%D xtime
intervals after T/D UTC days after the epoch, where D is
86401*XTIMES_PER_SEC.  For space saving, this beats all the other
encodings proposed so far.  (The encode/decode arithmetic is sketched
at the end as well.)

> In the other proposals for handling leap seconds that I have seen,
> almost everything depends on correct leap second tables (especially
> in the "let's only use TAI" proposals)

I also would not like to require correct leap second tables.  But I
hope that we can do better than the current proposal.

> I think the example code I have played around with looks neater.

It would be helpful if you could publish the sample code, when it's
ready.

> I have considered many alternatives

I know, but we haven't exhausted the alternatives yet!  A lot more
thinking needs to be done with this proposal.
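
Appendix, for concreteness.  First, what the nsec overflow convention
looks like from the application's side.  The names struct xtime,
xtime_get, TIME_UTC, sec, and nsec are from your draft; the exact
integer types, the value of TIME_UTC, and the success convention of
xtime_get used below are only my assumptions, made so the fragment
compiles on its own.

    /* Paraphrased declarations -- the real ones belong in the draft's
       header.  The member and constant names are the draft's; the
       integer types and the TIME_UTC value are guesses. */
    struct xtime { long long sec; int nsec; };
    #define TIME_UTC 1
    int xtime_get(struct xtime *timer, int clock);

    /* Under the nsec overflow convention, an inserted leap second
       shows up as nsec >= 1000000000 instead of sec advancing, so any
       application that does arithmetic on nsec has to expect it. */
    void show_leap_second_check(void)
    {
        struct xtime t;
        /* Assumes success is signalled by returning the requested clock. */
        if (xtime_get(&t, TIME_UTC) == TIME_UTC && t.nsec >= 1000000000) {
            /* We are inside 23:59:60 UTC; whatever produced this value
               must have consulted leap second information of some sort. */
        }
    }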
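
Second, a rough sketch of the kind of leap second table query I am
suggesting.  None of these names (leap_query, leap_answer, leap_info)
appear in your draft; the tiny built-in table exists only to show how
implementations of types (1), (2), and (3) would answer.

    #include <stddef.h>

    enum leap_answer {
        LEAP_UNKNOWN,   /* no information about this part of the time line */
        LEAP_NONE,      /* no leap second up to the table's expiration day */
        LEAP_INSERTED,  /* a second is inserted (23:59:60 exists) */
        LEAP_DELETED    /* a second is deleted (23:59:59 is skipped) */
    };

    struct leap_info {
        long day;       /* UTC days since 1970-01-01 of the day that
                           ends with the leap second */
        int sign;       /* +1 = inserted, -1 = deleted */
    };

    /* A deliberately tiny, partial table: the two most recently
       announced leap seconds (1997-06-30 and 1998-12-31), plus the day
       beyond which this table claims no knowledge.  A type (2)
       implementation would have an empty table expiring at day 0; a
       type (1) implementation would carry the full historical list. */
    static const struct leap_info table[] = {
        { 10042L, +1 },    /* 1997-06-30 */
        { 10591L, +1 },    /* 1998-12-31 */
    };
    static const long table_expires = 10772L;   /* about 1999-06-30 */

    /* Report the first leap second on or after UTC day `from',
       or admit that the implementation does not know. */
    enum leap_answer
    leap_query(long from, struct leap_info *info)
    {
        size_t i;
        for (i = 0; i < sizeof table / sizeof table[0]; i++)
            if (table[i].day >= from) {
                *info = table[i];
                return table[i].sign > 0 ? LEAP_INSERTED : LEAP_DELETED;
            }
        return from <= table_expires ? LEAP_NONE : LEAP_UNKNOWN;
    }

The point is that the caller, not the library, decides what to do when
the answer is LEAP_UNKNOWN; that preserves the property you want,
namely that leap second knowledge stays optional.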
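
Third, the encode/decode arithmetic for the day-based counter
described above.  XTIMES_PER_SEC is your draft's name; the 1000000000
value, the use of long long for the counter, and the helper names
utc_encode/utc_decode are assumptions made for this example only.

    #define XTIMES_PER_SEC 1000000000LL    /* assumed resolution */
    #define D (86401LL * XTIMES_PER_SEC)   /* intervals per UTC day,
                                              allowing for one inserted
                                              leap second */

    /* Pack a broken-down UTC time into one counter T.
         day  -- UTC days since the epoch
         sec  -- second within the day, 0..86400 (86400 only during an
                 inserted leap second)
         frac -- xtime interval within the second, 0..XTIMES_PER_SEC-1 */
    long long
    utc_encode(long long day, int sec, long long frac)
    {
        return day * D + sec * XTIMES_PER_SEC + frac;
    }

    /* Unpack T again; every counter value names exactly one
       (day, sec, frac) triple.  Assumes t >= 0, i.e. times at or
       after the epoch. */
    void
    utc_decode(long long t, long long *day, int *sec, long long *frac)
    {
        *day  = t / D;
        *sec  = (int) (t % D / XTIMES_PER_SEC);
        *frac = t % XTIMES_PER_SEC;
    }

The only redundancy is the sec == 86400 slot, which goes unused on
days without an inserted leap second; that is the source of the tiny
per-timestamp waste mentioned above.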