I suggest that you put a BIG warning around the use of leap seconds in time calculations. For a number of reasons, POSIX (IEEE standard Unix, soon to be ANSI and ISO standard Unix) prohibits the standard functions that convert to and from "seconds since the Epoch" from taking leap seconds into account.
I think we came to the conclusion before doing the leapsecond stuff that the wording in POSIX *requires* that standard conversion routines take leapseconds into account.
Just to be sure we have communicated...

POSIX really does state that "seconds since the Epoch" explicitly excludes leap seconds in the calculation. Since all current POSIX time conversion functions and time_t timestamps derive their value from "seconds since the Epoch", no POSIX function can make use of leap seconds. This is not to say that a later draft could not add NEW functions that do take leap seconds into account. But since P1003.1 is NOW in FINAL BALLOT, it will be a long time before such a standard function would show up.

The impact of adding leap seconds as you have done can be more than simply "my clock is 14 seconds slow". Networks of systems, multi-OS systems, or multi-CPU systems could have a different time_t value for the current time. These systems could convert the same time_t value into a different string. Make can get messed up, transaction time stamps can be wrong, etc.

That is why I suggested that you put up a big warning about not using leap seconds. No system using them can ever be POSIX (or SVID, since it will conform to POSIX), and no system as shipped by a major (or minor?) computer builder uses them.

Is patch level 1 your current patch?

chongo <> /\oo/\
> Just to be sure we have communicated...
>
> POSIX really does state that "seconds since the Epoch" explicitly excludes leap seconds in the calculation. Since all current POSIX time conversion functions and time_t timestamps derive their value from "seconds since the Epoch", no POSIX function can make use of leapseconds.
Yes, I understand. Perhaps I should explain myself a little more clearly though. My "seconds since the Epoch" *does* explicitly exclude leap seconds in its calculation - it just keeps ticking, second to second, no matter what. If I set my clock to 11:32 local time, then my "seconds since the Epoch" is exactly right. However, if you set your clock at the same time, then your "seconds since the Epoch" will be 14 less than mine, and 14 less than the actual number of seconds since the Epoch. You have included leap seconds in your calculation by ignoring them! Leap seconds are a reality. Hence, excluding them requires accounting for them.
> The impact of adding leap seconds as you have done can be more than simply 'my clock is 14 seconds slow'. Networks of systems, multi-OS systems, or multi-CPU systems could have a different time_t value for the current time.
I agree, all systems should have the same time_t value during the same second. Furthermore all systems should have the successor of that value in the next second. If one system continues unchanged over a leapsecond while you decrement your time_t, *then* they will have different time_t's.
> These systems could convert the same time_t value into a different string.
This is immaterial. Different systems already convert the same time_t into different strings, depending upon which time zone they are in, whether they observe daylight saving during different parts of the year, etc. In fact, the adotime leap-second code is *only* concerned with time_t <=> struct tm decoding and encoding, and hence doesn't affect this at all. The issue is that leap seconds should not change the constant and consistent ticking of "seconds since the Epoch", as your reading of the standard implies.
> Make can get messed up, transaction time stamps can be wrong, etc.
No. If each system just keeps ticking then everything will be consistent (and "correct").
> Is patch level 1 your current patch?
I assume you received ado's reply. He is the source. Anyway, chongo, you are undoubtedly much more familiar with the standards than I am, so there may be other things swaying your opinion. All I am saying is that there can be (many) different interpretations of the same text. In this case I think the above quote implies behaviour along the lines of adotime leap seconds.

brad
> Networks of systems, multi-OS systems, or multi-CPU systems could have a different time_t value for the current time.
Just as an aside, UNIX time_t values are not an appropriate representation of time in a distributed system. An ASCII rendition of UT is much better. As such, it doesn't really matter whether some systems are accounting for leap seconds or not(*), just like it doesn't matter if some are DEC System-20's with a different base representation. Each system only needs to convert to and from its own representation.

brad

(*) Those that don't understand leap seconds may have some trouble with "87/12/31 23:59:60", although I wouldn't be surprised if they just ended up with "87/12/31 23:59:59"+1.
participants (2):
 - Bradley White
 - cvl!uts.amdahl.com!cvl!chongo