Date: Tue, 19 Aug 97 14:13:13 -0400
From: Tom Peterson (USG) <tomp@zk3.dec.com>

In mktime(), or rather time2(), I noticed that a change was made a while back which removed the following seconds normalization:

!		if (yourtm.tm_sec >= SECSPERMIN + 2 || yourtm.tm_sec < 0)
!			normalize(&yourtm.tm_min, &yourtm.tm_sec, SECSPERMIN);

Instead, these seconds are now saved aside into saved_seconds and only added back in after an appropriate match is found using the remaining normalized data. This was done to handle leap seconds properly. On systems that support leap seconds, it's incorrect to normalize the seconds count first, since you don't know how many seconds there are per minute until you determine which minute you're talking about.

For example, with leap second support, mktime should adjust an input value of ``1997-07-01 00:00:-1'' UTC (i.e. tm_sec == -1) so that it becomes 1997-06-30 23:59:60 UTC, but with the code above, mktime generated 1997-06-30 23:59:59 UTC, which is incorrect. Here's what UNIX98 has to say regarding this:
From your quote it appears that UNIX98 says the same thing that the C Standard says, which is to say, not much. The C Standard spec for mktime is terribly ambiguous and you've found one of the ambiguities. The C Standard says that mktime can return -1 if attempted in the ``spring forward gap'', but it doesn't say how mktime should determine whether one is in the gap.
Any user program that messes in this area is relying on behavior that is not guaranteed by any standard. For what it's worth, the GNU C library's mktime agrees with the tz mktime on your example, whereas Solaris 2.5.1 and BSD/OS 3.0 have the other reasonable interpretation (i.e. both mktime invocations yield 575002800).