Mass media article on leap seconds

The 2004-03 issue of the U.S. monthly "Discover" has an article titled "Leap Seconds" by Karen Wright (pages 42-45, but the first two pages are décor and an introductory blurb). No mention of the Big Secret (that leap seconds have not been inserted since January of 1999). The article states that "...computer software designers haven't adapted very well to the occasional added second, so experts in air traffic control, satellite communications, and electronic fund transfers have been lobbying to abolish the tinkering. A leap second may have caused the Russian satellite navigation system to crash for hours, and critics claim the added instants could cause commercial airliners to crash as well." The article also notes a couple of modest proposals: "Leap seconds could be inserted every four years along with the February leap day...or leap minutes could be added every half century or so." (Either proposal, if adopted, would require changes in both POSIX and the public-domain time zone code.)

Your correspondent's two cents: in setting up the time handling in UNIX, T&R got it exactly right with respect to springing forward and falling back when DST goes into and out of effect--keep the computer counting monotonically and leave it to the software to translate the monotonic count into a representation of local time. What's right at the level of an hour is also right at the level of a second--keep the computer counting at one count per second, and leave it to software to figure out what should be displayed when the user asks what time it is.

--ado

"Olson, Arthur David (NIH/NCI)" wrote on 2004-03-02 14:38 UTC:
The 2004-03 issue of the U.S. monthly "Discover" has an article titled "Leap Seconds" by Karen Wright (pages 42-45, but the first two pages are décor and an introductory blurb).
Links to 10 further recent mass-media articles on the leap second are collected at the end of http://www.cl.cam.ac.uk/~mgk25/time/leap/
also notes a couple of modest proposals: "Leap seconds could be inserted every four years along with the February leap day...or leap minutes could be added every half century or so." (Either proposal, if adopted, would require changes in both POSIX and the public-domain time zone code.)
In the ITU SRG 7A group that currently deliberates the future of UTC, these proposals were already rejected as impractical more than a year ago. The only proposal left on the table is to drop leap seconds forever. This would detach the international reference time (currently called UTC) from the rotation of the Earth. The point where the new international reference time [the proposal calls it Temps International (TI)] would correspond to local mean solar time would very slowly accelerate eastwards, starting from Greenwich. One specific proposal is to replace UTC with TI in 2022 (the 50th birthday of UTC). At that time, UTC and TI will be identical, but TI will be a physical time (that is, based on the SI second, not on the angle of the Earth) without leap seconds. TI is just TAI plus an offset, for a smooth transition in 2022. Local civilian times would be defined relative to TI, which takes over this role from UTC. UTC would no longer be maintained. The TI offsets of local civilian time zones would have to change every couple of hundred years to keep local civilian times in +/-1 h sync with daylight, but as we all know, local civilian times change far more frequently for political reasons anyway. These LCT changes could easily be implemented by dropping the repeated hour at the end of summer time every few hundred years (the first one around 2600) in those time zones that have it.
Your correspondent's two cents: in setting up the time handling in UNIX, T&R got it exactly right with respect to springing forward and falling back when DST goes into and out of effect--keep the computer counting monotonically and leave it to the software to translate the monotonic count into a representation of local time. What's right at the level of an hour is also right at the level of a second--keep the computer counting at one count per second, and leave it to software to figure out what should be displayed when the user asks what time it is.
I don't agree. There is the important difference that UTC is today very widely disseminated, whereas TAI is a curiosity known only to time geeks like us. Keeping a computer synched to something like TAI would only be practical in the real world if a leap-free timescale (e.g., the existing TAI or GPS time) were widely enough available, along with a regularly updated UTC-TAI offset table. Current time distribution services, however, provide only UTC in easily accessible form; therefore, running machines on TAI would likely cause them to get the leap-second offsets wrong rather quickly due to out-of-date leap-second tables. Their timestamps would soon be off by an integer number of seconds relative to the timestamps of machines with up-to-date leap-second tables.

In the present scheme of how we define local civilian times (namely relative to UTC), I believe that what POSIX does (namely making time_t an encoding of what a UTC clock displays) is the most practical compromise. It needs a bit of fudging near a leap second but works reliably the rest of the time, without a need to maintain long-term state (leap-second tables). If the TI proposal gets implemented, this problem would be gone, and the only remaining issue would be that TI and local times drift apart without limit in the long term. But it will be several millennia before the difference even reaches a single day, at which time the Gregorian calendar will have gone well past its best-before date as well. Tables like tzdata with offsets between the international reference time and the LCTs would have to be maintained and updated in either case, but they are, in most parts of the world, much more stable than leap-second tables.

Markus

--
Markus Kuhn, Computer Lab, Univ of Cambridge, GB
http://www.cl.cam.ac.uk/~mgk25/ | __oo_O..O_oo__

On Wednesday, Mar 3, 2004, at 03:17 Australia/Sydney, Markus Kuhn wrote:
In the ITU SRG 7A group that currently deliberates the future of UTC, these proposals were already rejected as impractical more than a year ago. The only proposal left on the table is to drop leap seconds forever. This would detach the international reference time (currently called UTC) from the rotation of the Earth. The point where the new international reference time [the proposal calls it Temps International (TI)] would correspond to local mean solar time would very slowly accelerate eastwards, starting from Greenwich. One specific proposal is to replace UTC with TI in 2022 (the 50th birthday of UTC). At that time, UTC and TI will be identical, but TI will be a physical time (that is, based on the SI second, not on the angle of the Earth) without leap seconds. TI is just TAI plus an offset, for a smooth transition in 2022. Local civilian times would be defined relative to TI, which takes over this role from UTC. UTC would no longer be maintained. The TI offsets of local civilian time zones would have to change every couple of hundred years to keep local civilian times in +/-1 h sync with daylight, but as we all know, local civilian times change far more frequently for political reasons anyway. These LCT changes could easily be implemented by dropping the repeated hour at the end of summer time every few hundred years (the first one around 2600) in those time zones that have it.
That would destroy the constant (plus or minus a fraction of a second) relationship between a maintained timescale and mean solar time at a given longitude. I could no longer look at a map (with longitude lines) and know immediately what local mean solar time is (just about exactly, at least where the longitude lines are). That would be sad, if nothing else.
Your correspondent's two cents: in setting up the time handling in UNIX, T&R got it exactly right with respect to springing forward and falling back when DST goes into and out of effect--keep the computer counting monotonically and leave it to the software to translate the monotonic count into a representation of local time. What's right at the level of an hour is also right at the level of a second--keep the computer counting at one count per second, and leave it to software to figure out what should be displayed when the user asks what time it is.
Hear, hear! Bravo!
I don't agree. There is the important difference that UTC is today very widely disseminated, whereas TAI is a curiosity known only to time geeks like us. Keeping a computer synched to something like TAI would only be practical in the real world if a leap-free timescale (e.g., the existing TAI or GPS time) were widely enough available, along with a regularly updated UTC-TAI offset table. Current time distribution services, however, provide only UTC in easily accessible form; therefore, running machines on TAI would likely cause them to get the leap-second offsets wrong rather quickly due to out-of-date leap-second tables. Their timestamps would soon be off by an integer number of seconds relative to the timestamps of machines with up-to-date leap-second tables.
1: Stop time distribution services from providing UTC and have them provide TAI instead.

2: Estimate and publish leap-second dates 30, 40, or 50 years in advance, thus maintaining a table of offsets that extends that many years into the future at all times. Poor estimates could be corrected later. The only casualty of this would be the sub-second agreement between UTC and UT1 (or whichever it is). Yes, my quick inference of mean solar time from longitude would be slightly less accurate. And software will need to be updated anyway; the lead time could be decades.
In the present scheme of how we define local civilian times (namely relative to UTC), I believe that what POSIX does (namely making time_t an encoding of what a UTC clock displays) is the most practical compromise. It needs a bit of fudging near a leap second but works reliably the rest of the time, without a need to maintain long-term state (leap-second tables). If the TI proposal gets implemented, this problem would be gone, and the only remaining issue would be that TI and local times drift apart without limit in the long term. But it will be several millennia before the difference even reaches a single day, at which time the Gregorian calendar will have gone well past its best-before date as well. Tables like tzdata with offsets between the international reference time and the LCTs would have to be maintained and updated in either case, but they are, in most parts of the world, much more stable than leap-second tables.
See 2 above. The calendar could be given similar treatment, with leap days taking the place of leap seconds and with the advance notice measured in centuries or millennia rather than years.

Alex LIVINGSTON

Markus Kuhn writes:
Keeping a computer synched to something like TAI would only be practical in the real world if a leap-free timescale (e.g., the existing TAI or GPS time) were widely enough available, along with a regularly updated UTC-TAI offset table.
http://cr.yp.to/clockspeed.html converts from NTP's wobbly timescale to TAI, and sets the UNIX clock accordingly. The tz library, in ``right'' mode, then produces accurate local-time displays from the UNIX clock, even during leap seconds. Welcome to the real world. ---D. J. Bernstein, Associate Professor, Department of Mathematics, Statistics, and Computer Science, University of Illinois at Chicago

"D. J. Bernstein" wrote on 2004-04-13 07:02 UTC:
Markus Kuhn writes:
Keeping a computer synched to something like TAI would only be practical in the real world if a leap-free timescale (e.g., the existing TAI or GPS time) were widely enough available, along with a regularly updated UTC-TAI offset table.
http://cr.yp.to/clockspeed.html converts from NTP's wobbly timescale to TAI, and sets the UNIX clock accordingly.
The tz library, in ``right'' mode, then produces accurate local-time displays from the UNIX clock, even during leap seconds.
We have had this discussion many times before... Such setups are badly vulnerable to disruption as soon as the leap-second tables on the various machines are no longer maintained properly. In the real world, where system administrators are not time gurus, such things tend to be neglected, and then the local times get derived inaccurately from inaccurate versions of TAI and start to drift apart with each new leap second. My only point is that, in the real world, this potential for long-term error is for most applications much more of a problem than the occasional leap in an otherwise tightly synchronized POSIX-style UTC timescale.

I am perfectly convinced that you and I and many others here are fully capable of maintaining leap-second tables accurately on our own systems today, and that a non-POSIX TAI-based time_t would work perfectly well for us time gurus. However, I would not recommend it at present as general practice for the dirty real world outside the ivory tower, until TAI is just as widely disseminated as UTC is at the moment. Otherwise, the local leap-second tables needed for the various TAI<->UTC conversions in such a setup are critical elements that, if not maintained properly across a distributed system, can add several seconds of error to local time and synchrony, which I believe to be too disruptive to be worth the risk.

I know of your libtai (http://cr.yp.to/libtai.html), Ed Davies' proposal to put leap-second tables onto the DNS (http://www.edavies.nildram.co.uk/dns-leapseconds/), as well as Levine, J., and D. Mills, "Using the Network Time Protocol to transmit International Atomic Time (TAI)", Proc. Precision Time and Time Interval (PTTI) Applications and Planning Meeting (Reston VA, November 2000), http://www.eecis.udel.edu/~mills/database/papers/leapsecond.pdf. That's all very nice and looks promising, but the last time I looked, all of these still seemed to me more like experimental demonstration services than something I would recommend everyone make their critical infrastructure depend upon. For that I'd rather use multiple authenticated national official time services, and most of these (including, for example, our beloved MSF and DCF77 LF transmitters, the most widely used time synchronization sources in Europe) still give me only UTC, without a leap-second table, at present (GPS being the notable exception). Should, in your opinion, the broad and reliable availability of TAI really have changed dramatically recently, beyond the services and proposals outlined above, please let us know of such developments.

Markus

--
Markus Kuhn, Computer Lab, Univ of Cambridge, GB
http://www.cl.cam.ac.uk/~mgk25/ | __oo_O..O_oo__

Markus Kuhn writes:
such things tend to be neglected, and then the local times get derived inaccurately
How come you aren't screaming about the much larger local-time errors that occur when tz updates are neglected? Those errors are thousands of times larger! Why aren't you proposing that UNIX time_t be local time? Eventually we'll all have the TAI-to-local-time tables automatically updated through the network. The existing software, even without perfect automation, already gives us accurate local-time displays---a basic test flunked by all of your proposals whenever leap seconds occur. ---D. J. Bernstein, Associate Professor, Department of Mathematics, Statistics, and Computer Science, University of Illinois at Chicago

"D. J. Bernstein" wrote on 2004-04-13 20:17 UTC:
Markus Kuhn writes:
such things tend to be neglected, and then the local times get derived inaccurately
How come you aren't screaming about the much larger local-time errors that occur when tz updates are neglected?
Reason 1: Local-time errors are larger than a few minutes and are therefore so severe and obvious that every user will scream instantly, so things get fixed rapidly. This is not the case with sub-minute errors, such as those that might be caused by out-of-date leap-second tables. Therefore, the latter worry me much more.

Reason 2: Local-time errors affect critical long-term state such as file-system timestamps much less, as these tend to be kept in UTC. (Just an hour ago, we discovered a machine that had its default time zone set to US East Coast local time instead of London local time. We were able to fix the problem on the fly without any disruption, as most system state remained unaffected. The machine had known UTC accurately to within 2 ms all along.)
Those errors are thousands of times larger!
Exactly. So they are thousands of times less likely to be missed.
Why aren't you proposing that UNIX time_t be local time?
See above.
Eventually we'll all have the TAI-to-local-time tables automatically updated through the network.
That would be nice and useful, in particular for the people who really need physical time (computers that control or monitor geophysical or astronomical instruments, navigation systems, etc.). It remains to be seen whether that infrastructure will be in place before the ITU eliminates leap seconds from what we today call UTC.

Markus

--
Markus Kuhn, Computer Lab, Univ of Cambridge, GB
http://www.cl.cam.ac.uk/~mgk25/ | __oo_O..O_oo__

Markus Kuhn scripsit:
Reason 2: Local-time errors affect critical long-term state such as file-system timestamps much less, as these tend to be kept in UTC. (Just an hour ago, we discovered a machine that had its default time zone set to US East Coast local time instead of London local time. We were able to fix the problem on the fly without any disruption, as most system state remained unaffected. The machine had known UTC accurately to within 2 ms all along.)
Then again, there was my colleague's Windows system, which was still running on Redmond time instead of New York time. Unfortunately, we couldn't fix it, because the thousands of appointments, past and future, that he had stored in Outlook's calendar would then all be off by three hours. Consequently, his email tended to arrive from the future. -- It was dreary and wearisome. Cold clammy winter still held way in this forsaken country. The only green was the scum of livid weed on the dark greasy surfaces of the sullen waters. Dead grasses and rotting reeds loomed up in the mists like ragged shadows of long-forgotten summers. --"The Passage of the Marshes" http://www.ccil.org/~cowan

On Wed, Apr 14, 2004 at 09:17:24AM -0400, John Cowan wrote:
Markus Kuhn scripsit:
Reason 2: Local-time errors affect critical long-term state such as file-system timestamps much less, as these tend to be kept in UTC. (Just an hour ago, we discovered a machine that had its default time zone set to US East Coast local time instead of London local time. We were able to fix the problem on the fly without any disruption, as most system state remained unaffected. The machine had known UTC accurately to within 2 ms all along.)
Then again, there was my colleague's Windows system, which was still running on Redmond time instead of New York time. Unfortunately, we couldn't fix it, because the thousands of appointments, past and future, that he had stored in Outlook's calendar would then all be off by three hours.
Consequently, his email tended to arrive from the future.
you can't fix that. unless you can convince microsoft of the value of utc time and a utc time_t sort of time stamp. i know they use something loopy like "64 bit count of milliseconds since jan 1, 1600" somewhere (or something like that), but a plain old time_t is a very handy thing. even if it only has a little less than 34 years left to live. -- |-----< "CODE WARRIOR" >-----| codewarrior@daemon.org * "ah! i see you have the internet twofsonet@graffiti.com (Andrew Brown) that goes *ping*!" werdna@squooshy.com * "information is power -- share the wealth."
participants (6)
- Alex
- Andrew Brown
- D. J. Bernstein
- John Cowan
- Markus Kuhn
- Olson, Arthur David (NIH/NCI)