] Much discussion about the representation of timestamps, the
] inability to predict future leap seconds, and the effects of
] out-of-date leap-second tables,

I think if we all stipulate that some form of "display UTC" is the only representation that should be stored or forwarded, then some of the objections will disappear. If anyone wants to define such a "display UTC" representation for files or communication protocols, so be it. [Personally, I would just use an ISO string and eat the conversion and storage costs.]

Otherwise, the local representation of timestamps is a local issue, and we should just be specifying an API to manipulate such timestamps in useful ways. [I'm not sure making them arithmetic would be an advantageous step in that process.]
Bradley White writes:
I think if we all stipulate that some form of "display UTC" is the only representation that should be stored or forwarded,
You expect high-volume logging tools and network servers to convert every timestamp to year-month-day-hour-minute-second-subsecond? And you expect every program that subtracts timestamps to convert this year-month-day-hour-minute-second-subsecond back to numeric time? What exactly is the benefit of these conversions?

The UNIX approach is different. Numeric timestamps are used whenever possible. Programs that don't talk to users don't have to worry about complicated civil times. "Keep it simple, stupid."
[Personally, I would just use an ISO string and eat the conversion and storage costs.]
libtai supports ISO format, of course, but you're kidding yourself if you expect UNIX filesystems to start storing inode times in that format.

---Dan

1000 recipients, 28.8 modem, 10 seconds. http://pobox.com/~djb/qmail/mini.html
participants (2)
- Bradley White
- D. J. Bernstein