John Cowan <cowan@drv.cbc.com> wrote:

> ... I favor the Java convention: a 64-bit signed integer representing milliseconds, with the same epoch as Unix. That provides sufficient resolution for normal purposes (anything that *requires* microsecond resolution probably requires microcode, embedded programming, or the like), and has a range clear back to the Carboniferous Period (~300 My B.P.).
Just for the record: the POSIX.1:1996 standard clocks provide nanosecond resolution [clock_gettime(), clock_settime(), clock_getres()], and modern processors such as the Pentium contain 64-bit bus-cycle counters that applications and operating systems can use to generate time stamps with near-nanosecond resolution.

The state of the art in host clock synchronization is that a SunOS system clock can be synchronized relative to UTC to better than one microsecond, using a kernel PLL and a PPS input from a time reference; Frank Kardel demonstrated this some years ago with an informatik.uni-erlangen.de NTP server using both GPS and DCF77 receivers. I therefore consider anything less than 64-bit timestamps with nanosecond resolution in a new portable clock interface to be somewhat old-fashioned and ignorant of current hardware capabilities.

POSIX uses

    struct timespec {
        time_t tv_sec;   /* seconds */
        long   tv_nsec;  /* nanoseconds */
    };

where tv_nsec is in the range 0 to 999_999_999. Although POSIX does not yet provide for this, an elegant aspect of the representation is that during a positive leap second the range 1_000_000_000 to 1_999_999_999 could be used in the nanosecond field.

Markus

--
Dipl.-Inf. Markus Kuhn, Schlehenweg 9, D-91080 Uttenreuth, Germany
mkuhn at acm.org, http://wwwcip.informatik.uni-erlangen.de/~mskuhn