On Feb 5, 2018, at 1:38 PM, Paul Eggert <eggert@cs.ucla.edu> wrote:

> On 02/05/2018 10:21 AM, Howard Hinnant wrote:
>> On Feb 5, 2018, at 1:01 PM, Paul Eggert <eggert@cs.ucla.edu> wrote:
>>> In that case, how about if we follow POSIX's lead and specify nanosecond resolution as the highest the format supports? Although that's likely overkill, it does match a widely used standard; and better overkill than underkill.
>>>
>>> On Feb 4, 2018, at 7:21 PM, Howard Hinnant <howard.hinnant@gmail.com> wrote:
>>>> In choosing a finest supported precision, I would encourage the choice of something coarser than nanoseconds.
>>>
>>> Suppose an old UT offset uses sexagesimal notation, or something derived from it? In that case, the exact offset might not be representable as a decimal number, and nanosecond resolution will provide a comfortable excess of precision. Sexagesimal is not entirely hypothetical, as we have good evidence that civil time in Vietnam from 1906 to 1911 was 104° 17′ 17″ east of Paris.
>
> I guess I'm not seeing the harm in going with nanoseconds in the data format; if a downstream user wants less precision, they can easily round. And following Steve Allen's lead, we can mention in the documentation that there's no practical use for sub-millisecond precision in these old timestamps.
If two clients (on different platforms) want to maintain the invariant that equal time_points remain equal after mapping, then they must operate at the precision of the mapping (or finer). I cannot overstate the importance of maintaining this invariant, not just within a single application, but across disparate applications built in different programming languages, using different tzdb compilers, and running on different computers. A downstream user cannot choose less precision than the IANA mapping and still maintain this invariant.

Howard