On 2024-01-21 22:45, Paul Eggert wrote:
On 2024-01-21 20:36, Brian.Inglis--- via tz wrote:
(Why 64 bits? Surely 60 bits would have been enough for real-world timestamps....)
In decimal integers, that only supports up to:

$ date -d@999999999999999 +%Y
31690708

Perhaps they wanted to ensure they could support up to:

$ date -d@9999999999999999 +%Y
316889355

;^p
I'm not sure where those decimal integers came from. A 60-bit time_t, if signed, would support a time_t range of about -5.8e+17 .. 5.8e+17 seconds. The universe is about 13.8e+9 years or about 4.4e+17 seconds old, so a 60-bit signed time_t would cover the known universe's history so far, with a goodly amount of room for the future.
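For concreteness, here is a quick back-of-the-envelope check of those ranges, a minimal C sketch (the 31556952 s/year constant is the average Gregorian year, and the signed 60-bit time_t is hypothetical, as in the discussion above):

    #include <stdio.h>

    int main(void) {
        /* Average Gregorian year: 365.2425 days of 86400 s. */
        const double YEAR = 31556952.0;
        /* A hypothetical signed 60-bit time_t spans +/- 2^59 seconds;
           a signed 64-bit time_t spans +/- 2^63 seconds. */
        const double MAX60 = 576460752303423488.0;   /* 2^59 */
        const double MAX64 = 9223372036854775808.0;  /* 2^63 */
        printf("60-bit: +/- %.1e s = +/- %.1e years\n", MAX60, MAX60 / YEAR);
        printf("64-bit: +/- %.1e s = +/- %.1e years\n", MAX64, MAX64 / YEAR);
        return 0;
    }

It prints about 1.8e+10 (18 billion) years for 60 bits and 2.9e+11 (292 billion) years for 64 bits, which is where the figures below come from.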
Does the specification prohibit defining time_t using decimal types supported on legacy mainframes? ;^>
Perhaps the POSIX standardizers were thinking that 18 billion years of future timestamps aren't enough, and that some apps need support for at least 292 billion years into the future. But what applications were they thinking of?
Also, on typical platforms where int is 32 bits, localtime stops working for time_t values greater than around 6.8e+16: struct tm's tm_year member is an int counting years since 1900, and (INT_MAX + 1900) years is about 6.8e+16 seconds. So even 60-bit time_t is overkill for today's platforms.
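A minimal sketch of that limit, assuming 64-bit time_t and 32-bit int as on typical current platforms (POSIX permits localtime to return NULL with errno set to EOVERFLOW when the broken-down year does not fit in tm_year):

    #include <errno.h>
    #include <limits.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    int main(void) {
        /* tm_year is an int counting years since 1900, so years above
           INT_MAX + 1900 are unrepresentable in struct tm. */
        printf("approximate limit: %.1e s\n",
               ((double)INT_MAX + 1900) * 31556952.0);   /* ~6.8e+16 */

        time_t t = (time_t)7e16;  /* just past that limit */
        errno = 0;
        if (localtime(&t) == NULL)
            printf("localtime failed: %s\n", strerror(errno));
        return 0;
    }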
Also, the earth's rotation will become incompatible with POSIX long before 60-bit time_t rolls over....
AI will predict those dates, so they do not have to change the standard. ;^>
-- 
Take care. Thanks,
Brian Inglis, Calgary, Alberta, Canada

"Perfection is achieved not when there is no more to add,
but when there is no more to cut."
-- Antoine de Saint-Exupéry