Re: [tz] On ISO country for timezones (Re: Classifying IDs)
I believe you are missing an important part of this discussion. There is a strong need for a repository of pre-1970 data, especially for historical research applications. As this is the only tz database, tz has also become the main database for this older data. Researchers need such a repository for any populated place on earth. If I want to add the pre-1970 tz history for Ottawa, Canada, where should it be stored? Since it is called tz, and it already contains other pre-1970 data, tz seems the appropriate place. But if no simple method of adding open-ended historical data is negotiated here, then the only other option is to fork tz and work on it as a separate project.

Secondly, since the number of locations that could eventually be supported in a historical tz database is potentially very large, it cannot be micromanaged. This suggests that each historical zone should use an international standard for naming (ISO) for simplicity and clarity, e.g.:

    Canada/Ontario/Ottawa

Are we going to have two tz databases, or just one? That is the question here.

Sent from my Galaxy

-------- Original message --------
From: Steffen Nurpmeso via tz <tz@iana.org>
Date: 2021-10-07 09:35 (GMT-05:00)
To: Stephen Colebourne <scolebourne@joda.org>
Cc: tz@iana.org
Subject: Re: [tz] On ISO country for timezones (Re: Classifying IDs)

Stephen Colebourne wrote in
 <CACzrW9Cau-q2e+yoBcg=7Rztu19zZN582Xca9ZMYbz=H_0iOpw@mail.gmail.com>:
 |On Thu, 7 Oct 2021 at 07:08, Watson Ladd via tz <tz@iana.org> wrote:
 ...
 |I agree with backwards compatibility. The primary concern here is
 |whether an ID is considered deprecated or not.

IDs shall be stable and not change at all.
At maximum they should be moved to backward, for example if a zone is renamed (which may happen, for example, for Ukraine in a not too distant future, if that is still necessary then, and it looks as if it would be). I would always have been all for making infinite stability of IDs a documented assertion.

Even if not, your "six months" claim is nothing but an aggressive statement. I think this entire thread is shadowboxing and noise for nothing.

Looking at the interface of your Java framework, I think getID() should simply return the correct ID, which may be a link if it is one, just in case this is not what getID() already returns.

In my opinion there are only two problems with IANA TZ and how Paul Eggert manages it as a maintainer. That is the same Paul Eggert who has contributed to this software since 1995, an astonishing 26 years. Corroding the maintainership with a continuous stream of noise is disgusting.

The first is the combining of datasets into equal-post-1970 bundles, which has been going on for many years. But the data is there, it is in backzone, and everybody can easily install a complete TZ DB on their own initiative. Yet no one did, even though many packagers are on this list. This is schizophrenic. But maybe it is only because black and yellow people do not matter as much as some blue-eyed white people from northern Europe, which does not truly come as a surprise given the audience who possibly reads this now, and given the fact that colonisation ended only about 55 years ago, and de facto was only turned from armed political into armed material oppression.

The second is that documentation did not keep pace with the code improvements, as has recently been shown for tzselect(8) and the -t option.
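[Editor's note: the getID() under discussion is Joda-Time's, but the same question arises in the JDK's standard java.time API, where an ID, canonical or link, is preserved exactly as given. A minimal sketch of that behavior; the class name is illustrative:]

```java
import java.time.ZoneId;

public class ZoneIdStability {
    public static void main(String[] args) {
        // A canonical ID round-trips unchanged.
        ZoneId canonical = ZoneId.of("America/New_York");
        System.out.println(canonical.getId()); // America/New_York

        // A link (alias) such as US/Eastern is likewise returned
        // verbatim rather than being resolved to its canonical target.
        ZoneId link = ZoneId.of("US/Eastern");
        System.out.println(link.getId()); // US/Eastern

        // normalized() only simplifies zones with a fixed offset;
        // it does not rewrite region-based IDs either.
        System.out.println(link.normalized().getId()); // US/Eastern
    }
}
```

So at least in java.time, stability of the ID the caller supplied is already the contract, links included.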
That tzselect(8) uses the administrative zone1970.tab and not the end-user-preferable zone.tab is a different thing.

If you want to enforce upon the maintainer of the TZ database that the pre-1970 data is joined back into the normal data, or that "backzone" is split into "backzone" and "unreliable", and/or that "backzone" is included by default, and/or that "ZFLAGS='-r @0'" is made the default, then create a thread and try to gain enough hums. But please stop this subversion by spreading uncertainty, even on topics which were never under discussion as far as I know, and I have read this list for a decade now.

It must anyway be said that it was nicer once, when I only had the distribution and did not know about the list :)

Ciao from Germany,

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)
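[Editor's note: the build knobs named above already exist in the tzdb Makefile. A sketch of how they would be used, assuming an unpacked tzdb release directory; the install prefix is illustrative, and exact variable behavior may differ between releases:]

```shell
# Run from an unpacked tzdb release directory, e.g. tzdb-2021e/

# Default build: for zones that are identical after 1970 and were
# merged, the pre-1970 history lives only in the "backzone" file.
make TOPDIR="$PWD/dest" install

# Compile the (less reliable) pre-1970 backzone data in as well:
make TOPDIR="$PWD/dest" PACKRATDATA=backzone install

# Conversely, truncate everything to 1970 and later by passing
# zic's -r option through ZFLAGS ("@0" is the Unix epoch):
make TOPDIR="$PWD/dest" ZFLAGS='-r @0' install
```

In other words, both a "complete history" build and a "post-1970 only" build are a one-variable change away for anyone using the project's own Makefile.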
On Oct 7, 2021, at 11:47, dpatte via tz <tz@iana.org> wrote:
> Are we going to have two tz databases, or just one? That is the question here.
This does appear to be one of the fundamental questions at this point. There are differences in the characteristics of the data pre- and post-1970. Post-1970, the data is rigorously researched and scoped to apply to a precise, quantifiable entity (often a geographical region, but with a few ‘meta-regions’ like ‘UTC’ and ‘EST5EDT’ thrown in). OTOH, pre-1970 data is characterized by a variable quality of research (ranging from quite rigorous to approaching outright guessing), with a scope of applicability that often seems to amount to little more than ¯\_(ツ)_/¯.

Simply on the basis of the amount of discussion that took place in the few weeks leading up to the release of 2021b, these differences are large enough that I think there’d be real merit to instituting some sort of formal partitioning between the two data sets. Though of course it’d actually be a decision for downstream maintainers, I think a strong argument could be made that operating-system-level services should contain only post-1970 data. If you are a user who cares about pre-1970 data, then I think you really need to be accessing the TZDB directly through an appropriate API rather than relying on whatever the OS maintainers have chosen to package.

I envision two distinct ‘products’ here: one that I’ll call ‘TZ Main’, which would be strictly limited to timestamps at or after 1970-01-01 00:00:00+00:00, and ‘TZ Historical’, which would consist of ‘TZ Main’ plus whatever additional pre-1970 data exists. While this sort of partitioning does exist in the current practice of supporting things like ‘backzone’, there are some real downsides to that approach, the primary one being the relative obscurity of such options to many users, particularly those who don’t use TZ’s makefile in their workflow. I’m wondering if it might make sense to extend such partitioning to the point where there would be two distinct data tarballs generated for each release, one for each ‘product’.
That would make it crystal clear to everyone exactly what it is that they are getting. The primary downside here, of course, would be lots of data churn and extra work during the transition, especially for those downstream users who need historical data. In the long term, however, it would surely aid the process of ensuring that the alarm clocks on a bajillion mobile devices consistently go off at the right time.

Cheers!

|---------------------------------------------------------------------|
| Frederick F. Gleason, Jr.      |            Chief Developer         |
|                                |            Paravel Systems         |
|---------------------------------------------------------------------|
| A room without books is like a body without a soul.                 |
|                                                        -- Cicero    |
|---------------------------------------------------------------------|
participants (2)
- dpatte
- Fred Gleason