tzdata2016g missing version information

As is known, the version number no longer appears in the Makefile. The release notes in NEWS say:
To support the more-accurate version number, its specification has moved from a line in the Makefile to a new source file 'version'.
The version file does not appear in the uncompressed tzdata2016g database. This is hurting my ability to report the version of the database to my clients. Can we please put a version file in the database?

Howard

On Sep 28, 2016, at 2:28 PM, Howard Hinnant <howard.hinnant@gmail.com> wrote:
As is known, the version number no longer appears in the Makefile. The release notes in NEWS say:
To support the more-accurate version number, its specification has moved from a line in the Makefile to a new source file 'version'.
The version file does not appear in the uncompressed tzdata2016g database. This is hurting my ability to report the version of the database to my clients. Can we please put a version file in the database?
Though I suppose I could scrape the version number out of the NEWS file… It would be nice to have an agreed-upon *stable* place to read the version.

Howard

On Wed, Sep 28, 2016 at 2:33 PM, Howard Hinnant <howard.hinnant@gmail.com> wrote:
The version file does not appear in the uncompressed tzdata2016g database. This is hurting my ability to report the version of the database to my clients. Can we please put a version file in the database?
Though I suppose I could scrape the version number out of the NEWS file…
Why can't you deduce the version from the name of the file? But I agree: non-git based distributions should include the generated version file.

On Sep 28, 2016, at 2:41 PM, Alexander Belopolsky <alexander.belopolsky@gmail.com> wrote:
On Wed, Sep 28, 2016 at 2:33 PM, Howard Hinnant <howard.hinnant@gmail.com> wrote:
The version file does not appear in the uncompressed tzdata2016g database. This is hurting my ability to report the version of the database to my clients. Can we please put a version file in the database?
Though I suppose I could scrape the version number out of the NEWS file…
Why can't you deduce the version from the name of the file? But I agree: non-git based distributions should include the generated version file.
I don’t want to keep the tarball around. I just have the expanded tzdata directory and its contents.

Howard

Yup, Noda Time does the same thing where possible: https://github.com/nodatime/nodatime/blob/master/src/NodaTime.TzdbCompiler/T... (I do infer the version from the filename if we have it.) But yes, another vote for a stable, easily-machine-readable place for the version number to be included in the data tar.gz file.

Jon

On 28 September 2016 at 19:44, Howard Hinnant <howard.hinnant@gmail.com> wrote:
On Sep 28, 2016, at 2:41 PM, Alexander Belopolsky <alexander.belopolsky@gmail.com> wrote:
On Wed, Sep 28, 2016 at 2:33 PM, Howard Hinnant <howard.hinnant@gmail.com> wrote:
The version file does not appear in the uncompressed tzdata2016g database. This is hurting my ability to report the version of the database to my clients. Can we please put a version file in the database?
Though I suppose I could scrape the version number out of the NEWS file…
Why can't you deduce the version from the name of the file? But I agree: non-git based distributions should include the generated version file.
I don’t want to keep the tarball around. I just have the expanded tzdata directory and its contents.
Howard

On Wed, Sep 28, 2016 at 3:26 PM, Jon Skeet <skeet@pobox.com> wrote:
another vote for a stable, easily-machine-readable place for the version number to be included in the data tar.gz file.
I would also ask for this file to be installable in the zoneinfo directory. On many systems it is impossible to figure out what version of tzdata they use.

I am also in favor of this. See this thread from last year where I suggested something similar: https://mm.icann.org/pipermail/tz/2015-October/022807.html

On 09/28/2016 03:35 PM, Alexander Belopolsky wrote:
On Wed, Sep 28, 2016 at 3:26 PM, Jon Skeet <skeet@pobox.com <mailto:skeet@pobox.com>> wrote:
another vote for a stable, easily-machine-readable place for the version number to be included in the data tar.gz file.
I would also ask for this file to be installable in the zoneinfo directory. On many systems it is impossible to figure out what version of tzdata they use.

On 09/28/2016 12:35 PM, Alexander Belopolsky wrote:
I would also ask for this file to be installable in the zoneinfo directory. On many systems it is impossible to figure out what version of tzdata they use.
It should be installable, for those who want to install it there. There is a bit of fun if someone runs the shell command 'TZ="version" date'; this ends up being equivalent to 'TZ="GMT0" date' in the reference implementation (assuming the version number does not begin with "TZif"!). If you do that sort of thing and have local patches, it may take some effort to ensure the version info is correct. One possibility is to use Git, commit your local changes before installing, and have a reliable way to map the installed version numbers back to your own repository.

On Wed, Sep 28, 2016 at 4:12 PM, Paul Eggert <eggert@cs.ucla.edu> wrote:
On 09/28/2016 12:35 PM, Alexander Belopolsky wrote:
I would also ask for this file to be installable in the zoneinfo directory. On many systems it is impossible to figure out what version of tzdata they use.
It should be installable, for those who want to install it there.
By "installable" I mean "make install" should copy it to $TZDIR.
There is a bit of fun if someone runs the shell command 'TZ="version" date'; this ends up being equivalent to 'TZ="GMT0" date' in the reference implementation (assuming the version number does not begin with "TZif"!).
You can have the same fun with TZ=zone.tab now.

On 28 September 2016 at 16:31, Alexander Belopolsky <alexander.belopolsky@gmail.com> wrote:
On Wed, Sep 28, 2016 at 4:12 PM, Paul Eggert <eggert@cs.ucla.edu> wrote:
On 09/28/2016 12:35 PM, Alexander Belopolsky wrote:
I would also ask for this file to be installable in the zoneinfo directory. On many systems it is impossible to figure out what version of tzdata they use.
It should be installable, for those who want to install it there.
By "installable" I mean "make install" should copy it to $TZDIR.
+1, while we're already touching this process.

--
Tim Parenti

Alexander Belopolsky wrote:
By "installable" I mean "make install" should copy it to $TZDIR.
BTW, maybe this is a silly question, but I don't know the answer and have never tried it: if an application is running and using e.g. glibc to get particular time zone information: are the changes detected automatically at runtime and become effective for the application when a new version of the TZ DB is installed, or only after the application or run-time library is reloaded?

Martin

On 09/29/2016 01:21 AM, Martin Burnicki wrote:
are the changes detected automatically at runtime and become effective for the application when a new version of the TZ DB is installed, or only after the application or run-time library is reloaded?
This depends on the library and on the application. Typically reloading is required, as this improves efficiency in the typical case. That's how the reference code works.

The issue is not limited to updating tzdata version. It can also occur when the system changes the wall-clock time zone, e.g., by changing /etc/localtime to point to a different file. On mobile devices this can occur when you (say) cross the Trammell Bridge from Blountstown to Bristol in Florida, to go from "slow time" (CST) to "fast time" (EST).

Some applications do tricks like 'setenv ("TZ", getenv ("TZ"))' to cajole the library into the heavyweight operation of reloading a changed zone file. The code is actually more complicated than that, as some libraries notice that you haven't actually changed TZ and so do nothing.

My source for "slow" and "fast time": Klinkenberg J. Real Florida: time and time again. St Petersburg Times 2004-04-04. http://www.sptimes.com/2004/04/04/Floridian/Real_Florida__Time_an.shtml

On Sep 30, 2016, at 8:43 AM, Paul Eggert <eggert@cs.ucla.edu> wrote:
The issue is not limited to updating tzdata version. It can also occur when the system changes the wall-clock time zone, e.g., by changing /etc/localtime to point to a different file. On mobile devices this can occur when you (say) cross the Trammell Bridge from Blountstown to Bristol in Florida, to go from "slow time" (CST) to "fast time" (EST).
For what it's worth, Darwin (macOS, iOS, etc.) will detect at least some time zone changes and the next localtime() call will use the new zone. (And a MacBook * counts as a "mobile device" in this scenario, although it has to use Wi-Fi to find out where it's located, as MacBook *'s don't have GPS receivers built in.) I don't *think* it has any provisions for handling updates to the tzdb, but Apple doesn't currently distribute those except as part of Software Updates, as far as I know, and those require a reboot. (I seem to remember this coming up in a past discussion of the timeliness of tzdb updates and of vendors doing tzdb updates separate from OS dot-dot version updates.)

On 11/10/2016 01:20 PM, Guy Harris wrote:
For what it's worth, Darwin (macOS, iOS, etc.) will detect at least some time zone changes and the next localtime() call will use the new zone.
It strikes me that sometimes applications make several calls to localtime()/mktime() and expect the results to be consistent with each other. Emacs, for example, does this to guess time zone rules. These applications won't work if the time zone rules change spontaneously while applications are doing their computation. I don't see an easy way around this problem with the current Darwin API.

On Nov 10, 2016, at 1:56 PM, Paul Eggert <eggert@cs.ucla.edu> wrote:
On 11/10/2016 01:20 PM, Guy Harris wrote:
For what it's worth, Darwin (macOS, iOS, etc.) will detect at least some time zone changes and the next localtime() call will use the new zone.
It strikes me that sometimes applications make several calls to localtime()/mktime() and expect the results to be consistent with each other. Emacs, for example, does this to guess time zone rules. These applications won't work if the time zone rules change spontaneously while applications are doing their computation.
I don't see an easy way around this problem with the current Darwin API.
With the current *UNIX* API.

I don't know what Emacs is trying to compute, but perhaps there needs to be an API that atomically gives it what it's trying to compute, to the extent that, if a change happens in the middle, either it gives correct results for the old time zone or correct results for the new time zone. If giving the results for the old time zone causes a problem, then there will have to be a way to deliver "time zone changed" notifications to applications *and Emacs will have to handle those notifications*.

There are two issues:

1) applications running on UN*Xes shouldn't assume that the time zone won't change - there exists at least one UNIX(R) where it *can* change out from under you, and there may be more UN*Xes where it does or where it will on future versions;

2) applications that need more information about time zones than "what's the local time for this seconds-since-the-Epoch value" or "what's the seconds-since-the-Epoch value for this local time, if there is such a value?" may require new APIs.

On 11/10/2016 02:04 PM, Guy Harris wrote:
I don't see an easy way around this problem with the current Darwin API. With the current*UNIX* API.
The NetBSD API gives applications a way to get an immutable time zone object, so that multiple operations like localtime_rz and mktime_z can use consistent rules. This was originally designed for multithreading (so that different threads can be in different time zones), but it also has the property that time zone rules don't change once determined (which helps performance). Emacs uses this API if available, so Emacs should be reasonably immune to these races on NetBSD. NetBSD doesn't look for changes to the installed tz binary files, as Darwin does.

If the NetBSD API were implemented atop Darwin, perhaps the Darwin tzalloc operation should have a flag specifying whether the caller wants the time zone object to be immutable or potentially updated after every call. Come to think of it, perhaps we should add an options arg to the reference tzalloc too (for compatibility we'd define a new function like tzalloc but with an option arg). One option could be whether the implementation should revisit the installed tz binary file every time localtime_rz or mktime_z is called (for efficiency perhaps it could use file notification on systems where that works well). Another option, while we're on the topic, could be whether to count leap seconds. I'll CC: this to Christos Zoulas (the NetBSD time guru) to see whether he thinks this sort of thing would be a good idea.
I don't know what Emacs is trying to compute, but perhaps there needs to be an API that atomically gives it what it's trying to compute
Emacs is inferring the daylight-saving rules by making multiple calls to localtime. Essentially, Emacs wants a consistent and complete snapshot of the current rule set for all times from now into the indefinite future. Unfortunately, the API to support this sort of thing would be complicated.

On Nov 10, 2:44pm, eggert@cs.ucla.edu (Paul Eggert) wrote, in "Re: NetBSD vs Darwin timezone API (was: tzdata2016g missing version information)":

| Come to think of it, perhaps we should add an options arg to the
| reference tzalloc too (for compatibility we'd define a new function like
| tzalloc but with an option arg). One option could be whether the
| implementation should revisit the installed tz binary file every time
| localtime_rz or mktime_z is called (for efficiency perhaps it could use
| file notification on systems where that works well). Another option,
| while we're on the topic, could be whether to count leap seconds. I'll
| CC: this to Christos Zoulas (the NetBSD time guru) to see whether he
| thinks this sort of thing would be a good idea.

I think it is a good idea to have a tzalloc that auto-refreshes the zone file if it has changed. It should be fairly straightforward to implement. I propose to make this the default behavior (tzalloc returns a zone that will auto-update), since most programs would want that, and to provide a tzalloc_{frozen,noupdate,stable}() (or whatever we decide to call it) for those who want the data to stay the same since the open. If you like this and decide on the name, I can provide a sample implementation.

Best,

christos

On 11/10/2016 02:56 PM, Christos Zoulas wrote:
If you like this and decide on the name, I can provide a sample implementation.
Yes, that sounds good. I suggest a more-generic name like 'tzallocate', which would work better if later we add more flags such as leap-second handling. E.g., 'timezone_t tzallocate(char const *name, int flags);'. tzalloc(NAME) could be equivalent to tzallocate(NAME, 0).

christos@zoulas.com (Christos Zoulas) writes:
I think it is a good idea to have a tzalloc that auto-refreshes the zone file if it has changed. It should be fairly straightforward to implement. I propose to make this the default behavior (tzalloc returns a zone that will auto-update) since most programs would want that
FWIW, I beg to differ on that. I know this will break Postgres, which is doing pretty much the same thing as Emacs, ie relying on a lot of calls to localtime() to infer the system's active timezone. We only do that once during database initialization, so we're not badly exposed, but nonetheless this is another data point suggesting that programs in the field do have this assumption.

I think that for backwards compatibility's sake, if nothing else, the default behavior should be no auto-update. I concur with Paul's suggestion to make it be controlled by a bit in a flags word, whichever way the default goes.

Also ... what will you do for localtime(), wherein there's no explicit initialization call whereby callers could say which behavior they want?

regards, tom lane

On Nov 10, 2016, at 3:17 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
FWIW, I beg to differ on that. I know this will break Postgres, which is doing pretty much the same thing as Emacs, ie relying on a lot of calls to localtime() to infer the system's active timezone. We only do that once during database initialization, so we're not badly exposed, but nonetheless this is another data point suggesting that programs in the field do have this assumption.
So I shouldn't run Postgres on any Mac that I can conveniently carry while it's running, because if I carry the Mac into a car, train, airplane, or boat, the system's active time zone might change out from under Postgres. :-) Why does Postgres need to know the system's active time zone?

On Nov 10, 6:17pm, tgl@sss.pgh.pa.us (Tom Lane) wrote, in "Re: [tz] NetBSD vs Darwin timezone API (was: tzdata2016g missing version information)":

| christos@zoulas.com (Christos Zoulas) writes:
| > I think it is a good idea to have a tzalloc that auto-refreshes the zone
| > file if it has changed. It should be fairly straightforward to implement.
| > I propose to make this the default behavior (tzalloc returns a zone that
| > will auto-update) since most programs would want that
|
| FWIW, I beg to differ on that. I know this will break Postgres, which
| is doing pretty much the same thing as Emacs, ie relying on a lot of calls
| to localtime() to infer the system's active timezone. We only do that
| once during database initialization, so we're not badly exposed, but
| nonetheless this is another data point suggesting that programs in the
| field do have this assumption.
|
| I think that for backwards compatibility's sake, if nothing else, the
| default behavior should be no auto-update.

This affects only the _z calls; only if you are using the tzalloc/*_z API are you going to see this difference. So these very few programs can adjust, and these are the people who live dangerously at the forefront of the API; in most cases they will welcome not having to change code :-)

| I concur with Paul's suggestion to make it be controlled by a bit in a
| flags word, whichever way the default goes.

Yes, that is fine with me.

| Also ... what will you do for localtime() wherein there's no explicit
| initialization call whereby callers could say which behavior they want?

Nothing happens to that. If you use localtime() you don't get autoupdates. We could add a new call tzsetflags(int flags); to turn this on for the statically allocated global timezone... I am not sure if this is a good idea or not. I guess Darwin is doing it?

christos

On Nov 10, 2016, at 4:16 PM, Christos Zoulas <christos@zoulas.com> wrote:
On Nov 10, 6:17pm, tgl@sss.pgh.pa.us (Tom Lane) wrote: -- Subject: Re: [tz] NetBSD vs Darwin timezone API (was: tzdata2016g missing
| Also ... what will you do for localtime() wherein there's no explicit | initialization call whereby callers could say which behavior they want?
Nothing happens to that. If you use localtime() you don't get autoupdates. We could add a new call tzsetflags(int flags); to turn this on for the statically allocated global timezone... I am not sure if this is a good idea or not. I guess Darwin is doing it?
Yes. If you compile and run this program:

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        time_t now;

        for (;;) {
            now = time(NULL);
            printf("%s", ctime(&now));
            sleep(5);
        }
        return 0;
    }

and then go into the "Time Zone" pane of the "Date & Time" page of System Preferences and change your system's time zone, and wait for the next time printout, the next printout will reflect the local time in *that* zone, *not* the zone that was in effect when the program was started - and the same will happen for the next change, etc. (I just tested this a few minutes ago.) It'll probably happen if you stick with "Set time zone automatically using current location", and take the machine across a tzdb zone boundary where the current offset from UTC changes; I'm too far from such a boundary to test it right now. :-)

This bounced with "Diagnostic-Code: SMTP; 551 5.7.1 Rejected due to SPF mismatch" when Tom was CCed, perhaps because my From: address's domain name isn't the same as my mail server's domain name, or because they don't like sonic.net, or something, so I'm resending this to the list, *without* CCing Tom, in the hopes that it'll get delivered to him via the list, in a fashion more acceptable to the machine in question.

On Nov 10, 2016, at 3:17 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
FWIW, I beg to differ on that. I know this will break Postgres, which is doing pretty much the same thing as Emacs, ie relying on a lot of calls to localtime() to infer the system's active timezone. We only do that once during database initialization, so we're not badly exposed, but nonetheless this is another data point suggesting that programs in the field do have this assumption.
So I shouldn't run Postgres on any Mac that I can conveniently carry while it's running, because if I carry the Mac into a car, train, airplane, or boat, the system's active time zone might change out from under Postgres. :-) Why does Postgres need to know the system's active time zone?

Guy Harris <guy@alum.mit.edu> writes:
On Nov 10, 2016, at 3:17 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
FWIW, I beg to differ on that. I know this will break Postgres, which is doing pretty much the same thing as Emacs, ie relying on a lot of calls to localtime() to infer the system's active timezone. We only do that once during database initialization, so we're not badly exposed, but nonetheless this is another data point suggesting that programs in the field do have this assumption.
Why does Postgres need to know the system's active time zone?
To select a reasonable default for the "timezone" setting. As I said, it only happens once during initialization, and it's probably not very likely that you'd be running initdb while passing through a zone boundary. But it is an illustration that Emacs isn't the only program out there that expects consistent results across multiple localtime calls.
This bounced with "Diagnostic-Code: SMTP; 551 5.7.1 Rejected due to SPF mismatch" when Tom was CCed, perhaps because my From: address's domain name isn't the same as my mail server's domain name, or
Sorry about that ... experimental spam filtering. But you should think twice about sending email claiming to be from an MIT address out of servers that are not MIT's. It's a good way to get blocked, and to get your mail provider's servers blocked too.

regards, tom lane

On Nov 10, 2016, at 7:52 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Guy Harris <guy@alum.mit.edu> writes:
On Nov 10, 2016, at 3:17 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
FWIW, I beg to differ on that. I know this will break Postgres, which is doing pretty much the same thing as Emacs, ie relying on a lot of calls to localtime() to infer the system's active timezone. We only do that once during database initialization, so we're not badly exposed, but nonetheless this is another data point suggesting that programs in the field do have this assumption.
Why does Postgres need to know the system's active time zone?
To select a reasonable default for the "timezone" setting.
OK, it says here: https://www.postgresql.org/docs/current/static/datatype-datetime.html#DATATY...

"PostgreSQL allows you to specify time zones in three different forms:

* A full time zone name, for example America/New_York. The recognized time zone names are listed in the pg_timezone_names view (see Section 50.80). PostgreSQL uses the widely-used IANA time zone data for this purpose, so the same time zone names are also recognized by much other software.

* A time zone abbreviation, for example PST. Such a specification merely defines a particular offset from UTC, in contrast to full time zone names which can imply a set of daylight savings transition-date rules as well. The recognized abbreviations are listed in the pg_timezone_abbrevs view (see Section 50.79). You cannot set the configuration parameters TimeZone or log_timezone to a time zone abbreviation, but you can use abbreviations in date/time input values and with the AT TIME ZONE operator.

* In addition to the timezone names and abbreviations, PostgreSQL will accept POSIX-style time zone specifications of the form STDoffset or STDoffsetDST, where STD is a zone abbreviation, offset is a numeric offset in hours west from UTC, and DST is an optional daylight-savings zone abbreviation, assumed to stand for one hour ahead of the given offset. For example, if EST5EDT were not already a recognized zone name, it would be accepted and would be functionally equivalent to United States East Coast time. In this syntax, a zone abbreviation can be a string of letters, or an arbitrary string surrounded by angle brackets (<>). When a daylight-savings zone abbreviation is present, it is assumed to be used according to the same daylight-savings transition rules used in the IANA time zone database's posixrules entry. In a standard PostgreSQL installation, posixrules is the same as US/Eastern, so that POSIX-style time zone specifications follow USA daylight-savings rules. If needed, you can adjust this behavior by replacing the posixrules file."

If that's the time zone setting you're referring to, the first and third of those sound like standard TZ settings. So what are the localtime() calls doing? Is this an attempt, if the TZ environment variable isn't set, to guess which of the tzdb zones is in effect on the system? Or is it an attempt to determine the tzdb zone on systems that *don't* use the tzdb for UTC-to-local-time mappings (Windows, HP-UX?, etc.)? Or is it an attempt to deal with the "time zone abbreviation" case?

Note also that, as I indicated, on a Mac, the system time zone setting is not guaranteed to remain the same during the lifetime of a process. I don't know what *other* systems that have a "where in the world am I" OS service do in that regard, but at least some of them might support the same thing Darwin does.
This bounced with "Diagnostic-Code: SMTP; 551 5.7.1 Rejected due to SPF mismatch" when Tom was CCed, perhaps because my From: address's domain name isn't the same as my mail server's domain name, or
Sorry about that ... experimental spam filtering.
Still bouncing.
But you should think twice about sending email claiming to be from an MIT address out of servers that are not MIT's. It's a good way to get blocked, and to get your mail provider's servers blocked too.
What alternative to the MIT Alumni Association's email forwarding service would you suggest to allow me to use an email address independent of my current email service provider (and that would allow me to, if necessary, forward email to that address to more than one other email address, which I've done in the past)? I'm not doing that just for the lulz.

Guy Harris <guy@alum.mit.edu> writes:
On Nov 10, 2016, at 7:52 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Guy Harris <guy@alum.mit.edu> writes:
Why does Postgres need to know the system's active time zone?
To select a reasonable default for the "timezone" setting.
So what are the localtime() calls doing? Is this an attempt, if the TZ environment variable isn't set, to guess which of the tzdb zones is in effect on the system? Or is it an attempt to determine the tzdb zone on systems that *don't* use the tzdb for UTC-to-local-time mappings (Windows, HP-UX?, etc.)? Or is it an attempt to deal with the "time zone abbreviation" case?
First and second of those. We use the tz database internally, and want to pick a zone that best matches what we see the system doing, without assuming that the system is using tz (or maybe it's using a different version of the tz database than we have). The matching algorithm underwent quite some rejiggering when it was developed back in 2004, but we've heard few complaints since then: https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/bin/initdb/...
But you should think twice about sending email claiming to be from an MIT address out of servers that are not MIT's. It's a good way to get blocked, and to get your mail provider's servers blocked too.
What alternative to the MIT Alumni Association's email forwarding service would you suggest to allow me to use an email address independent of my current email service provider (and that would allow me to, if necessary, forward email to that address to more than one other email address, which I've done in the past)? I'm not doing that just for the lulz.
If their intention is to let their subscribers send mail from anywhere at all, my suggestion is that they not advertise an SPF record.

regards, tom lane

On Thu, Nov 10, 2016, at 23:23, Guy Harris wrote:
What alternative to the MIT Alumni Association's email forwarding service would you suggest to allow me to use an email address independent of my current email service provider (and that would allow me to, if necessary, forward email to that address to more than one other email address, which I've done in the past)? I'm not doing that just for the lulz.
Is it possible for you to change your "envelope from" (the value sent in SMTP "MAIL FROM" commands) to a different value, reflecting the domain you're actually using as a mail server, while leaving your "From:" header alone? As I understand it, that's how these things are 'meant to be done' anyway. I'd *hope* that this would resolve the issue (it's SPF, not FPF, after all), but there's only one way to find out.

On Thu, Nov 10, 2016, at 23:23, Guy Harris wrote:
What alternative to the MIT Alumni Association's email forwarding service would you suggest to allow me to use an email address independent of my current email service provider (and that would allow me to, if necessary, forward email to that address to more than one other email address, which I've done in the past)? I'm not doing that just for the lulz.
On further reading, MIT recommends using their SMTP server to send email from their domain. Is there any particular reason that's objectionable? With the use of SPF, this seems aimed at preventing anyone from sending email from you without having your username and password. https://alum.mit.edu/help/EmailForwardingFAQ#a11
How can I send an email from my alum.mit.edu address? You can use the online form.
You can also configure your email client (e.g. Outlook or Thunderbird) to use an SMTP (or outgoing) server that's available exclusively for MIT alumni. To do so, you will need to open your email client and specify outgoing-alum.mit.edu as your SMTP (or outgoing) mail server. Typically the SMTP server information can be found in your email client under menu items called Options, Preferences, or Tools. You should use port 465 (SSL) or 587 (TLS). Please note that your Infinite Connection username and password are required for SMTP server authentication. You will also need to configure your email client so that your alum.mit.edu address is specified as the From address.
Fixing the envelope from to match your server as I suggested before may still work - messages delivered through the list work fine and they still have "header.from=alum.mit.edu", but it validates the "smtp.mailfrom=tz-bounces@iana.org" instead. It does seem to be checking something called "has-list-id=yes" though, so it's unclear.

On 11/10/2016 03:17 PM, Tom Lane wrote:
I think that for backwards compatibility's sake, if nothing else, the default behavior should be no auto-update.
As Christos says, we're talking only about localtime_rz and mktime_z. Emacs uses these on NetBSD, and typically will want autoupdate (because it wants to show the current time in the mode line, for example) but sometimes will not (because it wants to make several related requests to infer time zone rules). The Emacs Lisp API currently does not provide a way to request or to suppress autoupdate, so that will have to be added to the Elisp API regardless of what the tzallocate default is, and so this does not give us much guidance on what the tzallocate default should be.

localtime and localtime_r could go either way; that is, they can invoke tzallocate and specify the autoupdate flag, regardless of the default. There are reasonable arguments in both directions. Darwin localtime and localtime_r do autoupdate, and grab a lock which is a performance bottleneck in multithreaded applications. NetBSD localtime and localtime_r do not do autoupdate, which fails to track changes.

All in all I'm now inclined to say that the default for tzallocate autoupdate should be off, as that's the traditional behavior, but this is merely a mild preference.

On Nov 10, 2016, at 2:44 PM, Paul Eggert <eggert@cs.ucla.edu> wrote:
On 11/10/2016 02:04 PM, Guy Harris wrote:
I don't see an easy way around this problem with the current Darwin API. With the current *UNIX* API.
The NetBSD API gives applications a way to get an immutable time zone object, so that multiple operations like localtime_rz and mktime_z can use consistent rules. This was originally designed for multithreading (so that different threads can be in different time zones) but it also has the property that time zone rules don't change once determined (which helps performance). Emacs uses this API if available, so Emacs should be reasonably immune to these races on NetBSD.
NetBSD doesn't look for changes to the installed tz binary files, as Darwin does.
As I remember from the discussions I mentioned, Darwin doesn't do so, either. What it *does* have are notifications (sent, I think, using the "notification" mechanism - "man 3 notify" on macOS) sent out when the *current time zone* changes (either manually from System Preferences or automatically from a location change).
If the NetBSD API were implemented atop Darwin, perhaps the Darwin tzalloc operation should have a flag specifying whether the caller wants the time zone object to be immutable or potentially updated after every call.
There are two issues here:

1) is a timezone object immutable across changes to *the time zone data files*?

2) is a timezone object obtained from tzalloc(NULL) immutable across changes to *which tzdb zone is the current zone*?

The only place where the current Darwin behavior is an issue is, as far as I know, 2), as I don't think changes to the time zone files provoke a reload of the file.

The tzalloc() man page says

    A NULL pointer may be passed to tzalloc() instead of a timezone
    name, to refer to the current system timezone.

The question is whether "current timezone" means "current as of the time it's called" or "special time zone object that changes when the current time zone changes". I'm guessing that it means the former on NetBSD.

For systems that support the time zone changing out from under applications, I can see two ways to handle current time zone changes for tzalloc() (there may be more ways):

1) the timezone object returned by tzalloc(NULL) is mutable if the current time zone changes, and the object returned by tzalloc_flags(NULL, TZA_MUTABLE) is mutable if the tzdata changes or the current time zone changes;

2) the timezone object returned by tzalloc(NULL) is immutable, the object returned by tzalloc_flags(NULL, TZA_MUTABLE) is mutable only if the tzdata changes, *and* we have an API that delivers "the current time zone changed" events (the reference implementation would never deliver an event, but implementations based on the reference implementation, and independent implementations, could).

A third possibility is to have the objects *always* be immutable, and deliver both "current time zone changed" and "tzdata was updated" events.
The behavior of localtime() for a current time zone change probably should *not* be specified - the only specification that won't result in Apple ignoring the specification is "localtime() always tracks the current time zone, even if it changes", and there are probably other systems that would reject *requiring* it to track the current time zone (as they'd have to implement something to detect those changes in arbitrary programs using localtime() etc., even something as simple as

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        time_t now;

        for (;;) {
            now = time(NULL);
            printf("%s", ctime(&now));
            sleep(5);
        }
        return 0;
    }

which *does*, in fact, track time zone changes on macOS - I just tested it on my macOS Sierra machine with manual zone changes).
I don't know what Emacs is trying to compute, but perhaps there needs to be an API that atomically gives it what it's trying to compute
Emacs is inferring the daylight-saving rules by making multiple calls to localtime. Essentially, Emacs wants a consistent and complete snapshot of the current rule set for all times from now into the indefinite future.
To what use does it put that snapshot? (And is it assuming it's running on an immobile machine?)

On 11/10/2016 05:12 PM, Guy Harris wrote:
Emacs wants a consistent and complete snapshot of the current rule set for all times from now into the indefinite future. To what use does it put that snapshot?
It's the calendar code contributed by E.M. Reingold. As I recall, Emacs can tell you what day and time DST is scheduled to begin in the year 2039, even on a platform with signed 32-bit time_t so that localtime/mktime do not work that far into the future. The calendar code uses Y-M-D arithmetic to work around the Y2038 problem. (Lots of hairy stuff there, written back when you couldn't reasonably tell people to get 64-bit machines but people still wanted to calculate the date of Easter 1066 in the Julian calendar, that sort of thing.)

On 11/11/16 01:12, Guy Harris wrote:
1) is a timezone object immutable across changes to *the time zone data files*?
Ignoring the full update to the tzdata set for a moment: this was one of the KEY elements for tzdist. A call to tzdist would flag that there was a change to an offset that MAY affect the time object one is looking at. The whole point of that discussion was that one potentially has no idea if the time actually needs to change.

If the event being calendared is at a fixed UTC time ... a video link slot to another timezone ... then the time is 'immutable' (why not just say fixed!) so one has to change the local time, but if the local time is fixed, the other calendars need updating instead.

Original discussions on tzdist were based on the assumption that tzdist simply provided the CURRENT offsets, but one HAS to keep the tz version as a key element of any time object. This is where genealogical data that was normalized in the past has become corrupt, simply because the tzdata set used to normalize it was not recorded. One needs more than just the 'current' tzdata; one needs a means of establishing that ... having got a set of data which was encoded using one tz version, one can if necessary update it to another for comparison.

But this STILL fails if the event is fixed to local time rather than UTC time, such as one flagged by astronomical events that can make a DST change dependent on real observations. Basically we are stuffed on some events whatever we do ...

--
Lester Caine - G8HFL
-----------------------------
Contact - http://lsces.co.uk/wiki/?page=contact
L.S.Caine Electronic Services - http://lsces.co.uk
EnquirySolve - http://enquirysolve.com/
Model Engineers Digital Workshop - http://medw.co.uk
Rainbow Digital Media - http://rainbowdigitalmedia.co.uk

On Nov 10, 2016, at 2:44 PM, Paul Eggert <eggert@cs.ucla.edu> wrote:
On 11/10/2016 02:04 PM, Guy Harris wrote:
I don't see an easy way around this problem with the current Darwin API. With the current *UNIX* API.
The NetBSD API gives applications a way to get an immutable time zone object, so that multiple operations like localtime_rz and mktime_z can use consistent rules. This was originally designed for multithreading (so that different threads can be in different time zones) but it also has the property that time zone rules don't change once determined (which helps performance). Emacs uses this API if available, so Emacs should be reasonably immune to these races on NetBSD.
Speaking of the NetBSD API, neither tzgetname() nor tzgetgmtoff() fully support the tzdb; they both appear to assume that, for any tzdb zone, there are such things as *the* abbreviation for standard or summer time and *the* offset from GMT for standard or summer time. That's not the case for all tzdb zones - with the addition of all the Local Mean Time entries, is it true for *any* tzdb zones any more? - so if we were to provide similar APIs, they'd have to take something such as a time_t as an argument, rather than a "standard or summer time?" flag.

Guy Harris wrote:
neither tzgetname() nor tzgetgmtoff() fully support the tzdb ... if we were to provide similar APIs, they'd have to take something such as a time_t as an argument
I don't see the need, as one can call localtime and get the time zone abbreviation from tm_zone. This works on NetBSD, which defines tm_zone, so those two functions are unnecessary there. Any platform lacking tm_zone should add it instead of adding those two functions.

On Nov 11, 2016, at 3:22 PM, Paul Eggert <eggert@cs.ucla.edu> wrote:
Guy Harris wrote:
neither tzgetname() nor tzgetgmtoff() fully support the tzdb ... if we were to provide similar APIs, they'd have to take something such as a time_t as an argument
I don't see the need, as one can call localtime and get the time zone abbreviation from tm_zone. This works on NetBSD, which defines tm_zone, so those two functions are unnecessary there. Any platform lacking tm_zone should add it instead of adding those two functions.
So, if we were to provide tzalloc() and tzfree(), we wouldn't provide tzgetname() or tzgetgmtoff()? So, in past discussions, what conclusion did we come to, if any, about the lifetime of memory pointed to by tm_zone?

Guy Harris wrote:
what conclusion did we come to, if any, about the lifetime of memory pointed to by tm_zone?
In the reference implementation and in NetBSD, the lifetime is until the corresponding timezone_t object is passed to tzfree. Come to think of it, that argues against spontaneous changes to timezone_t objects unless the changes never obsolete any existing abbreviations. I don't know about Darwin. I hope the tm_zone lifetime is not erratic (i.e., until the next time the system randomly decides to change time zones); that would be bad.

Paul wrote:
On 09/28/2016 12:35 PM, Alexander Belopolsky wrote:
I would also ask for [the version] file to be installable in the zoneinfo directory...
It should be installable, for those who want to install it there. There is a bit of fun if someone runs the shell command 'TZ="version" date';
Maybe install it in the zoneinfo dir as ".version"?

On 09/28/2016 02:30 PM, Steve Summit wrote:
Maybe install it in the zoneinfo dir as ".version"?
Perhaps we should put some syntax into the installed file, so that no matter what the version is, the file can't be misinterpreted as a tz binary file. For example, the file could be a line of the form "version='V'" rather than just "V", and we could call the file "version.sh" so that people expect it to use shell script syntax. This would give us more freedom to extend the file's format later. Calling the file "version.sh" also makes it clearer that it's not a normal data file, whose names do not contain ".". It might be better to do it that way, than to use a leading "." and make the file hidden.

On 2016-09-28 19:30, Paul Eggert wrote:
On 09/28/2016 02:30 PM, Steve Summit wrote:
Maybe install it in the zoneinfo dir as ".version"? Perhaps we should put some syntax into the installed file, so that no matter what the version is, the file can't be misinterpreted as a tz binary file. For example, the file could be a line of the form "version='V'" rather than just "V", and we could call the file "version.sh" so that people expect it to use shell script syntax. This would give us more freedom to extend the file's format later. Calling the file "version.sh" also makes it clearer that it's not a normal data file, whose names do not contain ".". It might be better to do it that way, than to use a leading "." and make the file hidden.
How about just keeping it pure data and calling it version.txt, version.list, version.version, or $PKG.version?

If right directories are generated, please install leapseconds.list, either at the root or under right, so we can check if the right data needs to be regenerated when leap seconds are updated: currently Debian installs it, CentOS does not; others? Also, if backzone is used, install that too as a flag that it was used.

To make life easier for distribution packagers, and admins of systems using distros, which is probably most (rather than assuming individuals installing under the generic TOPDIR=/usr/local, which now should probably be /usr/opt in most cases, and is fine for code), please consider defaulting installation to the standard "TOP" dirs: /etc, /usr/sbin, /usr/share/..., etc. (zic is normally installed in /usr/sbin), and take those variations into consideration in definitions and installation steps.

Please also consider adding a DOCDIR=/usr/share/doc/tzdata, defining the actual docs separate from MANS and COMMON, which includes Makefile (and should include version), and installing the docs, MANTXTS, and HTML in DOCDIR. Some distros install some docs and some web pages, and some install none (e.g. Debian) with the data.

Please also consider including your ChangeLog (from .gitignore), or generating it with git log --decorate=full (currently ~1MB: could limit it to [y-1]a..), and adding it to the docs for installation: this makes it easier to see which files changed.

--
Take care. Thanks, Brian Inglis, Calgary, Alberta, Canada

On Thu, 29 Sep 2016, Brian Inglis wrote:
On 2016-09-28 19:30, Paul Eggert wrote:
On 09/28/2016 02:30 PM, Steve Summit wrote:
Maybe install it in the zoneinfo dir as ".version"? Perhaps we should put some syntax into the installed file, so that no matter what the version is, the file can't be misinterpreted as a tz binary file. For example, the file could be a line of the form "version='V'" rather than just "V", and we could call the file "version.sh" so that people expect it to use shell script syntax. This would give us more freedom to extend the file's format later. Calling the file "version.sh" also makes it clearer that it's not a normal data file, whose names do not contain ".". It might be better to do it that way, than to use a leading "." and make the file hidden.
How about just keeping it pure data and calling it version.txt, version.list, version.version, or $PKG.version?
Or, if we anticipate additional metadata in the future, we could include a metadata.txt file similar to

    <version>2016g</version>

+------------------+--------------------------+------------------------+
| Paul Goyette     | PGP Key fingerprint:     | E-mail addresses:      |
| (Retired)        | FA29 0E3B 35AF E8AE 6651 | paul at whooppee.com   |
| Kernel Developer | 0786 F758 55DE 53BA 7731 | pgoyette at netbsd.org |
+------------------+--------------------------+------------------------+

On Thu, Sep 29, 2016, at 17:36, Paul Goyette wrote:
Or, if we anticipate additional metadata in the future, we could include a metadata.txt file similar to
<version>2016g</version>
If it's going to look like XML, it should actually be well-formed XML. Your example is, technically, but if the 'additional metadata' is envisioned to go outside the version element it won't be.

On 09/29/2016 02:15 PM, Brian Inglis wrote:
How about just keeping it pure data and calling it version.txt, version.list, version.version, or $PKG.version?
There are two disadvantages of that. First, the version string might happen to look like the start of a tz binary file (admittedly unlikely). Second and more important, this is a metadata file and there are likely to be future extensions (e.g., to specify the range of supported time stamps), so an extensible format is called for.
If right directories are generated, please install leapseconds.list, either at the root or under right, so we can check if right data needs regenerated when leapseconds are updated: currently Debian installs it, Centos does not, others?
Where does Debian install it?
Also if backzone is used, install that too as a flag that it was used.
We don't install other data source files (e.g., 'europe'). Perhaps there should be an option to install sources, though this is a bit unusual for software packages. If so, the option should install all the sources used.
To make life easier for distribution packagers, and admins of systems using distros, which is probably most (rather than assuming individuals installing under the generic TOPDIR=/usr/local, which now should probably be /usr/opt in most cases, and is fine for code) please consider defaulting installation to the standard "TOP" dirs: /etc, /usr/sbin, /usr/share/..., etc. (zic is normally installed in /usr/sbin), and take those variations into consideration in definitions and installation steps.
Is this standard written down anywhere?
Please also consider adding a DOCDIR=/usr/share/doc/tzdata, define the actual docs separate from MANS and COMMON, which includes Makefile (and should include version), and install the docs, MANTXTS, and HTML in DOCDIR. Some distros install some docs and some web pages, and some install none (e.g. Debian) with the data.
I suppose something like this could be done. It'd help to have a survey of all the places where this stuff is installed in various operating systems.
Please also consider including your ChangeLog (from .gitignore), or generating it with git log --decorate=full (currently ~1MB: could limit it to [y-1]a..), and adding it to docs for installation: this makes it easier to see what files changed.
This I'm not so sure about. My personal ChangeLog file is merely a staging area for text intended to go into commit messages, and it's not intended to be distributed. And I'm not sure it's a good idea anyway to install commit messages into the runtime environment. Software archaeologists who want commit history can easily get it from GitHub or wherever.

On Sep 29, 2016, at 5:47 PM, Paul Eggert <eggert@cs.ucla.edu> wrote:
There are two disadvantages of that. First, the version string might happen to look like the start of a tz binary file (admittedly unlikely). Second and more important, this is a metadata file and there are likely to be future extensions (e.g., to specify the range of supported time stamps), so an extensible format is called for.
Maybe zic could put the version string in every zoneinfo file. There is plenty of reserved space in the current format to hold 5-6 characters.

On 2016-09-29 15:47, Paul Eggert wrote:
On 09/29/2016 02:15 PM, Brian Inglis wrote:
If right directories are generated, please install leapseconds.list, either at the root or under right, so we can check if right data needs regenerated when leapseconds are updated: currently Debian installs it, Centos does not, others? Where does Debian install it?
Debian installs:

    /usr/share/zoneinfo/iso3166.tab
    /usr/share/zoneinfo/posixrules
    /usr/share/zoneinfo/leapseconds.list
    /usr/share/zoneinfo/zone.tab

probably to support tzselect, and to document the leap seconds used for right, as suggested.
Also if backzone is used, install that too as a flag that it was used. We don't install other data source files (e.g., 'europe'). Perhaps there should be an option to install sources, though this is a bit unusual for software packages. If so, the option should install all the sources used.
Those other data source files are good reading, but not required unless rebuilding. Generally build sources are just downloaded by the package manager to the current directory, as that is assumed to be where you will use them.
To make life easier for distribution packagers, and admins of systems using distros, which is probably most (rather than assuming individuals installing under the generic TOPDIR=/usr/local, which now should probably be /usr/opt in most cases, and is fine for code) please consider defaulting installation to the standard "TOP" dirs: /etc, /usr/sbin, /usr/share/..., etc. (zic is normally installed in /usr/sbin), and take those variations into consideration in definitions and installation steps. Is this standard written down anywhere?
The main one is FHS Filesystem Hierarchy Standard http://refspecs.linuxfoundation.org/FHS_3.0/fhs/index.html which started as Linux FSSTND then was revamped with BSD cooperation. The /run filesystem was the most significant recent addition from this group, making /var/run and /var/lock symlinks (in)to /run for legacy code. Look at systems where you have access to check details.
Please also consider adding a DOCDIR=/usr/share/doc/tzdata, define the actual docs separate from MANS and COMMON, which includes Makefile (and should include version), and install the docs, MANTXTS, and HTML in DOCDIR. Some distros install some docs and some web pages, and some install none (e.g. Debian) with the data. I suppose something like this could be done. It'd help to have a survey all the places where this stuff is installed in various operating systems.
The above document prevails on most systems, although there could be minor variations in certain items on some distributions: /etc, /usr/bin, /usr/sbin, /usr/share/man, /usr/share/zoneinfo{,/posix,/right}, and /usr/share/doc/tzdata seem to be unvarying. I don't know about BSD variants except Solaris, which still straddles the SysV/BSD divide, and OSF variants IBM AIX and HP-UX, which have their own legacies, requiring some use of find ;^>
Please also consider including your ChangeLog (from .gitignore), or generating it with git log --decorate=full (currently ~1MB: could limit it to [y-1]a..), and adding it to docs for installation: this makes it easier to see what files changed. This I'm not so sure about. My personal ChangeLog file is merely a staging area for text intended to go into commit messages, and it's not intended to be distributed. And I'm not sure it's anyway a good idea to install commit messages into the runtime environment. Software archaeologists who want commit history can easily get it from GitHub or wherever.
As suggested, ChangeLogs provide more useful detail for packagers, admins, and those others on the list working with releases, of potential impacts by changes on their distributions and production operations. IANA, Oracle, and PHP were impacted by the changes in the latest release, and those details were clearer in the ChangeLog, but somewhat buried in the NEWS.

If they are working from the IANA release, they don't have access to the ChangeLog, and should not have to clone the repo, or browse the web site, to generate it, when that could be done during packaging, as suggested. Most packages nowadays come with detailed ChangeLog history, for impact analysis.

At the least, changes since the last release should be detailed, and installed into the docs directory, not the "runtime" environment.

--
Take care. Thanks, Brian Inglis, Calgary, Alberta, Canada

On Sep 29, 2016, at 5:47 PM, Paul Eggert <eggert@CS.UCLA.EDU> wrote:
On 09/29/2016 02:15 PM, Brian Inglis wrote:
...
To make life easier for distribution packagers, and admins of systems using distros, which is probably most (rather than assuming individuals installing under the generic TOPDIR=/usr/local, which now should probably be /usr/opt in most cases, and is fine for code) please consider defaulting installation to the standard "TOP" dirs: /etc, /usr/sbin, /usr/share/..., etc. (zic is normally installed in /usr/sbin), and take those variations into consideration in definitions and installation steps.
Is this standard written down anywhere?
I don't understand the suggestion. /usr/local is the standard destination in GNU kits. The reasoning is that people install directly from those kits only when they aren't using the packager's package files. And if so, the whole idea is to make sure such user installs don't conflict with the packager's bits.

Packagers know this and know how to override the default destination when they build their packages. And it doesn't make sense for TZ to attempt to do this, because different packagers have different naming tree conventions. This is one of the main things that makes Linux a mess: half of the Linuxes out there put stuff in /usr, and half put it in /opt, for no reason that I know of.

paul

On 09/28/2016 11:28 AM, Howard Hinnant wrote:
The version file does not appear in the uncompressed tzdata2016g database.
It is in the tzdb-2016g.tar.lz and tzcode2016g.tar.gz tarballs, so you can grab it from there for now. I omitted it from the data tarball because it is not needed for data. This was done as part of the patch for more-accurate release numbers circulated on Sept. 5 <http://mm.icann.org/pipermail/tz/2016-September/024054.html>. Now that you mention it, it's a good idea to put the version file into the data tarball too, as a bit of metadata info. I installed the attached patch to do that, so that in future releases the version file should be in all three tarballs. This sort of problem may help to explain why I prefer the one-tarball format. With one tarball you don't need to worry about which files go into which tarballs, and that lessens the number of distribution glitches.

On Wed, 28 Sep 2016, Paul Eggert wrote:
On 09/28/2016 11:28 AM, Howard Hinnant wrote:
The version file does not appear in the uncompressed tzdata2016g database.
It is in the tzdb-2016g.tar.lz and tzcode2016g.tar.gz tarballs, so you can grab it from there for now. I omitted it from the data tarball because it is not needed for data. This was done as part of the patch for more-accurate release numbers circulated on Sept. 5 <http://mm.icann.org/pipermail/tz/2016-September/024054.html>.
I think it makes much more sense to put it in the data file than in the code file. The data file (or its contents) is what is being distributed, and hence the versioning of that makes IMO more sense.
Now that you mention it, it's a good idea to put the version file into the data tarball too, as a bit of metadata info. I installed the attached patch to do that, so that in future releases the version file should be in all three tarballs.
+1
This sort of problem may help to explain why I prefer the one-tarball format. With one tarball you don't need to worry about which files go into which tarballs, and that lessens the number of distribution glitches.
I don't mind a one-tarball format either, as long as it is in a reasonable format (gz, not lz).

cheers,
Derick

--
https://derickrethans.nl | https://xdebug.org | https://dram.io
Like Xdebug? Consider a donation: https://xdebug.org/donate.php
twitter: @derickr and @xdebug

On 28/09/16 23:24, Derick Rethans wrote:
On Wed, 28 Sep 2016, Paul Eggert wrote:
On 09/28/2016 11:28 AM, Howard Hinnant wrote:
The version file does not appear in the uncompressed tzdata2016g database.
It is in the tzdb-2016g.tar.lz and tzcode2016g.tar.gz tarballs, so you can grab it from there for now. I omitted it from the data tarball because it is not needed for data. This was done as part of the patch for more-accurate release numbers circulated on Sept. 5 <http://mm.icann.org/pipermail/tz/2016-September/024054.html>.
I think it makes much more sense to put it in the data file than in the code file. The data file (or its contents) is what is being distributed, and hence the versioning of that makes IMO more sense.
It looks like the patch adds the version to both the tzcode and tzdata tarballs, although the NEWS item only mentions tzcode, and the patch description only mentions tzdata!

--
-=( Ian Abbott @ MEV Ltd.    E-mail: <abbotti@mev.co.uk> )=-
-=( Web: http://www.mev.co.uk/ )=-

On 29/09/16 10:17, Ian Abbott wrote:
On 28/09/16 23:24, Derick Rethans wrote:
On Wed, 28 Sep 2016, Paul Eggert wrote:
On 09/28/2016 11:28 AM, Howard Hinnant wrote:
The version file does not appear in the uncompressed tzdata2016g database.
It is in the tzdb-2016g.tar.lz and tzcode2016g.tar.gz tarballs, so you can grab it from there for now. I omitted it from the data tarball because it is not needed for data. This was done as part of the patch for more-accurate release numbers circulated on Sept. 5 <http://mm.icann.org/pipermail/tz/2016-September/024054.html>.
I think it makes much more sense to put it in the data file than in the code file. The data file (or its contents) is what is being distributed, and hence the versioning of that makes IMO more sense.
It looks like the patch adds the version to both the tzcode and tzdata tarballs,
And Paul mentioned that in the paragraph I snipped out. :)
although the NEWS item only mentions tzcode, and the patch description only mentions tzdata!
The NEWS item could do with being updated though.

--
-=( Ian Abbott @ MEV Ltd.    E-mail: <abbotti@mev.co.uk> )=-
-=( Web: http://www.mev.co.uk/ )=-
participants (18)
- Alexander Belopolsky
- Brian Inglis
- christos@zoulas.com
- Derick Rethans
- Guy Harris
- Howard Hinnant
- Ian Abbott
- Jon Skeet
- Lester Caine
- Martin Burnicki
- Paul Eggert
- Paul Ganssle
- Paul Goyette
- Paul.Koning@dell.com
- Random832
- scs@eskimo.com
- Tim Parenti
- Tom Lane