On Wed, Jul 09, 2014 at 08:16:12AM -0700, Paul Eggert <eggert@cs.ucla.edu> wrote:
> repeating that discussion now. There's one thing new, though: we now have had significant practical experience. The earlier set of changes along these lines was published in release 2013e (2013-09-19), and it hasn't caused significant disruption in the
Something else is new, too: in the past, changes didn't have to cause "significant disruption in the field" to be rejected. For example, a lot of care was taken to keep the tzcode portable even to weird and niche systems that, in many cases, probably don't even exist. You could doubtless rip out a lot of code without causing "significant disruption in the field" (if it works on gnu/linux, bsd and solaris, then any failure probably isn't significant...). The question is, why have these standards changed so drastically w.r.t. tzdata?
> field. In practice it seems that end users don't much care about things like the time zone of Guadeloupe in 1899 -- which is probably a good thing, since the pre-2013e database was wrong anyway.
I think almost everybody agrees with that.
> other reasons anyway. Besides, the tail should not be wagging the dog here: regression testing should be our servant, not our master.
You are shooting down your own strawmen here. In previous discussions, most people seemed to be concerned about the stability of timestamps, not about accuracy, correctness or regression testing.

--
                The choice of a       Deliantra, the free code+content MORPG
                -----==-     _GNU_    http://www.deliantra.net
                ----==-- _       generation
                ---==---(_)__  __ ____  __      Marc Lehmann
                --==---/ / _ \/ // /\ \/ /      schmorp@schmorp.de
                -=====/_/_//_/\_,_/ /_/\_\