"Clive D.W. Feather" <clive@demon.net> writes:
> My inclination is to say "don't do that".
No can do. POSIX requires the od command to "do that". Here's the spec: <http://www.opengroup.org/onlinepubs/009695399/utilities/od.html>. The only way to support (say) "od -t xL" is to use a %lx format, selected at run time.
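To make the point concrete, here is a minimal sketch of that run-time selection. The names are hypothetical, not the real od.c; the point is only that the format string for the 'L' size letter cannot be chosen until you know you are printing a long:

```c
/* Sketch only (hypothetical names, not the real od.c): map an
   "od -t x" size letter to a printf conversion chosen at run time.
   The letter 'L' demands the size of long, so a %lx-family format
   is the only portable choice there.  */
const char *
hex_format_for (char size_letter)
{
  switch (size_letter)
    {
    case 'C': return "%hhx"; /* size of char  */
    case 'S': return "%hx";  /* size of short */
    case 'I': return "%x";   /* size of int   */
    case 'L': return "%lx";  /* size of long  */
    }
  return "%x";               /* no size letter: od defaults to int */
}
```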
> This can be tested at compile time:
(sarcasm on) Yes, we can go through millions of lines of code, looking for dozens or hundreds of places where programmers have made the very natural assumption that sizeof(int) <= sizeof(long), and rewrite them all to be portable to hosts where this assumption isn't true. No automated tool can do this today -- but sure, we can check it all by hand. This would take months -- years maybe -- but we've got plenty of spare time and our people love to do this sort of thing. (whew! sarcasm off. hope you didn't mind...)

Seriously: it's not going to happen. We have better things to do with our limited resources. We have real bugs and real security holes to fix. That is what I was doing with od.c when your email arrived; see <http://lists.gnu.org/archive/html/bug-coreutils/2004-08/msg00026.html> for the result of my efforts. "Bugs" that are merely inventions of the standardization committee, and aren't a problem on any real host, will not get "fixed".
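For the record, the compile-time test itself is the easy part; the classic C89-era idiom is a typedef whose array size turns into a constant-expression error when the assumption fails:

```c
/* Classic C89-era compile-time check: the array size evaluates to -1,
   and the translation unit refuses to compile, on any host where
   sizeof (int) <= sizeof (long) does not hold.  */
typedef char verify_int_fits_long[sizeof (int) <= sizeof (long) ? 1 : -1];
```

The hard part is not writing this one line; it's finding and annotating every place in millions of lines of code that silently relies on the assumption.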
> I'm still dubious about "a lot".
What can I say? I gave you one example, from code I was working on the minute I received your email (no lie!). As it happens this code is quite widely used, and widely portable, and it has safely made the sizeof(int)<=sizeof(long) assumption since before C89 came out. I could give you other examples but I'm afraid it sounds like your mind was made up before I started.
> What's difftime.c doing that needs that assumption?
difftime's problem is slightly different. It's trying to subtract two POSIX time_t values and return a floating-point answer that is exactly correct, when possible. It can't simply subtract the time_t values, because they are typically integers and we might have integer overflow. And it can't simply convert to floating point and subtract the results, because that will lose information in some cases (e.g., if time_t is 64 bits and "double" is IEEE 64-bit double).

So it uses a heuristic, based on the size of time_t, to decide what to do. This heuristic is that if sizeof (time_t) < sizeof (double), then time_t can be converted to double without losing information; and similarly for long double. This heuristic is not guaranteed by C but is true on all platforms that we know of. (If you know of any counterexamples, please let us know.)

The heuristic is related to the C89 guarantee that sizeof bears a sane relationship to range, but it's not identical to that guarantee. As far as I know, there is no portable way in C89 or C99 to implement POSIX difftime; the heuristic is the best we have come up with so far. There is more explanation in the difftime source code.
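A stripped-down sketch of that heuristic, simplified from what the real difftime.c does (the actual code handles more cases and is the authority here):

```c
#include <stdint.h>
#include <time.h>

/* Sketch of the size-based heuristic described above: a strictly
   smaller object size is taken to mean the value range also fits,
   so conversion to the wider floating type is exact.  Simplified;
   not the real difftime.c.  */
double
difftime_sketch (time_t time1, time_t time0)
{
  if (time1 < time0)
    return -difftime_sketch (time0, time1);

  if (sizeof (time_t) < sizeof (double))
    return (double) time1 - (double) time0;
  if (sizeof (time_t) < sizeof (long double))
    return (long double) time1 - (long double) time0;

  /* Fallback: subtract in unsigned arithmetic, where wraparound is
     well defined, so a huge span cannot overflow; the final
     conversion may round, which is the best we can do here.  */
  return (double) ((uintmax_t) time1 - (uintmax_t) time0);
}
```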