   Date: Sun, 11 Oct 1998 12:22:03 +0100
   From: Markus Kuhn <Markus.Kuhn@cl.cam.ac.uk>

	(double) ((t1.sec - t2.sec) + (t1.nsec - t2.nsec) / 1.0e9)

   where t?.sec is at least a 64-bit int and t?.nsec is at least a
   32-bit int.

   Can you really construct input values that will lead to your claimed
   double rounding error?

Sure.  Let's use the Sparc IEEE implementation, which straightforwardly
maps `double' to IEEE 64-bit double, and let's assume round-to-even,
which is the IEEE default.  Then here are example input values:

	t1.sec = 9007199254740993 (i.e. 2**53 + 1)
	t1.nsec = 1000000000 (i.e. 10**9)
	t2.sec = t2.nsec = 0

The exact answer is 9007199254740994 (i.e. 2**53 + 2), a number that
is exactly representable as an IEEE double.  But the expression above
yields 9007199254740992 (i.e. 2**53) -- it is off by 2.  (The integer
difference 2**53 + 1 lies exactly halfway between two representable
doubles, so the conversion rounds to even, down to 2**53; adding the
1.0 contributed by the nanoseconds then rounds to even again, yielding
2**53 once more.  A short test program reproducing this appears at the
end of this message.)

   There is a straightforward way to represent the difference as a
   96-bit struct xtime value.  The code should be completely obvious,

The code _should_ be obvious, but it's very likely that people will
get it wrong in practice.  The bugs in your example code are minor in
comparison to some of the stinkers I've seen in real life.  Let's use
a less error-prone approach; a sketch of what I have in mind also
follows at the end of this message.

   I consider it unacceptable that timestamps become less precise the
   farther we get from the epoch,

   I assume that most applications are perfectly happy with floating
   point values used in their own calculations

Sorry, I don't follow you here.  If it's unacceptable for timestamps
to become less precise, why is it acceptable for time differences to
become less precise?  After all, a timestamp is merely a time
difference from an epoch.  And people use time differences to compute
timestamps all the time, so errors in time differences will cause
errors in timestamps.
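
For the record, here is the promised test program, a minimal sketch of
the failure case above.  It assumes an IEEE 754 64-bit `double' with
round-to-even arithmetic (as on Sparc; x87 extended precision may mask
the error), a `long long' of at least 64 bits, and a struct xtime
layout with sec/nsec fields as in the proposal:

	#include <stdio.h>

	/* Field widths assumed from the quoted proposal.  */
	struct xtime { long long sec; int nsec; };

	int main(void)
	{
	    struct xtime t1, t2;
	    double diff;

	    t1.sec = 9007199254740993LL;  /* 2**53 + 1 */
	    t1.nsec = 1000000000;         /* 10**9, i.e. one more second */
	    t2.sec = 0;
	    t2.nsec = 0;

	    /* The difference expression quoted above.
	       The exact answer is 2**53 + 2.  */
	    diff = (double) ((t1.sec - t2.sec)
	                     + (t1.nsec - t2.nsec) / 1.0e9);

	    printf("computed %.0f\n", diff);  /* 9007199254740992 */
	    printf("expected 9007199254740994\n");
	    return 0;
	}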
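
And here is the sort of exact, integer-only difference code I would
rather see people use (or better, have the library provide).  This is
only a sketch: the name xtime_diff is for illustration, it reuses the
struct xtime declaration from the program above, and it assumes both
arguments are normalized, i.e. 0 <= nsec < 10**9:

	/* Exact 96-bit difference; no floating point, so no rounding.
	   Assumes normalized arguments (0 <= nsec < 1000000000).  */
	struct xtime xtime_diff(struct xtime t1, struct xtime t2)
	{
	    struct xtime d;
	    d.sec = t1.sec - t2.sec;
	    d.nsec = t1.nsec - t2.nsec;
	    if (d.nsec < 0) {       /* borrow a second from the sec field */
	        d.nsec += 1000000000;
	        d.sec -= 1;
	    }
	    return d;
	}

The borrow is exactly the step that people forget in practice, which
is why I'd rather not leave this code to every application writer.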