[Proj] Re: Proj4 Bug (rtodms)
glynn at gclements.plus.com
Wed Nov 8 07:15:52 EST 2006
> > So far, I have not found any way to reproduce your problem and
> > consequently find no reason to change code. However, I do have a
> > couple of questions:
> this simple program produces a "NOT OK" output:
> #include <math.h>
> #include <stdio.h>
> static int res1, res2;
> int main()
> {
>     double r = 240.0;
>     res1 = r / 60.;
>     res2 = floor(r / 60.);
>     printf("%s", (res1 == res2) ? "OK!" : "NOT OK!");
>     return 0;
> }
Then there is a bug in either your compiler or your CPU. The values
240.0, 60.0 and 4.0, and the calculation 240.0/60.0, are all exactly
representable in all common floating-point representations (including
all of the formats supported by the i386 family).
> I think that is because floor() corrects values that fall epsilon below
> the true value, if epsilon is smaller than the floating-point precision.
The value passed to floor() should be exactly 4.0; if it isn't,
there's a bug in either the compiler or the CPU.
> my compiler is the Borland C++ compiler and I work on Windows XP, but I
> don't think the C runtime libraries are different on this point ??? (I
> believe not, because of portability with Kylix) ???
> > 1. on what hardware (chip) was this problem created,
> Intel Pentium
The early Pentium chips had a bug in FP division, but I don't think it
applied to cases where the result is exactly representable.
> > 2. what compiler/library was employed and
> Borland C (CW3230mt.dll)
> > 3, level of optimization specified.
Do you mean that you explicitly disabled optimisations, or that you are
using the default settings? I wouldn't rely upon the default settings
not performing optimisations.
> > Interestingly, r before being divided by 60 does not have a fractional
> > component (exactly equal to 240.). The implication is that the floating
> > point processor produces a "ragged" result where the mantissa is not
> > rational. That is, something like 3.9999... is produced. The floor
> > operation should still have produced 3, as the unchanged code did with
> > simple conversion to integer on the memory assignment of the division
> > result.
> > I suspect something is gummed up with the handling of the extra bits
> > that are part of the mantissa in the floating point hardware.
> I don't, but the hardware in the FPU works with 10 bytes instead of 8
> for extra precision, and exact numbers don't exist in the FPU
Exact numbers most certainly do exist in an FPU. In fact, all of the
numbers which it handles are exact. In some cases, the result of an
operation may not exactly match the correct answer, but that's a
rounding issue, not a sign that the numbers themselves are inexact.
In particular, 240.0, 60.0 and 4.0 can all be exactly represented, and
there is no reason why 240.0/60.0 shouldn't produce exactly 4.0
(regardless of whether you use 32-bit, 64-bit or 80-bit precision).
OTOH, if the compiler decides to "optimise" the evaluation of r/60.0
to r * (1/60.0), you can expect to get some rounding error, as 1/60.0
isn't exactly representable. But the compiler shouldn't be doing that
if you have disabled *all* optimisations.
Glynn Clements <glynn at gclements.plus.com>