[Proj] Re: Re: Proj4 Bug (rtodms)
Glynn Clements
glynn at gclements.plus.com
Fri Nov 10 14:05:13 EST 2006
Gerald I. Evenden wrote:
> > > The numbers do have correct and exact representations in the ten-byte
> > > format, but the compiler computes r * (1/60) instead of r/60, which
> > > changes the result slightly. Note that this happens without any
> > > optimization (optimization forced off), with the "don't correct FDIV
> > > flaw" option (for old processors), and with "no quick floating points".
> >
> > In that case, the Borland compiler is entirely useless for any code
> > which makes non-trivial use of floating-point arithmetic.
> >
> > Even if this particular issue can be worked around, you are likely to
> > encounter similar issues time and time again.
> >
> > Any compiler written by someone who doesn't understand that x/y and
> > x*(1/y) aren't equivalent simply isn't worth the disk space it
> > occupies.
>
> That argument does not hold too well. I altered the original problem as
> follows:
>
> #include <math.h>
> #include <stdio.h>
>
> static int res1, res2;
>
> int main()
> {
>     double r = 240.0;
>     // double tmp = 1./60.;
>     double tmp = 0.016666666666666666666666666666666666666;
>     // res1 = r / 60.;
>     res1 = r * tmp;            // truncating conversion to int
>     res2 = floor(r / 60.);
>     printf("%s\n", (res1 == res2) ? "OK!" : "NOT OK!");
>     printf("res1: %d res2: %d\n", res1, res2);
>     // getch();
>     return 0;
> }
>
> Both cases (1./60. and 0.01666...) run as:
>
> ./a.out
> OK!
> res1: 4 res2: 4
>
> Using a temporary value for "1./60." a la Borland, the procedure ran
> "OK." in both cases.
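Incidentally, it's easy to measure how often x/60 and x*(1/60) actually
disagree. A minimal sketch, assuming SSE2 double arithmetic (the x86-64
default) so that x87 excess precision doesn't muddy the comparison:

#include <stdio.h>

int main(void)
{
    const double inv = 1.0 / 60.0;   /* reciprocal, rounded once to double */
    int mismatches = 0;

    for (int i = 1; i <= 1000000; i++) {
        double x = (double)i;
        /* The division rounds once; multiplying by the rounded
           reciprocal rounds twice, so the results can differ. */
        if (x / 60.0 != x * inv)
            mismatches++;
    }
    printf("mismatches: %d of 1000000\n", mismatches);
    return 0;
}

I'd expect a nonzero count; the two expressions merely happen to agree
for inputs like 240.0, where the accumulated error still rounds to the
same double.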
FWIW, when I try this with:
gcc (GCC) 3.3.6 (Gentoo 3.3.6, ssp-3.3.6-1.0, pie-8.7.8)
on a Pentium 4, I get "OK" if I compile with optimisation (-O1 or
higher), and "NOT OK" if I compile either without optimisation, or
with both optimisation and -ffloat-store. I get the same result
regardless of whether tmp is initialised with 1./60. or 0.01666...
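For concreteness, the invocations were along these lines (assuming the
quoted program is saved as test.c):

gcc -O0 test.c && ./a.out                  # NOT OK!
gcc -O1 test.c && ./a.out                  # OK!
gcc -O1 -ffloat-store test.c && ./a.out    # NOT OK!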
Also, if I use "long double" throughout (including adding a trailing L
to the FP literals), I get "OK" regardless of the optimisation
settings.
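That variant, sketched (floorl is C99's long double floor):

#include <math.h>
#include <stdio.h>

int main(void)
{
    long double r = 240.0L;
    long double tmp = 1.L / 60.L;   /* rounded to 80-bit extended */
    int res1 = r * tmp;             /* truncating conversion to int */
    int res2 = floorl(r / 60.L);
    printf("%s\n", (res1 == res2) ? "OK!" : "NOT OK!");
    return 0;
}

Since the extended-precision 1/60 is slightly above the true value (see
the table below), r * tmp lands at or just above 4.0, and the
truncating conversion yields 4 either way.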
None of which is surprising when you consider the value of 1/60 at
various precisions:
precision   bits   value
single       32    0.0166666675359010696411133
double       64    0.0166666666666666664353702
extended     80    0.0166666666666666666674572
At double precision (64 bits), the rounded value is less than 1/60,
while at extended precision (80 bits) it's greater than 1/60. The
result of r*tmp would similarly be less than or greater than 4.0.
As both the integer cast and floor round positive numbers downwards,
using double precision would result in a value slightly less than 4.0
being rounded down to 3, while extended precision would result in a
value slightly greater than 4.0 being rounded down to 4.
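Those values can be reproduced directly (assuming long double is the
x87 80-bit format, as it is with gcc on x86):

#include <stdio.h>

int main(void)
{
    float       s = 1.0f / 60.0f;
    double      d = 1.0  / 60.0;
    long double e = 1.0L / 60.0L;

    printf("single:   %.25f\n", (double)s);   /* float promotes to double exactly */
    printf("double:   %.25f\n", d);
    printf("extended: %.25Lf\n", e);
    return 0;
}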
The x86 FPU has 80-bit registers, but "double" values will be reduced
to 64 bits when stored in memory. Optimisation makes it more likely
that intermediate results will be kept in registers (and thus not
truncated), while -ffloat-store causes all FP values to be stored into
memory then read from there, forcibly discarding any excess precision
which the FPU may provide:
`-ffloat-store'
Do not store floating point variables in registers, and inhibit
other options that might change whether a floating point value is
taken from a register or memory.
This option prevents undesirable excess precision on machines such
as the 68000 where the floating registers (of the 68881) keep more
precision than a `double' is supposed to have. Similarly for the
x86 architecture. For most programs, the excess precision does
only good, but a few programs rely on the precise definition of
IEEE floating point. Use `-ffloat-store' for such programs, after
modifying them to store all pertinent intermediate computations
into variables.
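A similar store can be forced by hand by bouncing the product through a
volatile double; a minimal sketch (on x87 hardware without SSE math,
the two conversions below can disagree):

#include <stdio.h>

int main(void)
{
    double r = 240.0;
    double tmp = 1.0 / 60.0;   /* double-rounded 1/60, just below the true value */

    volatile double stored = r * tmp;   /* rounded to 64 bits in memory */

    printf("direct: %d\n", (int)(r * tmp));  /* may truncate an 80-bit 3.999... to 3 */
    printf("stored: %d\n", (int)stored);     /* the store typically rounds up to 4.0 */
    return 0;
}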
Of course, none of this is relevant to computing 240.0/60.0, which
isn't affected by optimisations[1] or rounding modes.
[1] Except for -funsafe-math-optimizations (also used by -ffast-math),
which specifically enables this kind of approximation. It is never
turned on by any -O setting:
`-funsafe-math-optimizations'
Allow optimizations for floating-point arithmetic that (a) assume
that arguments and results are valid and (b) may violate IEEE or
ANSI standards. When used at link-time, it may include libraries
or startup files that change the default FPU control word or other
similar optimizations.
This option should never be turned on by any `-O' option since it
can result in incorrect output for programs which depend on an
exact implementation of IEEE or ISO rules/specifications for math
functions.
The default is `-fno-unsafe-math-optimizations'.
`-ffast-math'
Sets `-fno-math-errno', `-funsafe-math-optimizations',
`-fno-trapping-math', `-ffinite-math-only' and
`-fno-signaling-nans'.
This option causes the preprocessor macro `__FAST_MATH__' to be
defined.
This option should never be turned on by any `-O' option since it
can result in incorrect output for programs which depend on an
exact implementation of IEEE or ISO rules/specifications for math
functions.
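The __FAST_MATH__ macro gives code a way to detect those semantics at
compile time; a trivial sketch:

#include <stdio.h>

int main(void)
{
#ifdef __FAST_MATH__
    puts("built with -ffast-math; IEEE rules may be violated");
#else
    puts("built with IEEE-conformant math settings");
#endif
    return 0;
}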
--
Glynn Clements <glynn at gclements.plus.com>