[Proj] Problems with /Op option used to compile proj in Microsoft Visual Studio

Mikael Rittri Mikael.Rittri at carmenta.com
Thu Mar 15 04:59:34 EST 2012


Here is an interesting article:

David Monniaux.
The pitfalls of verifying floating-point computations.
ACM Transactions on Programming Languages and Systems 30, 3 (2008) 12.
http://hal.archives-ouvertes.fr/hal-00128124

Best regards,

Mikael Rittri
Carmenta
Sweden
http://www.carmenta.com

________________________________
From: proj-bounces at lists.maptools.org [mailto:proj-bounces at lists.maptools.org] On Behalf Of Calogero Mauceri
Sent: Thursday, March 15, 2012 9:06 AM
To: PROJ.4 and general Projections Discussions
Subject: Re: [Proj] Problems with /Op option used to compile proj in Microsoft Visual Studio

Hi Janne,



Disabling that option, the result returned by the version of proj
compiled with Microsoft Visual C++ 2003 is more consistent.

Is there any reason why the /Op option is used to compile proj?

I think the /Op option is only available in the 2003 version of the compiler, and what it does is the following (text from MSDN):

"This option improves the consistency of floating-point tests for equality and inequality by disabling optimizations that could change the precision of floating-point calculations.



By default, the compiler uses the coprocessor's 80-bit registers to hold the intermediate results of floating-point calculations. This increases program speed and decreases program size. However, because the calculation involves floating-point data types that are represented in memory by less than 80 bits, carrying the extra bits of precision (80 bits minus the number of bits in a smaller floating-point type) through a lengthy calculation can produce inconsistent results."



http://msdn.microsoft.com/en-us/library/aa984742(v=vs.71).aspx
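
To make the quoted behaviour concrete, here is a minimal sketch (my own illustration, not taken from MSDN or from the proj sources) of the kind of equality test the option is meant to stabilise; whether the inequality actually shows up depends on the compiler, the optimization level and the code generation target:

#include <stdio.h>

int main(void)
{
    float third = 1.0f / 3.0f;

    /* The product third*third needs about 48 significant bits, far more
       than the 24 bits a float can hold, so the value kept in an x87
       register differs from the value rounded to float when stored. */
    float stored = third * third;

    /* Without /Op, an optimizing compiler may evaluate the right-hand
       side in the register and compare it against the rounded value
       reloaded from memory, so this test can come out false.  /Op is
       meant to keep such equality tests consistent. */
    if (stored == third * third)
        printf("equal\n");
    else
        printf("NOT equal\n");

    return 0;
}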





So it seems clear that MS compilers use the full 80-bit FPU registers of PC processors by default in most cases, and the reason for using that switch in the first place must have been the idea of making the results consistent with other ANSI compiler environments. I am not sure if this answers the question, though.



regards: Janne.



Ok, so the /Op option seems to be used to produce consistent and repeatable results.
The problem is that the cost of that repeatability is that the intermediate results in computations *lose accuracy*.
That explains the problem proj (compiled with Visual Studio 2003) has with the conversion I described earlier.


It is well explained here (text from MSDN):

http://msdn.microsoft.com/en-us/library/aa289157%28v=vs.71%29.aspx

"Many C++ compilers offer a "consistency" floating-point model, (through a /Op or /fltconsistency switch) which enables a developer to create programs compliant with strict floating-point semantics. When engaged, this model prevents the compiler from using most optimizations on floating-point computations [...]. The consistency model, however, has a dark-side. In order to return predictable results on different FPU architectures, nearly all implementations of /Op round intermediate expressions to the user specified precision; for example, consider the following expression:



float a, b, c, d, e;
. . .
a = b*c + d*e;

In order to produce consistent and repeatable results under /Op, this expression gets evaluated as if it were implemented as follows:

float x = b*c;
float y = d*e;
a = x+y;



The final result now suffers from single-precision rounding errors at each step in evaluating the expression. Although this interpretation doesn't strictly break any C++ semantics rules, it's almost never the best way to evaluate floating-point expressions. It is generally more desirable to compute the intermediate results in as high a precision as is practical. For instance, it would be better to compute the expression a=b*c+d*e in a higher precision, as in:

double x = b*c;
double y = d*e;
double z = x+y;
a = (float)z;

or better yet

long double x = b*c;
long double y = d*e;
long double z = x+y;
a = (float)z;


When computing the intermediate results in a higher precision, the final result is *significantly more accurate*. Ironically, by adopting a consistency model, the likelihood of error is increased precisely when the user is trying to reduce error by disabling unsafe optimizations. Thus the consistency model can seriously reduce efficiency while simultaneously providing no guarantee of increased accuracy. To serious numerical programmers, this doesn't seem like a very good tradeoff and is the primary reason that the model is not generally well received.
"


