[OSRS-PROJ] Significant digits in parameters
Clifford J Mugnier
cjmce at lsu.edu
Sun Jul 28 13:01:22 EDT 2002
Gentlemen:
Nanometer precision is meaningless if the transformation does not
produce the published and legally legislated results of a nation's Grid
system. For that reason, every country has an explicit formula published
as a specific truncation of an infinite series. That, and only that,
transformation is correct. Adding additional terms to allow
transformations to be "precise" to greater distances from the projection
origin is WRONG!
The only time "cleverness" is allowed in transformations in
association with National Legal Coordinate Systems is for the inverse case
where one goes from Grid to Geographic. In that case, the published
formulae for the inverse case may be inadequate to allow one to obtain the
original result of the direct case. In those cases, and only in those
cases, one may use additional terms (7th, 8th, ... etc. derivatives), or an
iterative procedure to allow perfect "return" to the original coordinate
when using the specific series truncation mandated by that nation for the
direct transform.
There are specific truncations for Transverse Mercator that include
Gauss-Conform, Gauss-Schreiber, Gauss-Boaga, Gauss-Li, Gauss-Krüger, etc.
For Oblique Mercator there are Hotine, Laborde, and Rosenmund. For Lambert
there are several, and several more for Oblique Stereographic, etc., etc.
This is not bean-counting, it's Applied Geodesy. Applied Geodesy is what
countries use in their legal coordinate systems for defining their
international boundaries, their private property boundaries, their
national-provincial boundaries, etc. It is used everywhere, it's not just
theory, and it's damned difficult to research. But it's there, it exists,
and it ain't theory.
Single precision is rarely useful except on 64-bit machines; a machine's
epsilon or internal precision is nice to know, but you have to match the
legal system for "it" to be correct. Arguing about the number of digits
the semi-major axis is published to, and using that as the justification
for computational precision and significant digits is specious in this
context. When your young son in uniform is on a frontier and staring down
the barrel of a cannon, the defense of your country's border is a matter of
legalities discussed by diplomats and defined by specific truncations of
series. World War I was a prime example of such ignorance by the U.S. when
the U.S. Coast & Geodetic Survey incorrectly added terms to the formulae for
the Lambert Conic in the Nord de Guerre Zone of France and Belgium. The
current standard is computational precision to a tenth of a millimeter for
the direct transform AS PUBLISHED BY A SOVERIGN NATION, and the inverse
transform must "return" to the original geographic coordinates.
My two cents.
Regards,
Clifford J. Mugnier (cjmce at LSU.edu)
Chief of Geodesy
CENTER FOR GEOINFORMATICS
Department of Civil Engineering
LOUISIANA STATE UNIVERSITY
Baton Rouge, LA 70803
Voice and Facsimile: (225) 578-8536
Pager: 1-(888) 365-5180
================================
http://www.ASPRS.org/resources.html
http://www.ce.LSU.edu/~mugnier/
================================
Craig Bruce wrote:
> Gerald Evenden <gerald.evenden at verizon.net> wrote:
>
> > Using a larger number of digits in projection parameters will, for a
> > variety of reasons, not increase the accuracy of the results. For
> > example, the major axis is only spec'd to a 7-digit integer, thus the
> > accuracy of the x-y values is also only to the nearest meter regardless
> > of the significant digits of the other parameters.
>
> I realize that what you are saying is geomatically correct, but I am
> talking about software issues. It's advantageous if different software,
> or even the same software, produces results that are either the same or
> very close to the same for the same input.
I presume you mean that prog A produces the same results as prog B
to 15+ digits. I am sorry, but I couldn't disagree with you more, for
several reasons:
1) It would make an observer of the tabulation of results assume that
these values were accurate to the printed significance.
2) Different programs that are perfectly usable in the case of, say, 1 mm
precision will create chaos when compared with another program accurate to
1 nm precision. Which program is correct? Which program do I throw out?
Overprinting the precision is just bad practice.
> For this reason, the best plan is to
> represent all parameters and calculations using full 'double' precision,
> even if it contains many more digits than are geomatically significant.
> If the rest of the system computes values using full double precision,
> why would you want to unnecessarily remove consistency with other systems,
> especially when the solution is as simple as changing the "%.9f" to
> "%.16g" when auto-generating the tables?
I am not sure what is going on in the following paragraph, but I must
say that the discussed method has a serious defect if it depends upon
knowing the precision of the input yet has no way of accepting that
precision as a controlling input, relying instead merely upon the
precision of the double type.
> There are many places where round-off error control is applied and
> full precision allows programs to produce the same results (e.g., in
> error-controlling the translation of projection coordinates to integer
> pixel coordinates) and to make optimizations when computed coordinates
> align properly. A lot of software doing error control may not even know
> that it is dealing with ground coordinates, so it may assume that there
> are 12-15 significant digits (or "consistent digits" in our case).
----------------------------------------
PROJ.4 Discussion List
See http://www.remotesensing.org/proj for subscription, unsubscription
and other information.
----------------------------------------