gmp 4.1 documentation improvement suggestions

Serge Winitzki
Mon, 9 Dec 2002 02:56:56 -0500 (EST)

> See also

I knew about that site, but I could find very little documentation there.

> It still doesn't track anything, but it does give guarantees about
> rounding of results.

It's surprising that mpfr doesn't track precision. Its documentation
seems to say that the main difference between mpfr_* and mpf_* is that
mpfr knows the "actual exact number of precise bits" in any number (this
is the wording in the manual). That seems to suggest that, e.g., in
"10.010-10.000=0.0100" the result will know that it has only 2 precise
bits. At least that was my impression after reading the mpfr manual and
after glancing at the code for adding two numbers in mpfr_add. Maybe I
was mistaken.

> > I just finished drafting my
> > own simple precision tracking implementation for our Yacas project,
> > without looking at mpfr or at another implementation, and it would be
> > interesting to understand how it works in mpfr. Are there any documents
> > or papers that describe it?
> You might be heading towards interval arithmetic.  The mpfr web page
> has some links on that, and some preliminary code using mpfr to do it.

I know a little bit about interval arithmetic, and it's not what I
wanted to do because I think it's too slow for general applications.
My idea was to create a "poor man's interval arithmetic" that costs only
a small constant time overhead per operation and yet helps track
roundoff errors and detects dangerous cancellations.

All real numbers x are represented by pairs {x,p} where p is an integer,
the "number of exact bits in x".

The formal meaning of p in {x,p} is that the absolute roundoff error of
x is less than Abs(x)*2^(-p) if x is nonzero, and less than 2^(-p) if
x=0 (a "floating zero").
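The error bound implied by a pair {x,p} can be written down in a few lines. This is a minimal sketch of my convention using Python's exact Fraction type; the function name max_abs_error is mine, not anything from Yacas or GMP:

```python
from fractions import Fraction

def max_abs_error(x, p):
    """Error bound implied by {x, p}: |x| * 2^(-p) for nonzero x,
    and 2^(-p) for a floating zero {0, p}."""
    scale = abs(x) if x != 0 else Fraction(1)
    return scale * Fraction(1, 2) ** p

# {1, 2} is 1 known to 2 bits: absolute error below 1/4
assert max_abs_error(Fraction(1), 2) == Fraction(1, 4)
# a floating zero {0, 3} bounds the true value by 1/8
assert max_abs_error(Fraction(0), 3) == Fraction(1, 8)
```

So p is always a relative precision for nonzero values, and silently becomes an absolute one at zero, which is what makes the "floating zero" meaningful.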

Define the "bit count" of a nonzero real number x, n=B(x), as the
smallest integer n such that Abs(x)<2^n. (For a positive integer x this
is just the GMP function mpz_sizeinbase(x,2).) Then all arithmetic
operations can be defined on pairs {x1,p1} and {x2,p2}, and tracking
the p's requires only a few simple integer operations on the bit counts
of x1, x2, and sometimes of x1+x2. For instance, if we write "1.0"
(binary) for 1 +- 1/4, then the subtraction "1.001-1.0" (binary) gives
a floating zero with absolute error 0.01 (binary); in decimal this is
represented as {1.125,4}-{1,2}={0,2}.
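To make the scheme concrete, here is my own rough sketch of a bit count and a tracked subtraction, not the Yacas implementation. For clarity it works with exact Fractions and log2 rather than the pure integer bit-count manipulations described above, and it simply adds the two operand error bounds; the names bit_count and tracked_sub are hypothetical:

```python
from fractions import Fraction
import math

def bit_count(x):
    """B(x) for nonzero x: smallest integer n with |x| < 2^n.
    For a positive integer this agrees with GMP's mpz_sizeinbase(x, 2)."""
    ax = abs(Fraction(x))
    n = 0
    while ax >= Fraction(2) ** n:
        n += 1
    while ax < Fraction(2) ** (n - 1):
        n -= 1
    return n

def tracked_sub(a, b):
    """Subtract {x1,p1} - {x2,p2}, propagating a conservative error bound.
    Error bounds add under subtraction; if the result is no larger than
    the accumulated error, we return a "floating zero"."""
    (x1, p1), (x2, p2) = a, b
    x = x1 - x2
    err = lambda v, p: (abs(v) if v else Fraction(1)) * Fraction(1, 2) ** p
    d = err(x1, p1) + err(x2, p2)
    if abs(x) <= d:
        # cancellation: floating zero {0, p} with absolute error < 2^-p
        p = math.floor(-math.log2(d))
        if Fraction(1, 2) ** p <= d:   # guard the power-of-two boundary
            p -= 1
        return (Fraction(0), max(0, p))
    # nonzero result: want error < |x| * 2^-p, so p = floor(log2(|x|/d))
    p = math.floor(math.log2(abs(x) / d))
    if abs(x) * Fraction(1, 2) ** p <= d:
        p -= 1
    return (x, p)

# the example above: {1.125, 4} - {1, 2} cancels to a floating zero
print(tracked_sub((Fraction(9, 8), 4), (Fraction(1), 2)))
# prints (Fraction(0, 1), 1)
```

Because this variant adds both operand errors before rounding the exponent, it returns {0,1} for the example above, one bit more pessimistic than the {0,2} quoted there; that illustrates the kind of overestimate mentioned below.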

I have written a short document describing the required computations (it
will be included in the next release of Yacas) and a first
implementation that seems to work reasonably well so far (although it
overestimates the error in some cases). Even though the procedure tries
not to underestimate the round-off error, it does not always give a
rigorous upper bound on the error; but in most cases it does detect
cancellations. The "floating zero" that carries a precision estimate
seems helpful too (at least you know you shouldn't be dividing by it).