Simple question about MPFR vs. GMP
linas at austin.ibm.com
Sat Apr 8 00:47:05 CEST 2006
On Fri, Apr 07, 2006 at 02:30:22PM -0300, Décio Luiz Gazzoni Filho wrote:
> On Apr 7, 2006, at 1:52 PM, Linas Vepstas wrote:
> >I have a rather naive question about MPFR vs. GMP that
> >I could not resolve by reviewing the MPFR web page. That
> >question is this:
> > Exactly HOW is MPFR better than GMP? Why, as a programmer,
> > should I care about exact rounding?
> >If I'm writing a program to compute some function to 400
> >decimal places, I have to do a fair bit of logic and thinking
> >to make sure I actually obtained 400 digits of accuracy. A bit
> >of rounding error here and there doesn't substantially change
> >my error estimates ... and so, why should I care about "exact rounding"?
> The answer to that is very complex. I suggest you grab a book on
> numerical analysis to learn how certain simple measures, such as
> guard bits/digits and exact rounding, help improve the accuracy of
> computations by a lot.
:-/ Not sure I like that answer. I've got a PhD in physics, much
of which revolved around numerical computations. Since then, I've
worked as a professional programmer, and have written various
numerical codes. I've even worked with people who design FPUs (!)
Recently, I used GMP to compute certain number-theory sums
to 3000 decimal digits. And, after all of that experience, your
reply is "It's impossible to explain, go read a book"?
If one is promoting an idea (or a piece of software), one must
be able to explain the utility of that idea in terms that the
potential users/customers can understand; otherwise it will be
ignored.
> Also consider the fact that, since rounding is not exactly defined,
> results may vary from one version of GMP or architecture to another
> (perhaps even from one compiler to another). This is how floating
> point used to work in the early days, and I heard people weren't
> very pleased with the state of affairs then.
Yes, well, I actually lived through those "early days", and it
wasn't quite like that. Floating-point doubles had 53 bits
of accuracy. Precious few algorithms are strongly convergent
(negative Lyapunov exponent), and "good" algorithms were
"neutral" operations (Lyapunov exponent zero); these tended to
mangle the bottom bit. If you're summing N numbers (a "neutral"
operation), you'd expect to lose accuracy, best case, as
sqrt(N) if the rounding of the last bit was fair. Which it often
wasn't.
So between this, and glitches in the bottom bit(s) in core math
libs, even quite reasonable math simulations would lose half
of those 53 bits of accuracy, or more.
By standardizing on IEEE math, some of the "dumb" or "fixable"
problems got fixed, but this doesn't mean that one can blithely
ignore sources of inaccuracy in one's computations. IEEE rounding
is not a magic cure-all.
So ... does MPFR implement "guard bits"? If so, the web page
didn't seem to mention it.
If I wanted to improve my algorithm by making use of "exact rounding"
(http://en.wikipedia.org/wiki/Exact_rounding ?? maybe
http://www.google.com/search?q=Exact+rounding ?? well,
the web material seems pretty thin...), how would I do this?
What would I do differently than if I were using GMP straight-up?
I'm not looking for a complete answer; I'm just looking for the
few paragraphs that would be persuasive enough for me to invest
the energy to learn more.