mpf: which bug should we correct? (doc or code)

bodrato at mail.dm.unipi.it bodrato at mail.dm.unipi.it
Sun Jun 1 06:20:19 UTC 2014


Dear developers,

Playing with https://gmplib.org/devel/lcov/ , I realised that many
branches of the mpf_ui_sub code were unused. I then tried to improve the
test suite...

Now the problem is: what do we want (and should we test) from mpf functions?

The manual [ https://gmplib.org/manual/Floating_002dpoint-Functions.html ]
reads: "Each function is defined to calculate with “infinite
precision” followed by a truncation to the destination precision".

This claim means that we should take care of any carry or borrow coming
from the "infinite" tail of numbers. Is this desirable?

On the other hand, the current test suite relies on checks like the following:

  mpf_reldiff (rerr, got, want_reference);
  if (mpf_cmp (rerr, limit_rerr) > 0) {
    printf ("ERROR [...]");
    abort ();
  }

I.e. the relative difference is checked against an error limit; we do not
check that the result is exactly the truncated-infinite-precision value.

Now, I extended the test suite, and this forced me to replace the ui_sub
function... but extending it further would reveal other behaviours of the
mpf_add/sub functions that are not coherent with the claim in the manual.

Assume the precision is one hex digit, and that two hex digits are
typically computed. Which result do we expect from the following
subtraction: 2.000...001 - 1.000...002 ?

The "infinite precision" computation gives 0.FFF...FFF, which should be
truncated to 0.FF , but the current code truncates the operands _before_
actually computing the difference, so that the result given by the current
library is 1.0

On the other hand, if we add 0.FFF...FFF to 0.000...001, the current
code for mpf_add truncates, giving 0.FF as a result (it should be 1.0); but
if we subtract 1.0 - 0.FFF...FFF, the current code correctly gives
0.000...001

Another example: which result do we expect from 2.0 - \epsilon (with
arbitrarily small \epsilon)? Truncated-infinite-precision would give 1.F ,
but then subtracting \epsilon again from the result would give 1.E, then
1.D, and so on... Is this the result we want from the library? (The current
behaviour depends on how much smaller than the precision \epsilon is...)


That's why I ask here.
Should we correct the code (and the test suite!) to adhere to the
truncated-infinite-precision claim?
Or do we prefer to change the documentation to match the
limited-relative-error behaviour that the current testing programs
check?

Should we correct the bug in the code or in the docs?

If we decide for the relative-error approach: is it acceptable that two
functions (e.g. sub_ui and sub) give different results (within the
tolerated relative error, of course)?

Regards,
m

-- 
http://bodrato.it/papers/





More information about the gmp-devel mailing list