GMP floating point numbers are stored in objects of type `mpf_t`, and functions operating on them have an `mpf_` prefix.

The mantissa of each float has a user-selectable precision, in practice limited only by available memory. Each variable has its own precision, which can be increased or decreased at any time. This selectable precision is a minimum value; GMP rounds it up to a whole limb.

The accuracy of a calculation is determined by the precision previously set for the destination variable and by the numeric values of the input variables. The precisions set for the input variables do not affect the calculation (except indirectly, in that their values may have been rounded when they were assigned).

The exponent of each float has fixed precision, one machine word on most systems. In the current implementation the exponent is a count of limbs, so for example on a 32-bit system this means a range of roughly *2^-68719476768* to *2^68719476736*, and on a 64-bit system the range is much greater. Note however that `mpf_get_str` can only return an exponent which fits an `mp_exp_t`, and currently `mpf_set_str` doesn't accept exponents bigger than a `long`.

Each variable keeps track of the mantissa data actually in use. This means that if a float is exactly represented in only a few bits then only those bits will be used in a calculation, even if the variable’s selected precision is high. This is a performance optimization; it does not affect the numeric results.

Internally, GMP sometimes calculates with higher precision than that of the destination variable in order to limit errors. Final results are always truncated to the destination variable’s precision.

The mantissa is stored in binary. One consequence of this is that decimal fractions like *0.1* cannot be represented exactly. The same is true of plain IEEE `double` floats. This makes both highly unsuitable for calculations involving money or other values that should be exact decimal fractions. (Suitably scaled integers, or perhaps rationals, are better choices.)

The `mpf` functions and variables have no special notion of infinity or not-a-number, and applications must take care not to overflow the exponent or results will be unpredictable.

Note that the `mpf` functions are *not* intended as a smooth extension to IEEE P754 arithmetic. In particular, results obtained on one computer often differ from those obtained on a computer with a different word size.

New projects should consider using the GMP extension library MPFR (https://www.mpfr.org/) instead. MPFR provides well-defined precision and accurate rounding, and thereby naturally extends IEEE P754.