GMP 6, the incompatible GMP?

Torbjorn Granlund tg at
Mon Jan 7 23:39:57 CET 2013

We have discussed, perhaps mostly off-list, fixing several nuisances in
the library; those fixes will break binary compatibility.

There is actually a terse page about it (linked from the developer's
pages).

I would like to lift the precision limit (for mpf) and size limit (for
mpz, mpq) by changing how mpz_t (etc) works.

We could simply change the int fields to size_t, but that would make the
structure a lot larger.  Programs that need to juggle lots of mostly
small GMP numbers would suffer.

The idea is instead to pack the _mp_size and _mp_alloc fields into a
single 64-bit word of bitfields, giving _mp_size about 48 bits and
_mp_alloc the remaining 16.  The alloc field would lose granularity,
but should still encode small sizes exactly.

I haven't worked out the details, but I think this can be done without
overhead for small allocations (say, up to 255 limbs), and for larger
sizes the overhead will be negligible compared to the time used for the
computation.  With clever coding, when a small result is about to be
written to an operand whose alloc field was previously coded for a
large size, a plain comparison like needed_alloc < ALLOC(r) would still
return true.  False positives would occur only for larger needed
allocations: a reallocation would be assumed to be needed, but decoding
the _mp_alloc field would then conclude that a new allocation wasn't
really needed.

Why should we consider this?  Well, today we cannot handle operands >
2^37 bits, which is hardly "arithmetic without limitations" on today's
computers.  And the new small primes multiplication code (the latest
version at will boost huge
multiplication enough to make operands of 2^40 bits feasible.

(It will be challenging to test GMP at greater sizes, since one operand
of 2^37 bits is 16 GiB of RAM.)


More information about the gmp-devel mailing list