Likely GMP bug

Niels Möller nisse at
Wed Jun 13 20:48:19 UTC 2018

"Marco Bodrato" <bodrato at> writes:

> You are saying that my patch
> was not an emergency workaround, but a cleverly subtle optimization?
> Well, I have to admit that I was not aware... but thanks :-)

Now I've done some benchmarking (on a broadwell x86_64 with gcc-6.3), using 

  ./speed -s 64 mpn_gcd_1 -c   # The -s argument is a bitsize, right?

With

  ulimb >>= (c + 1);

in the loop, I get 490 cycles. With

  ulimb = (ulimb >> 1) >> c;

this goes down to 448 cycles, a saving of 42 cycles, or roughly a 10%
speedup. I don't quite remember the analysis of binary gcd, but I'd
guess the average shift count is 2. With two 64-bit operands that
means an average of about 64 iterations, so the saving is about 0.66
cycles per iteration.


Niels Möller. PGP-encrypted email is preferred. Keyid 368C6677.
Internet email is subject to wholesale government surveillance.

More information about the gmp-bugs mailing list