# Three technical remarks about GMP functions (Corrective Message)

Décio Luiz Gazzoni Filho decio at decpp.net
Tue Jun 7 15:32:40 CEST 2005

On Jun 7, 2005, at 4:09 AM, NUNES wrote:

> Please, this message replaces the preceding one just sent.  Excuse me!
>
> Thanks all for your prompt answers. But I think I must
> complement my questions.
>
> At http://mathworld.wolfram.com/StrongPseudoprime.html one can read
> that
>
> "The strong k-pseudoprime test for k = 2, 3, 5 CORRECTLY IDENTIFIES
> ALL PRIMES below 2.5*10^10 with only 13 exceptions, and if 7 is
> added, then the only exception less than 2.5*10^10 is 3215031751.
> Jaeschke (1993) showed that there are only 101 strong pseudoprimes
> for the bases 2, 3, and 5 less than 10^12, nine if 7 is added, and
>
> Besides, from Riesel's "Prime Numbers and Computer Methods for
> Factorization" (2nd edition, 1994, p. 91), one can also read:
>
> "As a matter of fact, the smallest strong pseudoprime to all the
> bases 2,3,5,7, and 11 simultaneously is the number 2152302898747,
> and THUS 5 STRONG PRIMALITY TESTS TO THESE BASES WILL PROVE A
> NUMBER BELOW THIS LIMIT PRIME (if it is). If we instead use the
> seven bases 2,3,5,7,11,13,and 17, EVERY NUMBER BELOW
> 341550071728321 CAN BE IDENTIFIED AS A PRIME OR COMPOSITE BY THIS
> TECHNIQUE. In practice we of course do not perform all seven tests
> first, and then decide on the character of the number. In most of the
> cases the test for base 2 already reveals the number as composite,
> and we are finished. If not, we pass on to the next test, and so
I think nobody denied the veracity of these statements here. It's
just that what you're proposing does not make sense for GMP.
> Summing up: Although the Strong Pseudoprime Test should be used
> only to test the compositeness of numbers, within the above limits
> it may serve to assure the PRIMALITY of a lot of numbers <
> 3.41*10^14 very quickly. So, my suggestion is the insertion of these
> criteria in the subroutine mpz_probab_prime_p (perhaps replacing
> the trial division already embedded in it), in order to improve its
> performance.
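For what it's worth, the deterministic test NUNES describes is easy to sketch outside GMP: every number below 341550071728321 fits in 64 bits, so plain C with a 128-bit intermediate (a GCC/Clang extension) suffices. This is an illustrative sketch, not GMP's implementation, and the function names are my own:

```c
#include <stdint.h>

/* Modular multiplication via a 128-bit intermediate (GCC/Clang extension). */
static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) {
    return (uint64_t)((unsigned __int128)a * b % m);
}

/* Left-to-right binary exponentiation: b^e mod m. */
static uint64_t powmod(uint64_t b, uint64_t e, uint64_t m) {
    uint64_t r = 1;
    b %= m;
    while (e) {
        if (e & 1) r = mulmod(r, b, m);
        b = mulmod(b, b, m);
        e >>= 1;
    }
    return r;
}

/* One strong-pseudoprime (Miller-Rabin) round to base a; n odd, n > 2.
   Returns 1 if n passes (is a probable prime to base a), 0 if a witnesses
   compositeness. */
static int spsp_round(uint64_t n, uint64_t a) {
    uint64_t d = n - 1;
    int s = 0;
    while ((d & 1) == 0) { d >>= 1; s++; }
    uint64_t x = powmod(a, d, n);
    if (x == 1 || x == n - 1) return 1;
    for (int i = 1; i < s; i++) {
        x = mulmod(x, x, n);
        if (x == n - 1) return 1;
    }
    return 0;
}

/* Deterministic for n < 341550071728321 (Jaeschke 1993); beyond that bound
   it degrades into an ordinary probable-prime test. */
int is_prime_under_341e12(uint64_t n) {
    static const uint64_t bases[] = {2, 3, 5, 7, 11, 13, 17};
    if (n < 2) return 0;
    for (int i = 0; i < 7; i++) {
        if (n == bases[i]) return 1;
        if (n % bases[i] == 0) return 0;
    }
    for (int i = 0; i < 7; i++)
        if (!spsp_round(n, bases[i])) return 0;
    return 1;
}
```

With the seven bases this correctly classifies the exceptions quoted above: 3215031751 (a strong pseudoprime to bases 2, 3, 5, 7) and 2152302898747 (to bases 2, 3, 5, 7, 11) both come back composite.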
By replacing trial division with the Miller-Rabin test alone, you're
making the assumption that the integers handed to the function are not
random, but rather more likely to be prime than a random integer. For
random integers:
-1/2 of them are divisible by 2 (which costs almost nothing to test,
using a logical AND instruction),
-1/3 of them are divisible by 3 (which costs one 64-bit division to
check, for integers < 2^64 ~ 2*10^19),
-1/5 of them are divisible by 5 (same cost as 3),

and so on. Now, an integer up to 3.41*10^14 has 49 bits. Performing a
modular exponentiation on a random 49-bit integer should cost at
least 70 modular multiplications/squarings on average, I believe.
Now, compared to trial dividing by the first few primes, the amount
of effort required to determine the primality or compositeness of a
given integer is much higher. I mean, suppose you were given 1000
random integers to test for primality or compositeness:
-500 of them would be ruled out immediately with a simple AND
instruction,
-of the remaining, 500/3 ~ 167 would be ruled out with one division,
-of the remaining, 333/5 ~ 67 would be ruled out with another
division (so we're up to 2),
-of the remaining, 266/7 ~ 38 would be ruled out with another
division (so we're up to 3),

but let's stop here just to illustrate. We've ruled out 772 of the
1000 integers as composite, at a cost of at most three divisions each.
Compare that to pure Miller-Rabin, which would have required 70
multiplications and 70 divisions. Not to mention, division by 3, 5,
7, ... is computed much more quickly by any CPU than division by a
random 49-bit integer, perhaps twice as fast or more.
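The back-of-the-envelope count above can be checked exactly if we model the 1000 random integers by 1..1000 (a sketch; the helper name is made up):

```c
/* Count integers in 1..1000 eliminated by trial division by 2, 3, 5, 7.
   By inclusion-exclusion this is exactly 772, matching the estimate in
   the text. */
int eliminated_by_small_primes(void) {
    int count = 0;
    for (int n = 1; n <= 1000; n++)
        if (n % 2 == 0 || n % 3 == 0 || n % 5 == 0 || n % 7 == 0)
            count++;
    return count;
}
```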

So, assuming your integers are random, applying some trial division
before a primality test makes a lot of sense. You should stop
applying trial division only when the cost of an operation, divided
by the probability that it will identify a random integer as
composite, is larger than the cost of Miller-Rabin. For integers of
the size you're considering, I wouldn't be surprised if the cutoff
were in the vicinity of 500. Of course, for larger values, the cutoff
is much higher.
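That stopping rule can be stated as a one-liner: dividing by a prime p eliminates a random integer with probability roughly 1/p, so the expected cost per composite found is about p times the cost of one division; stop once that exceeds the cost of a Miller-Rabin round. A sketch, with purely illustrative cost numbers (nothing here is measured from GMP):

```c
/* Largest prime worth trial-dividing by, given the (assumed, relative)
   cost of one small-modulus division and of one full Miller-Rabin round:
   keep dividing by p while p * cost_div < cost_mr. */
static long trial_division_bound(double cost_div, double cost_mr) {
    return (long)(cost_mr / cost_div);
}
```

With a Miller-Rabin round costing 500 division-equivalents, the bound comes out around 500, consistent with the estimate above.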

Oh, and if you're really so interested in whether an integer is prime
instead of merely probably prime, why don't you get a specialized
package for that, like OpenPFGW or PRIMO? I would agree that
it's not GMP's goal to provide a full primality test, even for fairly
small integers like those. To me this just sounds like you read about
this property of the Miller-Rabin test somewhere and think it's the
greatest thing since sliced bread, but sorry, it isn't. You can write
a patch for your personal consumption if you want, but don't expect
others to follow.
> Apropos, I believe I know enough of Number Theory, in particular
> about the subject in question. By the way, when a manual (about
> hardware or software) does not have sufficiently clear descriptions,
> I usually run several tests to discover the omitted information
> (e.g., my test of the prime 2^32-5 cited). Despite my lifelong
> experience and expertise in computer programming, I thought
> somebody could help me with a few lines of a demo of the use of
> mpz_array_init in GNU PASCAL. On the other hand, I insist that
> DEMO programs for GNU PASCAL USERS showing how to use GMP
> subroutines can be very instructive.
>
> At last, HOW ONE CAN USE the GMP subroutines mpz_array_init ,
> mpz_inp_raw, and mpz_out_raw, also in GNU Pascal?  This is the
> point!!!
I believe the port to Pascal is not done by the same developers as
the mainline C/C++ GMP, so they would not know how to answer your
questions.

(Of course, they can correct me if I'm wrong).

Décio
