Quick question about PP and PP_INVERTED defines
Torbjörn Granlund
tg at gmplib.org
Sat Jun 18 16:09:23 CEST 2022
Brett Kuntz <kuntz at shaw.ca> writes:
> Hello, for a quick background on my question: I have been working on
> large (100k++) prime numbers recently and discovered that the
> primality testing algorithms could use some improvements, both to

We always look for improvements!  Which ones do you have in mind?

> reduce cpu operations and also to reduce insane page faults (50k
> faults per second in some cases). I have been analyzing

Ouch, insane page faults are no good.
If you cannot fit "large 100k++ prime numbers" (not sure exactly what
that means) into the RAM of your computer, then I suggest that you get
yourself some more RAM.  My guess is that you are looking for primes
with around 100,000 decimal digits, and those really should never need
more than a few hundred KiB of RAM.

Unless you've written a buggy GMP application which leaks a lot...
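For scale, a quick back-of-the-envelope check (my own arithmetic, not from the thread) of what a 100,000-digit number actually occupies in 64-bit limbs:

```python
import math

# Back-of-envelope: how much RAM does a 100,000-decimal-digit number need?
# Each decimal digit carries log2(10) ~ 3.32 bits.
digits = 100_000
bits = math.ceil(digits * math.log2(10))   # ~ 332,193 bits
limbs = (bits + 63) // 64                  # 64-bit limbs to store it
kib = limbs * 8 / 1024                     # ~ 40.6 KiB per number

print(bits, limbs, round(kib, 1))
```

So even a handful of working copies stays well under a few hundred KiB.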
> buffers. It is pretty ugly now.". Since the first miller test is to
> base 2 and that the mod will always be an odd number I think I can
> make a custom function specifically for the miller base 2 test...
I think a special modexp function for small bases could indeed be useful.
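To illustrate the idea (a hypothetical sketch, not GMP code): in a base-2 modexp, every "multiply by the base" step degenerates into a left shift plus one conditional subtraction, so the only full-width multiplications left are the squarings.

```python
def modexp_base2(e, m):
    """Compute pow(2, e, m) for odd m > 1, left-to-right square-and-multiply.

    Base-2 special case: the 'multiply' step is just a shift, so the
    squarings are the only expensive operations.
    """
    result = 1
    for bit in bin(e)[2:]:            # exponent bits, most significant first
        result = result * result % m  # square (the costly step)
        if bit == '1':
            result <<= 1              # multiply by 2 is a shift
            if result >= m:           # result < 2*m here, so one
                result -= m           # conditional subtraction suffices
    return result
```

A real limb-level version would of course work on mpn arrays, but the shape of the saving is the same.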
> I get what PP is, and I get why it's needed. But I do not understand
> what the PP_INVERTED define is, how it's calculated, or why?
It is floor(2^128/PP) with the most significant bit suppressed.
See https://gmplib.org/~tege/divcnst-pldi94.pdf
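Spelled out in code (my reading of that definition, assuming the 64-bit PP is the product of the odd primes up to 53, as in gmp-impl.h):

```python
# Sketch: reconstruct PP and PP_INVERTED for 64-bit limbs.
# Assumption: PP is the product of the odd primes up to 53, which is
# the largest such product that still fits in one 64-bit limb.
PP = 1
for p in [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53]:
    PP *= p
assert 2**63 <= PP < 2**64            # PP is "normalized": top bit set

# PP_INVERTED: floor(2^128 / PP) with the most significant bit
# suppressed, i.e. the low 64 bits of the two-limb quotient.
PP_INVERTED = (2**128 // PP) - 2**64
assert 0 <= PP_INVERTED < 2**64

# Why precompute it: a quotient by the invariant PP can then be
# estimated with one high multiplication instead of a hardware divide.
def quotient_estimate(u):
    """Estimate u // PP (for u < 2^128) via the precomputed inverse."""
    return (u * (2**64 + PP_INVERTED)) >> 128

u = 12345678901234567890123456789012345678   # any u < 2^128
q = u // PP
assert quotient_estimate(u) in (q - 1, q)    # off by at most one
```

The estimate is at most one below the true quotient, so a single compare-and-correct step finishes the division; that is the mechanism the divcnst paper analyzes.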
> Why is there a PP_INVERTED for 32/64 bits but not the others?
Which "others" do you have in mind?
> I am trying to over-haul the primality testing specifically for 64-bit
> limbs and VERY large numbers, so any help would be greatly
> appreciated.
You claim GMP is slow, not us. We actually disagree. But we'd love to
hear about the results of your efforts at making GMP fast.
--
Torbjörn
Please encrypt, key id 0xC8601622