New thresholds in table

Torbjorn Granlund tg at
Thu Nov 17 17:13:19 CET 2011

nisse at (Niels Möller) writes:

  I've checked a few machines, and there the very small thresholds seem
  bogus. Some days ago I changed min_size up again, from 10 to 50. But I
  still got some suspiciously small thresholds on several machines,
  including on shell. Now I've checked in another tweak, increasing
  stop_since_change from the default 80 to 150. That seems to solve the
  problem on shell; we'll see what happens on the other machines.
When you write that the threshold is "bogus", what do you mean?

Is there a problem in tuneup's measuring algorithm?  If tuneup claims
alg A is faster than alg B for an operand range, then, assuming tuneup
works correctly, algorithm A really is faster there, and should
therefore probably be used there.

I am not familiar with the HGCD algorithms and code, but a scenario
where an algorithm might be faster in disjoint ranges makes perfect
sense.

I suspect Barrett's algorithm could supplant plain schoolbook division
for tiny operands, by using well-written mulhi and mullo.  The number
of word multiply operations would be about the same (at least the n^2
term), but the inherent serial nature of the quotient limb approximation
would not exist for Barrett.  Then for larger operands,
divide-and-conquer division would probably win, before Barrett comes
back for the largest operands (now using FFT based multiplication).
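The dispatch implied by that scenario can be sketched as follows. The threshold values and names here are invented for illustration; they are not GMP's tuned constants. The point is simply that the same algorithm (Barrett) can occupy two disjoint size ranges, with a different method winning in between.

```c
/* Hypothetical dispatch sketch: one algorithm winning in two disjoint
   operand ranges.  Thresholds and names are invented, not GMP's. */
#include <string.h>

#define DC_THRESHOLD   60    /* divide-and-conquer takes over here  */
#define FFT_THRESHOLD  3000  /* FFT multiplication kicks in here    */

static const char *choose_division(int n)
{
    if (n < DC_THRESHOLD)
        return "barrett";       /* tiny: mulhi/mullo based, no serial
                                   quotient-limb approximation        */
    if (n < FFT_THRESHOLD)
        return "dc-division";   /* mid range: divide and conquer      */
    return "barrett";           /* largest: Barrett again, now on top
                                   of FFT based multiplication        */
}
```

A tuner that assumes a single crossover point cannot represent this shape; it needs either per-range thresholds or a search that tolerates the decision flipping back.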


More information about the gmp-devel mailing list