Wraparound multiplication vs mullo

Niels Möller nisse at lysator.liu.se
Wed Oct 21 15:38:35 CEST 2009


David Harvey <dmharvey at cims.nyu.edu> writes:

> "I see no efficient way (below the FFT range) to compute mod
> x^2 + 1". This is the same reason you have given for not doing x^8 -
> 1 by working modulo x^4 - 1 and x^4 + 1. So what's the difference?
> There seems to be some inconsistency here.

Right, I'm confused, and of course it also depends on what "efficient"
means.

When I wrote that, what I meant was that I saw no way to evaluate at
the polynomial's roots without extending to a ring where computation
is a lot more expensive. x^2 + 1 is ok because we can extend to Z[i],
where we can do two points (i and -i) with three multiplications in
the original ring, so it's only 50% more expensive. I can't do
anything similar for x^4 + 1: either I'd have to work over something
like Z[i, sqrt(2)], or extend to Z_{2^n + 1} as for a proper FFT,
which I think is going to be a few times slower than the original
ring for medium-size numbers.
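For concreteness, the three-multiplication count is just the usual
Gauss/Karatsuba trick for a complex product. A minimal sketch using
the public mpz interface (my illustration only, not code from the GMP
sources; the function name gauss_mul is made up):

  #include <gmp.h>

  /* (a + b i)(c + d i) with three integer multiplications:
     t1 = a c, t2 = b d, t3 = (a+b)(c+d);
     real part = t1 - t2, imaginary part = t3 - t1 - t2.  */
  static void
  gauss_mul (mpz_t re, mpz_t im,
             const mpz_t a, const mpz_t b,
             const mpz_t c, const mpz_t d)
  {
    mpz_t t1, t2, t3, s1, s2;
    mpz_inits (t1, t2, t3, s1, s2, NULL);

    mpz_mul (t1, a, c);
    mpz_mul (t2, b, d);
    mpz_add (s1, a, b);
    mpz_add (s2, c, d);
    mpz_mul (t3, s1, s2);

    mpz_sub (re, t1, t2);          /* a c - b d */
    mpz_sub (im, t3, t1);
    mpz_sub (im, im, t2);          /* a d + b c */

    mpz_clears (t1, t2, t3, s1, s2, NULL);
  }

Since the values at i and -i of a polynomial with integer
coefficients are conjugates, one such product covers both points,
which is where the three-multiplication count comes from.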

What I missed was that evaluation at roots is not the only way to do
the computation mod x^4 + 1 reasonably efficiently: one could do an
ordinary multiplication using toom4 and wrap around afterwards (and
since the goal is operations on integers, not polynomials, we can use
any suitable integer multiplication depending on the size, and wrap
afterwards).
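To make the "multiply, then wrap" step concrete on the integer side,
here is a rough sketch of a product mod 2^n - 1 done as a full
multiplication followed by folding the high half back onto the low
half (plain mpz calls, nonnegative inputs assumed; this only
illustrates the wraparound idea, not how it should be done at mpn
level, and the function name is made up):

  #include <gmp.h>

  /* r = u * v mod (2^n - 1), as a full product followed by
     wraparound, using 2^n == 1 (mod 2^n - 1).  */
  static void
  mulmod_2exp_m1 (mpz_t r, const mpz_t u, const mpz_t v, mp_bitcnt_t n)
  {
    mpz_t m, hi;
    mpz_inits (m, hi, NULL);

    mpz_setbit (m, n);
    mpz_sub_ui (m, m, 1);             /* m = 2^n - 1 */

    mpz_mul (r, u, v);                /* full, up to 2n-bit, product */
    while (mpz_sizeinbase (r, 2) > n)
      {
        mpz_tdiv_q_2exp (hi, r, n);   /* high part */
        mpz_tdiv_r_2exp (r, r, n);    /* low part  */
        mpz_add (r, r, hi);           /* hi * 2^n + lo == hi + lo */
      }
    if (mpz_cmp (r, m) >= 0)          /* final conditional subtraction */
      mpz_sub (r, r, m);

    mpz_clears (m, hi, NULL);
  }

For the x^4 + 1 / 2^n + 1 case the fold is a subtraction instead of
an addition (2^n == -1 there, so negative intermediate values need
handling), but the shape is the same.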

Let M'(n) be the cost of multiplication mod 2^n - 1 using the
recursive method, and let M(n) be the cost of an ordinary product of
n-bit numbers. To see what gain we can get compared to doing a plain
multiplication at top level followed by wraparound, we need to compare
M(n) to

  M'(n) = M'(n/2) + M(n/2) + O(n)
        = ... = M(n/2) + M(n/4) + ... + M(n/2^k) + M'(n/2^k) + O(n)

Looks fairly good to me, but it's unclear to me if and when it's going
to be better than the non-recursive x^4 - 1 algorithm (5 M(n/4) + O(n)).
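To get a feel for the numbers, ignore the O(n) terms and the
M'(n/2^k) base case and assume a pure Karatsuba model with
M(2n) ~ 3 M(n) (both assumptions are mine). Then the recursion gives
roughly

  M'(n) ~ M(n) * (1/3 + 1/9 + 1/27 + ...) ~ 0.5 M(n)

while the non-recursive variant costs 5 M(n/4) ~ 5/9 M(n) ~ 0.56 M(n).
So under that crude model the recursion wins by a small margin, but
the linear-time terms and the actual shape of M(n) in the Toom range
could easily tip it either way.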

/Niels


