Torbjorn Granlund tg at
Thu Apr 25 15:50:24 CEST 2013

Todd Zimnoch <tzimnoch at> writes:

  I've recently started using GMP and was impressed by the number of CPU
  architectures supported with optimized assembly routines. I am curious if
  there's a plan or interest in supporting GPU processing in the future.
The reason this has not been done is that it is hard to make it
perform well, except perhaps when computing with truly huge numbers.

I made a sort-of feasibility study of extending GMP with vector bignum
types, and then applying GPU SIMD operations over these vectors.  Even
that turned out to be tricky for a general-purpose library.  The
problem is that one really wants operands to remain distributed in the
local memories of the GPU's processing units, which is not natural
in a library like GMP.

People also tend to overestimate the power of GPUs, in terms of both
practical and theoretical performance.  Even with perfect memory
handling, a GPU costing in the same ballpark as a CPU will only reach
around twice the CPU's speed on a bignum workload.  And that
performance is only achievable (theoretically) at the cost of irksome
application programming.


More information about the gmp-discuss mailing list