multi-threaded multiplication speed-up

Décio Luiz Gazzoni Filho decio at
Thu Jan 18 02:12:22 CET 2007


On Jan 17, 2007, at 6:43 PM, Richard B. Kreckel wrote:

> Pierrick Gaudry wrote:
>> This kind of parallelization at the application level looks indeed
>> vastly superior to having the parallelization at the level of the
>> library. If an algorithm allows it (and this is often the case), this
>> is the right thing to do. On the other hand, there are also algorithms
>> for which the multiprecision operations are inherently sequential.
>> Computing digits of Pi is one example.
> I'm not disagreeing with your point, but let me just point out that
> computing digits of pi (or most other interesting constants) is quite
> amenable to parallelization.

More precisely, it should be stated that _some_ Pi computation
algorithms are amenable to parallelization, say those based on series
expansions, since summing a series is trivially parallelizable whether
you're computing Pi or anything else. Binary splitting is also
trivially parallelizable. On the other hand, an algorithm like the
Arithmetic-Geometric Mean (AGM) iteration is indeed inherently
sequential, and you'll gain little if anything from high-level
parallelization.
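To illustrate why binary splitting parallelizes so naturally, here is a
minimal Python sketch (the names bs and e_digits are mine, not GMP code),
computing digits of e from the series sum 1/k!; the point is that the two
recursive calls in bs are completely independent of each other, so they
could be handed to separate cores:

```python
def bs(a, b):
    """Return (P, Q) with P/Q = sum_{k=a+1}^{b} 1/((a+1)*(a+2)*...*k)."""
    if b - a == 1:
        return 1, b
    m = (a + b) // 2
    p1, q1 = bs(a, m)  # left half: independent of the right half,
    p2, q2 = bs(m, b)  # so the two calls can run in parallel
    # Combine: P(a,b)/Q(a,b) = P(a,m)/Q(a,m) + P(m,b)/(Q(a,m)*Q(m,b))
    return p1 * q2 + p2, q1 * q2

def e_digits(d, terms=None):
    """First d decimal digits of e, returned as an integer 2718..."""
    if terms is None:
        terms = max(2, 4 * d)  # crude overestimate; k! grows far faster than 10^d
    p, q = bs(0, terms)        # p/q = sum_{k=1}^{terms} 1/k!
    return ((q + p) * 10 ** (d - 1)) // q  # e = 1 + p/q, truncated to d digits

print(e_digits(10))  # -> 2718281828
```

The same recursion shape carries over to the hypergeometric series used
for Pi (e.g. Chudnovsky); only the base case changes.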

Then again, computing N terms of a series in parallel requires N times
as much memory as computing a single term at a time, and memory is often
at a premium. On the other hand, a parallel FFT shouldn't require any
extra memory. (Though in the end, for non-trivial computations, you'll
need to use disk storage anyway, and your bottleneck should be disk
read/write speed, regardless of which parallelization technique you may
be using.)
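The inherently sequential nature of the AGM mentioned above is easy to
see in code. Here is the Gauss-Legendre (AGM-based) iteration for Pi in
plain floats, purely as a sketch: every line of the loop body reads
values produced by the previous iteration, so there is nothing to farm
out to other cores at this level.

```python
import math

def pi_agm(iterations=3):
    """Gauss-Legendre iteration; converges quadratically to pi."""
    a, b = 1.0, 1.0 / math.sqrt(2.0)
    t, p = 0.25, 1.0
    for _ in range(iterations):
        a_next = (a + b) / 2        # needs previous a, b
        b = math.sqrt(a * b)        # needs previous a, b
        t -= p * (a - a_next) ** 2  # needs previous t, p, a
        p *= 2
        a = a_next
    return (a + b) ** 2 / (4 * t)
```

Any parallelism here has to come from inside the multiprecision
operations themselves (the big multiplications and square roots), which
is exactly the library-level parallelization being discussed.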


More information about the gmp-discuss mailing list