multi-threaded multiplication speed-up
Décio Luiz Gazzoni Filho
decio at decpp.net
Thu Jan 18 02:12:22 CET 2007
On Jan 17, 2007, at 6:43 PM, Richard B. Kreckel wrote:
> Pierrick Gaudry wrote:
>
>> This kind of parallelization at the application level looks indeed
>> vastly superior to having the parallelization at the level of the
>> library. If an algorithm allows it (and this is often the case),
>> this is the right thing to do. On the other hand, there are also
>> algorithms for which the multiprecision operations are inherently
>> sequential. Computing digits of Pi is one example.
>>
>
> I'm not disagreeing with your point, but let me just point out that
> computing digits of pi (or most other interesting constants) is quite
> amenable to parallelization.
More precisely, it should be stated that only _some_ Pi computation
algorithms are amenable to parallelization, say computing it via a
series expansion (summing a series is trivially parallelizable,
whether you're computing Pi or anything else). Binary splitting is
also trivially parallelizable. On the other hand, an algorithm like
the Arithmetic-Geometric Mean (AGM) iteration is indeed inherently
sequential, and you'll gain little, if anything, from high-level
parallelization.
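To illustrate the binary splitting point, here's a minimal, untested
sketch, deliberately computing e = sum 1/k! rather than Pi just to
keep it short (term count and precision below are arbitrary). At
every split the two halves are independent subproblems, so at least
the top few levels of the recursion could each be handed to their
own thread.

#include <gmp.h>
#include <stdio.h>

/* P(a,b)/Q(a,b) = sum_{k=a+1}^{b} 1/((a+1)(a+2)...k), so that
   e ~= 1 + P(0,n)/Q(0,n).  Base case (b == a+1): P = 1, Q = b. */
static void bsplit(mpz_t P, mpz_t Q, unsigned long a, unsigned long b)
{
    if (b - a == 1) {
        mpz_set_ui(P, 1);
        mpz_set_ui(Q, b);
        return;
    }
    unsigned long m = a + (b - a) / 2;
    mpz_t P1, Q1, P2, Q2;
    mpz_init(P1); mpz_init(Q1); mpz_init(P2); mpz_init(Q2);

    /* These two calls are completely independent -- this is where a
       thread could be spawned for one of the halves. */
    bsplit(P1, Q1, a, m);
    bsplit(P2, Q2, m, b);

    /* Combine: P/Q = P1/Q1 + P2/(Q1*Q2). */
    mpz_mul(P, P1, Q2);
    mpz_add(P, P, P2);
    mpz_mul(Q, Q1, Q2);

    mpz_clear(P1); mpz_clear(Q1); mpz_clear(P2); mpz_clear(Q2);
}

int main(void)
{
    const unsigned long N = 100;   /* series terms, arbitrary */
    mpz_t P, Q;
    mpz_init(P); mpz_init(Q);
    bsplit(P, Q, 0, N);

    /* Evaluate 1 + P/Q to ~150 decimal digits. */
    mpf_set_default_prec(512);
    mpf_t e, num, den;
    mpf_init(e); mpf_init(num); mpf_init(den);
    mpf_set_z(num, P);
    mpf_set_z(den, Q);
    mpf_div(e, num, den);
    mpf_add_ui(e, e, 1);
    gmp_printf("e ~= %.50Ff\n", e);

    mpz_clear(P); mpz_clear(Q);
    mpf_clear(e); mpf_clear(num); mpf_clear(den);
    return 0;
}

(Builds with something like gcc -O2 bsplit.c -lgmp. The same
recursive structure, with an extra accumulator or two, is what you'd
use for the arctan or Chudnovsky series for Pi.)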
Then again, computing N terms of a series in parallel requires roughly
N times as much memory as computing a single term at a time, and
memory is often at a premium. A parallel FFT, on the other hand,
shouldn't require any extra memory. (Though in the end, for
non-trivial computations you'll need disk storage anyway, and your
bottleneck should be disk read/write speed, regardless of which
parallelization technique you use.)
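To make the memory point a bit more concrete, here's a rough,
untested sketch of chunked summation of the arctan(1/5) series from
Machin's formula pi = 16*atan(1/5) - 4*atan(1/239), using GMP's mpf
layer and OpenMP merely as one way to spread the chunks over threads;
precision, term counts and chunk count below are arbitrary. Each
chunk carries its own full-precision temporaries, which is exactly
where the extra memory of this kind of high-level parallelization
comes from.

#include <gmp.h>
#include <omp.h>
#include <stdio.h>

#define NCHUNKS 4   /* e.g. one chunk per core */

/* Sum terms k = lo .. hi-1 of atan(1/q) = sum (-1)^k / ((2k+1) q^(2k+1)). */
static void atan_chunk(mpf_t rop, unsigned long q,
                       unsigned long lo, unsigned long hi)
{
    mpf_t term, qpow;   /* per-chunk scratch, at full precision */
    mpf_init2(term, mpf_get_prec(rop));
    mpf_init2(qpow, mpf_get_prec(rop));

    mpf_set_ui(qpow, q);
    mpf_pow_ui(qpow, qpow, 2 * lo + 1);   /* qpow = q^(2*lo+1) */
    mpf_set_ui(rop, 0);

    for (unsigned long k = lo; k < hi; k++) {
        mpf_ui_div(term, 1, qpow);
        mpf_div_ui(term, term, 2 * k + 1);
        if (k & 1)
            mpf_sub(rop, rop, term);
        else
            mpf_add(rop, rop, term);
        mpf_mul_ui(qpow, qpow, q * q);    /* q^(2k+1) -> q^(2k+3) */
    }
    mpf_clear(term);
    mpf_clear(qpow);
}

int main(void)
{
    mpf_set_default_prec(10000);          /* roughly 3000 decimal digits */
    const unsigned long TERMS5 = 2200;    /* terms of atan(1/5) */
    const unsigned long TERMS239 = 650;   /* terms of atan(1/239) */

    mpf_t part[NCHUNKS], atan5, atan239, pi;
    for (int i = 0; i < NCHUNKS; i++)
        mpf_init(part[i]);
    mpf_init(atan5); mpf_init(atan239); mpf_init(pi);

    /* The chunks are independent; GMP itself is fine with this as long
       as each thread only touches its own variables. */
    #pragma omp parallel for
    for (int i = 0; i < NCHUNKS; i++)
        atan_chunk(part[i], 5,
                   TERMS5 * i / NCHUNKS, TERMS5 * (i + 1) / NCHUNKS);

    for (int i = 0; i < NCHUNKS; i++)
        mpf_add(atan5, atan5, part[i]);
    atan_chunk(atan239, 239, 0, TERMS239);   /* short series, done serially */

    mpf_mul_ui(pi, atan5, 16);
    mpf_mul_ui(atan239, atan239, 4);
    mpf_sub(pi, pi, atan239);
    gmp_printf("pi ~= %.50Ff\n", pi);

    for (int i = 0; i < NCHUNKS; i++)
        mpf_clear(part[i]);
    mpf_clear(atan5); mpf_clear(atan239); mpf_clear(pi);
    return 0;
}

(Builds with something like gcc -O2 -fopenmp machin.c -lgmp. A real
implementation would of course derive the term counts from the target
precision instead of hardcoding them.)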
Décio