GMP Floating point design vs Fractal applications
White, Jim (ITD)
Jim.White@DEFRA.GSI.GOV.UK
Mon, 30 Jun 2003 13:00:46 +0100
> From: "delta trinity" <deltatrinity@hotmail.com>
> Well, I guess it would be possible to represent an exponent as
> multi-precision. But is it really practical? One of the goals is to make
> GMP as fast as possible. Having to account for an mp exponent
> would probably make GMP slower. Plus, when will you really have to
> represent such a number?
> I could see fractal applications, where you could 'zoom-in' indefinitely.
> But there, for the fractal algorithm to be able to show valid results
> under a very very small exponent, you would somewhere need to keep track
> of a very large precision mantissa (much larger than practical memory
> and computation speed limits).
It was through my work on a Windows-based fractal application that I became
aware of GMP, so I can offer some comments in this area.
I agree with Deltatrinity that multiple-precision exponents are not
generally required for fractal processing; it's the extended mantissa
precision that matters most.
Other critical issues for fractal processing include speed (the
calculation volumes can be rather extreme) and the availability of
mathematical functions.
I expand on these below ....
(PS - apologies, deltatrinity, I was unable to get messenger working
due to some firewall/proxy problems at work, probably designed to
prevent me doing just that!)
1. FP accuracy for fractals
Typical fractal processing involves the study of dynamic systems using an
iterative evaluation model with feedback (which is why "fractals" and
"chaos" are commonly associated). A rectangular grid of 2d co-ordinates
is chosen, and for each grid point, the system is evaluated using that
point as the initial data for the system. Iteration proceeds for each
point until either a preset iteration limit has been reached, or some
other "escape" condition is met (e.g. exceeding a preset magnitude for
the iteration result).
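The escape-time scheme described above can be sketched in a few lines. This is a hypothetical minimal example in ordinary double precision, using the classic z -> z^2 + c rule as one common instance of such a system; a real deep-zoom renderer would substitute extended-precision arithmetic (e.g. GMP's mpf type) for the Python floats:

```python
def escape_iterations(x, y, max_iter=256, bailout=2.0):
    """Iterate z -> z*z + c with c = x + iy; return the iteration count
    at escape, or max_iter if the preset iteration limit is reached."""
    c = complex(x, y)
    z = 0j
    for n in range(max_iter):
        if abs(z) > bailout:      # "escape" condition: magnitude exceeds preset
            return n
        z = z * z + c             # one iteration of the dynamic system
    return max_iter               # iteration limit reached without escape

def render(x0, y0, x1, y1, width, height, max_iter=256):
    """Evaluate the system at every point of a rectangular grid."""
    dx = (x1 - x0) / width
    dy = (y1 - y0) / height
    return [[escape_iterations(x0 + i * dx, y0 + j * dy, max_iter)
             for i in range(width)] for j in range(height)]
```

The per-point loop is where all the FP work happens, which is why the precision and speed of the underlying arithmetic dominate everything else.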
Zoom-in involves reducing the real difference between adjacent grid
points - as these increments become smaller and smaller, the number
of mantissa digits required increases.
The size of the exponent is less problematic - a 32-bit exponent will
be sufficient in all but the most extreme cases. It's the accuracy
of the mantissa that is of primary significance (pun intended)! The
ability to choose the mantissa size is also a clear advantage.
So the system of representation used by GMP is, IMHO, ideally suited
to fractal processing.
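To make the zoom-in point concrete, here is my own back-of-envelope arithmetic (an illustrative heuristic, not anything taken from GMP): if the view spans `view_width` in real units across `pixels` grid points, adjacent points differ by view_width/pixels, and distinguishing them at coordinates of order 1 needs roughly log2(1/pitch) mantissa bits, plus some guard bits for the iteration itself:

```python
import math

def mantissa_bits_needed(view_width, pixels, guard_bits=32):
    """Rough estimate of mantissa bits needed so adjacent grid points
    remain distinguishable at coordinates of order 1."""
    pitch = view_width / pixels          # real distance between adjacent points
    return math.ceil(math.log2(1.0 / pitch)) + guard_bits

# A zoom to a view 1e-30 wide on a 1024-pixel grid already needs far
# more than the 53 mantissa bits of an IEEE double:
bits = mantissa_bits_needed(1e-30, 1024)
```

So the mantissa requirement grows steadily with zoom depth, while the exponent stays tiny - exactly the trade-off GMP's representation makes.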
2. Estimating FP Calculation Volumes for fractals
The other characteristic of fractal processing is the high volume of
calculations required. A grid 1000 units (pixels) on a side contains
a million points, each needing a full system evaluation. The average
number of iterations required per point can vary from hundreds to
thousands.
So, processing a 1024x1024 grid (a relatively modest resolution),
with an average of 1024 iterations per point, requires 2^30
iterations (a "gigacycle").
Each individual iteration can itself require several calculations.
Typically the calculation will be a non-linear function in a complex
variable - a common formula is Z(n+1) = Z(n) ^ A, where Z and A are
complex. The initial value Z(0) is typically set to (x + iy), where
(x,y) is the corresponding grid point.
Taking the simpler case, where "A" is real, the calculation Z ^ A
requires the following FP operations:
- multiplications (5)
- exponentiation
- sqrt()
- sin()
- cos()
- atn()
- various additions/subtractions
If "A" is complex, then we have 3 additional multiplications, and
additional function evaluations log() and exp().
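The operation list above corresponds to the usual polar-form evaluation of a complex power. A sketch, with ordinary floats standing in for extended-precision values:

```python
import math

def complex_pow_real(x, y, a):
    """Compute (x + iy) ** a for real a via polar form:
    r = sqrt(x^2 + y^2), theta = atan2(y, x),
    result = r^a * (cos(a*theta) + i*sin(a*theta)).
    Uses exactly the operations listed above."""
    r = math.sqrt(x * x + y * y)     # 2 multiplications + sqrt()
    theta = math.atan2(y, x)         # atn()
    ra = r ** a                      # exponentiation
    at = a * theta                   # 1 multiplication
    return ra * math.cos(at), ra * math.sin(at)  # 2 mults + cos()/sin()
```

Counting them up gives the 5 multiplications plus sqrt, exponentiation, sin, cos and atn mentioned above; the complex-A case adds log() and exp() for the r^A term.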
It should be possible to come up with a number, Q, that
estimates the average "cost" of one such iteration in terms of
the number of FP multiplications.
The GMP cost of our fractal processing can then be estimated
as Q x 2^30.
Note:
I'd be grateful for input here, from those with knowledge of
the MPFR functions, regarding a reasonable guess for Q.
You can see that fractal processing means a lot of work!
If Q, in this example, turned out to be 50, then we have the
equivalent of 50 billion FP multiplications to perform, in order
to generate an image of relatively modest resolution (1024 x 1024).
3. GMP's speed
It is worth noting that GMP's performance combined with the increased
clock-rates of modern PC's means that the computing of 50 billion
extended-precision FP mult's on a desktop PC is now quite a realistic
task.
On a Pentium 4 (2 GHz) using deltatrinity's GMP dll, and with
precision set to 256 bits, I can do 2 million mpf_mul's in one second!
So, 50 billion operations is achievable in 6-8 hours (an
impossible result just a few years ago).
By comparison, using a generic (i.e. not CPU-optimised) package such as
MAPM increases this time by a factor of 30, which renders the task
impractical.
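Working through the timing claim with the rates quoted above (my arithmetic, taking the Q = 50 example at face value):

```python
total_mults = 50e9        # Q = 50 times 2^30 iterations, roughly 50 billion
gmp_rate = 2e6            # observed mpf_mul's per second at 256-bit precision

gmp_hours = total_mults / gmp_rate / 3600   # roughly 7 hours
generic_hours = gmp_hours * 30              # generic package, ~30x slower
```

That puts the CPU-optimised run inside the quoted 6-8 hour window, while the generic package needs over 200 hours - more than a week of continuous computation, which is indeed impractical.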
4. FP functions library
GMP's representation and excellent performance make it
a very serious candidate for extended-precision fractal processing.
BUT ...... I can't actually use it for this purpose just yet!
The library of standard mathematical functions is critical
for fractal processing. MPFR will apparently offer this, but
at this stage it can't be used in my situation (a Win32,
dll-reliant environment).
So I have to wait either for the MPFR module to be made available
via the GMP DLL itself (which I believe is not possible
at the moment), or as a separate, easily-built library that
calls the GMP dll (i.e. so I could build an MPFR dll, which
would then call the GMP dll).
So, I'll just have to wait .....