# Complexity of GNU MP operations

Miodrag Petkovic msp at eunet.rs
Sun Apr 11 20:46:52 CEST 2010

Dear GNU MP experts,

Subject: SOME QUESTIONS ON THE COMPLEXITY OF BASIC OPERATIONS IN MULTIPRECISION ARITHMETIC

The efficiency of an iterative root-finding method (IM) can be successfully estimated using the coefficient of efficiency given by

E(IM) = r^{1/C}

or by the alternative formula

E(IM) = log r / C.

Here r is the R-order of convergence of the iterative method (IM) and C is the computational cost.

Considering iterative methods for finding a simple real zero of a given real function, the computational cost C is usually expressed by the number of evaluations of this function and its derivatives (at the same point, or at different points if multi-point methods are considered). However, the situation is quite different when we apply iterative methods for the simultaneous determination of all zeros of an algebraic polynomial of degree n. In this case it is preferable to calculate the computational cost according to the number of arithmetic operations per iteration, taken with certain weights depending on processor time. If these weights are denoted by w_A, w_S, w_M and w_D for addition, subtraction, multiplication and division, respectively, and A(n), S(n), M(n) and D(n) are the numbers of additions, subtractions, multiplications and divisions in the realization of one iteration for all n zeros, then the computational cost C can be (approximately) expressed as

C = C(n) = w_A*A(n) + w_S*S(n) + w_M*M(n) + w_D*D(n).

Then the computational efficiency is given by

E(IM,n)=log r/[w_A*A(n)+w_S*S(n)+w_M*M(n)+w_D*D(n)].
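Purely as an illustration of this formula, the following sketch evaluates E(IM, n) for two fictitious methods; every weight and operation count below is an invented example value, not data from any real method or processor.

```python
from math import log

def efficiency(r, counts, weights):
    """E(IM, n) = log(r) / C(n), where C(n) is the weighted operation count.

    counts and weights are dicts keyed by operation: "A", "S", "M", "D".
    """
    cost = sum(weights[op] * counts[op] for op in weights)
    return log(r) / cost

# Hypothetical weights: addition/subtraction normalized to 1;
# multiplication and division assumed costlier (invented values).
weights = {"A": 1.0, "S": 1.0, "M": 3.0, "D": 5.0}

n = 10  # polynomial degree (example value)
# Invented per-iteration operation counts for two fictitious methods:
method_1 = (2, {"A": 4 * n, "S": 2 * n, "M": 3 * n, "D": n})      # r = 2
method_2 = (3, {"A": 6 * n, "S": 3 * n, "M": 5 * n, "D": 2 * n})  # r = 3

for name, (r, counts) in [("method 1", method_1), ("method 2", method_2)]:
    print(f"{name}: E = {efficiency(r, counts, weights):.5f}")
```

With such a helper, ranking two methods reduces to comparing their E values under the same weight vector, which is exactly where realistic estimates of w_M and w_D matter.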

It is reasonable to take the weights appearing in the above formula proportional to the number of cycles of the four basic operations, or to the number of flops/sec. These performances depend mainly on the precision of the employed computer arithmetic and on the hardware. This leads to the following problems, of particular interest in multiprecision arithmetic:

(i) It is very difficult to find the requested characteristics (the execution speeds of the basic operations) for a specific processor.

(ii) Even if it were possible, the continual development of technology launches faster and faster processors (including various hardware improvements in the execution of operations), so that any specific choice of processor is of little worth.

(iii) Since iterative methods of high convergence order use multiprecision arithmetic, it is particularly difficult to provide information about execution speeds in the case of arbitrary-precision arithmetic.

However, I assume that some relations among the execution speeds of the basic operations exist, at least approximately. For example, if the weights for addition and subtraction are normalized as w_A = w_S = 1, can one estimate the values (or a range of values) of the weights w_M and w_D for multiprecision arithmetic, desirably as a function of the precision p used? For example, is there an estimate of the intervals in which w_M and w_D lie?

Of course, instead of these times, figures in flops/sec would be equally welcome.
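As a rough sketch of how such weights might be measured empirically, one can time the basic operations on large random operands and normalize by the cost of addition. The sketch below uses Python's built-in arbitrary-precision integers rather than GNU MP itself, so the measured ratios will differ from GMP's; the operand size p is an arbitrary example value.

```python
import random
import time

def avg_time(op, bits_x, bits_y, reps=50):
    """Average wall-clock time of a binary operation on random operands
    of bits_x and bits_y bits (top bit forced to 1 so sizes are exact)."""
    xs = [random.getrandbits(bits_x) | (1 << (bits_x - 1)) for _ in range(reps)]
    ys = [random.getrandbits(bits_y) | (1 << (bits_y - 1)) for _ in range(reps)]
    start = time.perf_counter()
    for x, y in zip(xs, ys):
        op(x, y)
    return (time.perf_counter() - start) / reps

p = 50_000  # operand size in bits (an arbitrary example precision)
t_add = avg_time(lambda x, y: x + y, p, p)
t_mul = avg_time(lambda x, y: x * y, p, p)
# For division, the representative case is a 2p-bit dividend and p-bit divisor.
t_div = avg_time(lambda x, y: x // y, 2 * p, p)

# Normalizing w_A = w_S = 1 gives rough empirical weights at this precision:
print(f"w_M ~ {t_mul / t_add:.1f}, w_D ~ {t_div / t_add:.1f}")
```

Repeating the measurement over a range of p would show how the ratios w_M and w_D grow with precision on a given machine, which is one practical substitute for per-processor cycle counts.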

I know that such an estimate is difficult, since it depends on many factors (type of processor, programming language, etc.), but I hope that you could help me with some instructions and suggestions. That would be of great benefit in researching the computational efficiency of the mentioned type of iterative methods, and in ranking them properly when computer algebra systems such as Mathematica, Maple, etc., are used.

Recently I asked the staff of Wolfram Research, whose package Mathematica is often used at universities in my country, but they had no answer except that they employ GNU MP multiprecision arithmetic. For this reason I would like to hear something about the mentioned topic directly from experts familiar with GNU MP.

Miodrag Petkovic

Professor of Mathematics

University of Nis, Serbia