# speed of mpq_cmp vs mpq_sub

paul zimmermann Paul.Zimmermann at inria.fr
Mon May 23 21:42:21 UTC 2016

```
Dear Torbjörn,

> From: tg at gmplib.org (Torbjörn Granlund)
> Date: Mon, 23 May 2016 23:09:53 +0200
>
> Some more thoughts on this.  Consider a/b vs c/d.  Assume all are
> non-negative for simplicity.
>
> Once we have determined that a != c or b != d, we know a/b != c/d since
> we require canonicalised input.
>
> That means that we could develop the actual binary fraction a/b and c/d
> bit for bit (or limb for limb, or limb-block for limb-block) until the
> generated bits differ.
>
> Why is this interesting?  Because it suggests an iterative algorithm
> which on average would terminate very quickly.
>
> Instead of actually computing fractional quotient bits, one could com-
> pute the continued fractions in parallel, i.e., essentially two parallel
> gcd ops.  (And one could then choose between computing cfrac(a,b) plus
> cfrac(c,d) or else cfrac(a,c) plus cfrac(b,d).)
>
> Somewhat similarly, one could do the cross-products blockwise top-to-
> bottom, again normally terminating early.  One would never need to
> generate all product bits for both cross-products, since a difference
> will occur before.

if I understand correctly, you propose to compute the partial quotients of
a/b and c/d in parallel, and stop as soon as two partial quotients differ?

sage: continued_fraction(113/354)
[0; 3, 7, 1, 1, 7]
sage: continued_fraction(98/307)
[0; 3, 7, 1, 1, 6]

However, in the worst case this costs O(M(n) log(n)), assuming a fast
(subquadratic) continued-fraction algorithm, whereas the cross product
a*d - b*c costs only O(M(n)). Thus you should stop the partial-quotient
computation after a 1/log(n) fraction of the total size (or earlier if
using a quadratic continued-fraction algorithm).

Paul
```
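The parallel partial-quotient comparison discussed above can be sketched as follows. This is only an illustration, not GMP's implementation: the helper name is hypothetical, and it assumes non-negative, canonicalised fractions with positive denominators, as in the quoted message. Each round performs one Euclid step on each fraction and stops at the first partial quotient where the two expansions differ; since the order reverses at each level of a continued fraction, a sign is flipped per round.

```python
# Hypothetical sketch of comparing a/b with c/d by developing both
# continued fractions in parallel, one Euclid step per round, and
# stopping at the first partial quotient where the expansions differ.
def cmp_fracs(a, b, c, d):
    """Return -1, 0, or 1 as a/b <, ==, > c/d.
    Assumes b, d > 0 and both fractions canonicalised (lowest terms)."""
    flip = 1                      # at odd CF depths the order reverses
    while True:
        if b == 0 or d == 0:      # at least one expansion has ended
            if b == 0 and d == 0:
                return 0          # identical expansions: equal fractions
            # A finished expansion acts like an infinite quotient here,
            # so the finished side is the larger one at this depth.
            return flip if b == 0 else -flip
        q1, r1 = divmod(a, b)     # one Euclid step on a/b
        q2, r2 = divmod(c, d)     # one Euclid step on c/d
        if q1 != q2:              # first differing partial quotient
            return flip if q1 > q2 else -flip
        a, b = b, r1
        c, d = d, r2
        flip = -flip
```

On the message's example, `cmp_fracs(113, 354, 98, 307)` walks the common prefix [0; 3, 7, 1, 1] and decides at the sixth quotient (7 vs 6): at odd depth the larger quotient gives the smaller value, so it returns -1.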
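For reference, the O(M(n)) baseline that the complexity remark measures against is the plain cross-product comparison. A minimal sketch (hypothetical helper name, non-negative operands with positive denominators assumed):

```python
# Baseline: compare a/b with c/d via the cross products a*d and c*b,
# i.e. one full multiplication per side followed by one comparison.
def cmp_cross(a, b, c, d):
    """Return -1, 0, or 1 as a/b <, ==, > c/d (b, d > 0)."""
    lhs, rhs = a * d, c * b
    return (lhs > rhs) - (lhs < rhs)
```

Both products here are computed in full, which is exactly what the early-terminating schemes in the thread try to avoid on average.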