[RFC][PATCH] mini-gmp: move memory allocation out of loops

Niels Möller nisse at lysator.liu.se
Sat Feb 10 17:11:05 UTC 2018


"Marco Bodrato" <bodrato at mail.dm.unipi.it> writes:

>> Hmm, but unless qp == NULL, can't we *always* reuse the qp area for the
>> shifted input? Something like (untested):
>
> Your proposal is better than mine, agreed.

I've run some tests and checked in.
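
For the single-limb case, the idea is roughly the following (a sketch
for illustration only; see the repository for the exact checked-in
code). The shifted numerator goes into the qp area whenever the caller
wants a quotient at all, so the temporary allocation is needed only
when qp == NULL:

/* Sketch of mpn_div_qr_1_preinv reusing the quotient area for the
   shifted numerator.  Illustration only; the committed version may
   differ in details. */
static mp_limb_t
mpn_div_qr_1_preinv (mp_ptr qp, mp_srcptr np, mp_size_t nn,
                     const struct gmp_div_inverse *inv)
{
  mp_limb_t d, di;
  mp_limb_t r;
  mp_ptr tp = NULL;

  if (inv->shift > 0)
    {
      /* Shift into the quotient area when we have one, otherwise
         fall back to a temporary allocation. */
      tp = qp ? qp : gmp_xalloc_limbs (nn);
      r = mpn_lshift (tp, np, nn, inv->shift);
      np = tp;
    }
  else
    r = 0;

  d = inv->d1;
  di = inv->di;
  while (--nn >= 0)
    {
      mp_limb_t q;

      /* np[nn] is read before qp[nn] is written, so aliasing np with
         qp is safe. */
      gmp_udiv_qrnnd_preinv (q, r, r, np[nn], d, di);
      if (qp)
        qp[nn] = q;
    }
  if (tp && tp != qp)
    gmp_free (tp);

  return r >> inv->shift;
}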

>> I also note that we have a very similar allocation in
>> mpn_div_qr_2_preinv.
>
> Currently it is used just once in the whole mini-library, by the call:
> mpn_div_qr_2_preinv (qp, np, np, nn, inv);
>
> i.e. values in np get overwritten to store the remainder. If we take this
> single use into account, we can always shift in place...

Note that the remainder is always two limbs, so reusing the qp area
would be more natural (but unlike np, qp == NULL is allowed). So I'm
not sure what the nicest way to do this is. One could also consider
moving the shifting to the caller, mpn_div_qr_preinv, which is
specified to store the remainder in the np area and is otherwise free
to destroy it.

Maybe the patch below is a reasonable way to do it (note that the
#if:ed-out mpn_div_qr_2 can't easily be supported with this interface,
so the patch simply deletes it).
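
To make the new convention concrete: for dn == 2 the remainder ends up
in np[0] and np[1], already shifted back, so mpn_div_qr_preinv keeps
its property of leaving the remainder in the np area for all divisor
sizes. A hypothetical caller would look like the snippet below (these
helpers are static, so it would have to live in mini-gmp.c or in a
test compiled together with it; I'm assuming the existing argument
order (qp, np, nn, dp, dn, inv) for mpn_div_qr_preinv):

/* Hypothetical illustration of the dn == 2 path, for a 3-limb
   numerator and a 2-limb divisor. */
static void
example_div_qr_2 (void)
{
  mp_limb_t np[3] = { 5, 7, 9 };  /* numerator, least significant limb first */
  mp_limb_t dp[2] = { 123, 45 };  /* divisor; the top limb 45 is not
                                     normalized, so inv.shift > 0 and the
                                     in-place shift path is exercised */
  mp_limb_t qp[2];                /* nn - dn + 1 = 2 quotient limbs */
  struct gmp_div_inverse inv;

  mpn_div_qr_invert (&inv, dp, 2);
  mpn_div_qr_preinv (qp, np, 3, dp, 2, &inv);

  /* The quotient is now in qp[0..1]; the two remainder limbs are in
     np[0..1], already shifted back, with no separate rp buffer and no
     temporary allocation in the dn == 2 path. */
}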

Regards,
/Niels

diff -r 164971d5c8d0 mini-gmp/mini-gmp.c
--- a/mini-gmp/mini-gmp.c	Sat Feb 10 18:05:32 2018 +0100
+++ b/mini-gmp/mini-gmp.c	Sat Feb 10 18:08:22 2018 +0100
@@ -985,13 +985,12 @@ mpn_div_qr_1 (mp_ptr qp, mp_srcptr np, m
 }
 
 static void
-mpn_div_qr_2_preinv (mp_ptr qp, mp_ptr rp, mp_srcptr np, mp_size_t nn,
+mpn_div_qr_2_preinv (mp_ptr qp, mp_ptr np, mp_size_t nn,
 		     const struct gmp_div_inverse *inv)
 {
   unsigned shift;
   mp_size_t i;
   mp_limb_t d1, d0, di, r1, r0;
-  mp_ptr tp;
 
   assert (nn >= 2);
   shift = inv->shift;
@@ -1000,11 +999,7 @@ mpn_div_qr_2_preinv (mp_ptr qp, mp_ptr r
   di = inv->di;
 
   if (shift > 0)
-    {
-      tp = gmp_xalloc_limbs (nn);
-      r1 = mpn_lshift (tp, np, nn, shift);
-      np = tp;
-    }
+    r1 = mpn_lshift (np, np, nn, shift);
   else
     r1 = 0;
 
@@ -1027,27 +1022,12 @@ mpn_div_qr_2_preinv (mp_ptr qp, mp_ptr r
       assert ((r0 << (GMP_LIMB_BITS - shift)) == 0);
       r0 = (r0 >> shift) | (r1 << (GMP_LIMB_BITS - shift));
       r1 >>= shift;
-
-      gmp_free (tp);
     }
 
-  rp[1] = r1;
-  rp[0] = r0;
+  np[1] = r1;
+  np[0] = r0;
 }
 
-#if 0
-static void
-mpn_div_qr_2 (mp_ptr qp, mp_ptr rp, mp_srcptr np, mp_size_t nn,
-	      mp_limb_t d1, mp_limb_t d0)
-{
-  struct gmp_div_inverse inv;
-  assert (nn >= 2);
-
-  mpn_div_qr_2_invert (&inv, d1, d0);
-  mpn_div_qr_2_preinv (qp, rp, np, nn, &inv);
-}
-#endif
-
 static void
 mpn_div_qr_pi1 (mp_ptr qp,
 		mp_ptr np, mp_size_t nn, mp_limb_t n1,
@@ -1122,7 +1102,7 @@ mpn_div_qr_preinv (mp_ptr qp, mp_ptr np,
   if (dn == 1)
     np[0] = mpn_div_qr_1_preinv (qp, np, nn, inv);
   else if (dn == 2)
-    mpn_div_qr_2_preinv (qp, np, np, nn, inv);
+    mpn_div_qr_2_preinv (qp, np, nn, inv);
   else
     {
       mp_limb_t nh;

-- 
Niels Möller. PGP-encrypted email is preferred. Keyid 368C6677.
Internet email is subject to wholesale government surveillance.

