Side-channel silent modular inverse
Niels Möller
nisse at lysator.liu.se
Thu Dec 26 21:38:24 UTC 2013
Torbjörn and I have been talking about maybe adding more side-channel
silent functions for the next release, in the not very distant future.
Here's a sketch for the side-channel silent modular invert function
(untested, but adapted from a working function I wrote for Nettle last
spring).
As you can see, it depends on a couple of other functions,
mpn_sec_add_1, mpn_cnd_neg, mpn_cnd_swap, mpn_sec_eq_ui, which would
probably have to be written in assembly to ensure that they avoid
operations with branches or data-dependent timing.
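For reference, the conditional add/subtract primitives can be modelled on plain 64-bit words like this. This is only an illustrative sketch of mine, not GMP code (the names and the five-operand convention are assumptions): the condition is expanded into an all-zeros or all-ones mask, so no branch and no memory access depends on cnd or on the data.

```c
#include <stdint.h>
#include <stddef.h>

/* Model of a conditional add: if cnd is nonzero, rp[] = ap[] + bp[]
   and the carry out is returned; if cnd is zero, rp[] = ap[] and 0 is
   returned.  The mask makes both paths execute the same instructions. */
uint64_t
cnd_add_n (int cnd, uint64_t *rp, const uint64_t *ap,
	   const uint64_t *bp, size_t n)
{
  uint64_t mask = - (uint64_t) (cnd != 0);
  uint64_t cy = 0;
  for (size_t i = 0; i < n; i++)
    {
      uint64_t b = bp[i] & mask;	/* b or 0, depending on cnd */
      uint64_t s = ap[i] + b;
      uint64_t c1 = s < b;		/* carry from a + b */
      uint64_t r = s + cy;
      uint64_t c2 = r < cy;		/* carry from adding cy in */
      rp[i] = r;
      cy = c1 | c2;
    }
  return cy;
}

/* Model of a conditional subtract; returns the borrow out. */
uint64_t
cnd_sub_n (int cnd, uint64_t *rp, const uint64_t *ap,
	   const uint64_t *bp, size_t n)
{
  uint64_t mask = - (uint64_t) (cnd != 0);
  uint64_t bw = 0;
  for (size_t i = 0; i < n; i++)
    {
      uint64_t b = bp[i] & mask;
      uint64_t d = ap[i] - b;
      uint64_t b1 = d > ap[i];		/* borrow from a - b */
      uint64_t r = d - bw;
      uint64_t b2 = r > d;		/* borrow from subtracting bw */
      rp[i] = r;
      bw = b1 | b2;
    }
  return bw;
}
```

An assembly version would additionally have to guarantee that the compiler or CPU doesn't reintroduce a branch, which is the reason these belong in .asm files.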
This version returns a success/failure indication, with failure if gcd
(a, m) != 1. I'm not sure whether that is an important feature, but on
the mpz level I think it would be nice to have an mpz_sec_invert with
an interface similar to mpz_invert.
The actual algorithm is novel. I did some searching before I came up
with this method almost a year ago, and I didn't find anything. It's
based on similar ideas as gmp's gcd_1, with additional logic arranged
so that it divides out only a single factor of two in each iteration
(and not a varying *power* of two, as in the standard binary gcd
algorithm).
There's now one paper referencing the algorithm,
http://link.springer.com/chapter/10.1007/978-3-642-40588-4_10 (preprint:
http://conradoplg.cryptoland.net/files/2010/12/mocrysen13.pdf)
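To make the single-shift idea concrete, here's a word-sized sketch of mine (not code from GMP or the paper): b starts odd, exactly one factor of two is divided out of a per iteration, and the loop runs a fixed number of times, so the work is independent of the inputs.

```c
#include <stdint.h>

/* Single-bit-per-step binary gcd on one 64-bit word.  b must be odd.
   After bit_size >= (bits in a) + (bits in b) iterations, a has
   reached 0 and b holds gcd (a, b); further iterations are no-ops. */
static uint64_t
gcd_single_shift (uint64_t a, uint64_t b, unsigned bit_size)
{
  while (bit_size-- > 0)
    {
      uint64_t odd = - (a & 1);		  /* all ones iff a is odd */
      uint64_t d = a - (b & odd);	  /* a -= b, only if a odd */
      uint64_t swap = - (uint64_t) (d > a); /* all ones on borrow (b > a) */
      b += d & swap;			  /* on borrow: new b = old a */
      a = (d ^ swap) - swap;		  /* on borrow: a = b - a */
      a >>= 1;				  /* a is now even; divide out one 2 */
    }
  return b;
}
```

Each iteration either halves a (even case), or replaces a, b by (a-b)/2, b or (b-a)/2, a (odd cases), so the sum of the bit sizes drops by at least one per step while a > 0, exactly as in the comment in the code below.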
And it's some 10 times slower than mpn_gcdext, which uses Lehmer's
algorithm.
Question is, can we get this, and related functions, in release-shape
including tests and documentation, in a couple of weeks?
Regards,
/Niels
-------------- next part --------------
/* mpn_sec_minvert
Contributed to the GNU project by Niels Möller
Copyright 2013 Free Software Foundation, Inc.
This file is part of the GNU MP Library.
The GNU MP Library is free software; you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation; either version 3 of the License, or (at your
option) any later version.
The GNU MP Library is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public
License for more details.
You should have received a copy of the GNU Lesser General Public License
along with the GNU MP Library. If not, see https://www.gnu.org/licenses/. */
#include "gmp.h"
#include "gmp-impl.h"
static mp_limb_t
mpn_sec_add_1 (mp_limb_t *rp, mp_limb_t *ap, mp_size_t n, mp_limb_t b)
{
  mp_size_t i;
  for (i = 0; i < n; i++)
    {
      mp_limb_t r = ap[i] + b;
      b = (r < b);
      rp[i] = r;
    }
  return b;
}
static void
mpn_cnd_neg (int cnd, mp_limb_t *rp, const mp_limb_t *ap, mp_size_t n)
{
  mp_limb_t cy = (cnd != 0);
  mp_limb_t mask = -cy;
  mp_size_t i;
  for (i = 0; i < n; i++)
    {
      mp_limb_t r = (ap[i] ^ mask) + cy;
      cy = r < cy;
      rp[i] = r;
    }
}
static void
mpn_cnd_swap (int cnd, mp_limb_t *ap, mp_limb_t *bp, mp_size_t n)
{
  mp_limb_t mask = - (mp_limb_t) (cnd != 0);
  mp_size_t i;
  for (i = 0; i < n; i++)
    {
      mp_limb_t a, b, t;
      a = ap[i];
      b = bp[i];
      t = (a ^ b) & mask;
      ap[i] = a ^ t;
      bp[i] = b ^ t;
    }
}
static int
mpn_sec_eq_ui (mp_srcptr ap, mp_size_t n, mp_limb_t b)
{
  mp_limb_t d;
  ASSERT (n > 0);
  d = ap[0] ^ b;
  while (--n > 0)
    d |= ap[n];
  return d == 0;
}
mp_size_t
mpn_sec_minvert_itch (mp_size_t n)
{
  return 3*n;
}
/* Compute V <-- A^{-1} (mod M), in data-independent time.  M must be
   odd.  Returns 1 on success, and 0 on failure (i.e., if gcd (A, M)
   != 1).  Inputs and outputs are of size n, and no overlap is
   allowed.  The {ap, n} area is destroyed.  For arbitrary inputs,
   bit_size should be 2*n*GMP_NUMB_BITS, but if A or M is known to be
   smaller, e.g., if M = 2^521 - 1 and A < M, bit_size can be any
   bound on the sum of the bit sizes of A and M.  */
int
mpn_sec_minvert (mp_ptr vp, mp_ptr ap, mp_srcptr mp,
		 mp_size_t n, mp_bitcnt_t bit_size,
		 mp_ptr scratch)
{
  ASSERT (n > 0);
  ASSERT (bit_size > 0);
  ASSERT (mp[0] & 1);
  ASSERT (! MPN_OVERLAP_P (ap, n, vp, n));
#define bp scratch
#define up (scratch + n)
#define m1hp (scratch + 2*n)
  /* Maintain

       a = u * orig_a (mod m)
       b = v * orig_a (mod m)

     and b odd at all times.  Initially,

       a = a_orig, u = 1
       b = m, v = 0
  */
  up[0] = 1;
  mpn_zero (up + 1, n - 1);
  mpn_copyi (bp, mp, n);
  mpn_zero (vp, n);

  ASSERT_CARRY (mpn_rshift (m1hp, mp, n, 1));
  ASSERT_NOCARRY (mpn_sec_add_1 (m1hp, m1hp, n, 1));
  while (bit_size-- > 0)
    {
      mp_limb_t odd, swap, cy;

      /* Always maintain b odd.  The logic of the iteration is as
	 follows.  For a, b:

	   odd = a & 1
	   a -= odd * b
	   if (underflow from a - b)
	     {
	       b += a, assigns old a
	       a = B^n - a
	     }
	   a /= 2

	 For u, v:

	   if (underflow from a - b)
	     swap u, v
	   u -= odd * v
	   if (underflow from u - v)
	     u += m
	   u /= 2
	   if (a one bit was shifted out)
	     u += (m+1)/2

	 As long as a > 0, the quantity

	   (bit size of a) + (bit size of b)

	 is reduced by at least one bit per iteration, hence after
	 (bit size of orig_a) + (bit size of m) - 1 iterations we
	 surely have a = 0.  Then b = gcd (orig_a, m), and if b = 1
	 then also v = orig_a^{-1} (mod m).  */
      ASSERT (bp[0] & 1);
      odd = ap[0] & 1;

      swap = mpn_cnd_sub_n (odd, ap, bp, n);
      mpn_cnd_add_n (swap, bp, ap, n);
      mpn_cnd_neg (swap, ap, ap, n);

      mpn_cnd_swap (swap, up, vp, n);
      cy = mpn_cnd_sub_n (odd, up, vp, n);
      cy -= mpn_cnd_add_n (cy, up, mp, n);
      ASSERT (cy == 0);

      cy = mpn_rshift (ap, ap, n, 1);
      ASSERT (cy == 0);
      cy = mpn_rshift (up, up, n, 1);
      cy = mpn_cnd_add_n (cy, up, m1hp, n);
      ASSERT (cy == 0);
    }
  /* Should be all zeros, but check only the extreme limbs.  */
  ASSERT ((ap[0] | ap[n-1]) == 0);
  /* Check if indeed gcd == 1.  */
  return mpn_sec_eq_ui (bp, n, 1);
#undef bp
#undef up
#undef m1hp
}
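As a sanity check of the loop logic above, here is a word-sized model of mine (again, not GMP code): it tracks u and v exactly as the comment in the loop describes, with every condition applied through a mask rather than a branch (except for the final return, which corresponds to the mpn_sec_eq_ui test).

```c
#include <stdint.h>

/* Compute a^{-1} mod m for odd m, on one 64-bit word, in a fixed
   number of iterations.  Returns 0 if gcd (a, m) != 1.  bit_size
   must be at least (bits in a) + (bits in m). */
static uint64_t
minvert64 (uint64_t a, uint64_t m, unsigned bit_size)
{
  uint64_t b = m, u = 1, v = 0;
  uint64_t m1h = (m >> 1) + 1;		    /* (m+1)/2, since m is odd */

  while (bit_size-- > 0)
    {
      uint64_t odd = - (a & 1);		    /* all ones iff a is odd */
      uint64_t d = a - (b & odd);	    /* a -= b, only if a odd */
      uint64_t swap = - (uint64_t) (d > a); /* all ones on borrow */
      uint64_t t, du, cy, lo;

      b += d & swap;			    /* on borrow: new b = old a */
      a = ((d ^ swap) - swap) >> 1;	    /* |a - b| / 2 */

      t = (u ^ v) & swap;		    /* conditionally swap u, v */
      u ^= t;
      v ^= t;
      du = u - (v & odd);		    /* u -= v, only if a was odd */
      cy = - (uint64_t) (du > u);
      u = du + (m & cy);		    /* fix negative u by adding m */
      lo = u & 1;
      u = (u >> 1) + (m1h & -lo);	    /* u /= 2 (mod m) */
    }
  return b == 1 ? v : 0;
}
```

On success the result v satisfies v * a == 1 (mod m); once a reaches 0, further iterations leave b and v unchanged, so an over-estimated bit_size is harmless, which is what lets the mpn routine run a fixed iteration count.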
-------------- next part --------------
--
Niels Möller. PGP-encrypted email is preferred. Keyid C0B98E26.
Internet email is subject to wholesale government surveillance.