mpn_sec_div_r
Torbjörn Granlund
tg at gmplib.org
Sun Nov 25 19:19:34 UTC 2018
nisse at lysator.liu.se (Niels Möller) writes:
==28982== Conditional jump or move depends on uninitialised value(s)
==28982== at 0x493A982: __gmpn_sec_div_r (in /usr/lib/x86_64-linux-gnu/libgmp.so.10.3.2)
==28982== Use of uninitialised value of size 8
==28982== at 0x493C07E: __gmpn_invert_limb (in /usr/lib/x86_64-linux-gnu/libgmp.so.10.3.2)
==28982== by 0x493AA20: __gmpn_sec_div_r (in /usr/lib/x86_64-linux-gnu/libgmp.so.10.3.2)
I think it's all about the high end of the divisor. In
mpn/generic/sec_div.c, we have
  d1 = dp[dn - 1];
  count_leading_zeros (cnt, d1);
  if (cnt != 0)
which is a branch depending on the most significant bit of d.
Which is fine in itself. We do NOT try to hide the number of bits in
operands.
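(For what it's worth, the count itself can also be computed with no
data-dependent branches or memory accesses. A branch-free sketch for a
nonzero 64-bit limb -- a hypothetical helper, not GMP's
count_leading_zeros macro, which is platform-specific and usually a
single instruction:)

```c
#include <assert.h>
#include <stdint.h>

/* Branch-free count-leading-zeros for a nonzero 64-bit limb: a binary
   search over halves, each step expressed as arithmetic on a 0/1
   comparison result instead of a branch.  Sketch only; returns 63 for
   x == 0, so the caller must guarantee a nonzero input.  */
static unsigned
ct_clz64 (uint64_t x)
{
  unsigned n = 0, s;
  s = ((x >> 32) == 0) << 5;  n += s;  x <<= s;
  s = ((x >> 48) == 0) << 4;  n += s;  x <<= s;
  s = ((x >> 56) == 0) << 3;  n += s;  x <<= s;
  s = ((x >> 60) == 0) << 2;  n += s;  x <<= s;
  s = ((x >> 62) == 0) << 1;  n += s;  x <<= s;
  s = ((x >> 63) == 0);       n += s;
  return n;
}
```

In sec_div.c the top limb is the divisor's high end, so the interesting
inputs here are nonzero anyway.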
I don't follow why this would cause use of uninitialised data. Perhaps
you call the function with uninitialised data? ;-)
And we also call invert_limb, where implementations typically start with
a table lookup on the most significant bits, e.g., x86_64/invert_limb.asm:
	mov	%rdi, %rax
	shr	$55, %rax
ifdef(`PIC',`
ifdef(`DARWIN',`
	mov	mpn_invert_limb_table@GOTPCREL(%rip), %r8
	add	$-512, %r8
',`
	lea	-512+mpn_invert_limb_table(%rip), %r8
')',`
	movabs	$-512+mpn_invert_limb_table, %r8
')
	movzwl	(%r8,%rax,2), R32(%rcx)	C %rcx = v0
Not sure what to do about it, but it would be desirable if mpn_sec_div*
functions couldn't leak any of the input bits.
That use of mpn_invert_limb_table is a serious oversight. One needs to
start with a single-bit value instead. (I'm sure we could do some
simple logical operation on the low bits and get at least two bits.)
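(A sketch of that idea, shown at 32-bit limb size so plain 64/128-bit C
arithmetic suffices. For a normalized divisor d, bit 31 set, it computes
v = floor((2^64 - 1)/d) - 2^32, the 32-bit analogue of what
mpn_invert_limb returns. The Newton seed is the constant 1.5, correct to
about one bit, so no memory access depends on d; each iteration roughly
doubles the number of correct bits, and a fixed number of conditional
adjustments makes the result exact. Function name and structure are
illustrative, not GMP's:)

```c
#include <assert.h>
#include <stdint.h>

/* Table-free reciprocal sketch: Newton's iteration from a
   data-independent seed instead of mpn_invert_limb_table.  */
static uint32_t
invert_limb_32_newton (uint32_t d)
{
  uint64_t x = (uint64_t) d << 32;  /* x = d/2^32 in Q0.64, in [1/2, 1)     */
  uint64_t y = (uint64_t) 3 << 61;  /* y = 1.5 in Q2.62, constant seed      */
  int i;

  for (i = 0; i < 6; i++)           /* y <- y*(2 - x*y), all in fixed point */
    {
      uint64_t t = (uint64_t) (((unsigned __int128) x * y) >> 64); /* x*y, Q2.62 */
      uint64_t e = ((uint64_t) 1 << 63) - t;                       /* 2 - x*y    */
      y = (uint64_t) (((unsigned __int128) y * e) >> 62);
    }

  /* q ~= 2^64/d is now within a few ulps of floor((2^64-1)/d); finish
     with a fixed number of conditional steps (no data-dependent loop
     bounds).  */
  uint64_t q = y >> 30;
  for (i = 0; i < 4; i++)
    q += (uint64_t) ((unsigned __int128) (q + 1) * d <= UINT64_MAX);
  for (i = 0; i < 4; i++)
    q -= (uint64_t) ((unsigned __int128) q * d > UINT64_MAX);

  return (uint32_t) (q - ((uint64_t) 1 << 32));
}
```

A 64-bit version needs the same idea at twice the precision, and costs a
couple more iterations than the table-seeded code, but the point is that
the seed and every memory access are independent of d.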
--
Torbjörn
Please encrypt, key id 0xC8601622