Floating point design

delta trinity deltatrinity@hotmail.com
Fri, 27 Jun 2003 12:28:09 -0400

Just a thoughtful opinion!

>From: LingWitt@insightbb.com
>Make it an option.
>On Thursday, Jun 26, 2003, at 18:45 America/New_York, Kevin Ryde wrote:
>>>Its seems like the floating point should be defined by 3
>>>multi-precision integers for the integer portion, decimal portion,
>>There's never any need to represent those separately, a single value
>Even so, having the dynamic growth of the integer would be fantastic.
Actually, having separate structs for the integer and decimal parts would probably 
make it slower.  For example, to multiply by x, you would need to multiply 
the fraction by x, multiply the integer by x, take the fraction's overflow and 
add it to the integer, and remove that overflow from the decimal part.  This is 
easy when x is an integer but gets more complicated when x is a float.
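To illustrate the carry shuffling, here is a toy sketch in Python (my own invention, using a scaled-integer fraction rather than GMP's actual types; `SCALE` and `mul_by_int` are hypothetical names):

```python
# Toy value with separate integer and fractional parts; the fraction
# is stored as an integer scaled by SCALE.
SCALE = 10**9

def mul_by_int(int_part, frac_part, x):
    # Multiply both parts by x, then move the fraction's overflow
    # (the carry) into the integer part.
    carry, frac = divmod(frac_part * x, SCALE)
    return int_part * x + carry, frac

print(mul_by_int(3, 250_000_000, 6))   # 3.25 * 6 = 19.5 -> (19, 500000000)
```

Even this simple integer case needs the carry step; with a float multiplier, both parts would spill into each other in both directions.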

Normally, the way floating point works, you have the number represented as a 
single stream of bits, with only one bit on the left side of the radix 
point.  It's like having 34567 represented as 3.4567E4 (that is, 3.4567 x 
10^4), except in base 2.  Also, since the value is represented in bits, the 
digit on the left side of the radix point is always 1 (except, that is, for 
the number '0').  So many floating point systems (IEEE, GMP?) simply omit 
(remove) it from the representation, to save a bit.
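You can see the omitted leading bit in the standard IEEE 754 double format; here is a Python sketch (the field layout is IEEE's, the `decompose` helper is mine):

```python
import struct

def decompose(x):
    # Reinterpret the 64-bit double as an unsigned integer.
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    sign = bits >> 63
    exponent = ((bits >> 52) & 0x7FF) - 1023   # remove the exponent bias
    fraction = bits & ((1 << 52) - 1)          # the 52 stored mantissa bits
    # For normal numbers the leading 1 is implicit; put it back.
    mantissa = (1 << 52) | fraction
    return sign, exponent, mantissa

s, e, m = decompose(0.75)                      # 0.75 = 1.1 (binary) x 2^-1
print(s, e, m * 2.0**(e - 52))                 # reconstructs 0.75
```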

So, once you have a floating point number represented as a stream of bits like 
that, doing operations is very easy.  For multiplication, you simply multiply 
the mantissas together and add the exponents together.  For addition, you first 
shift the smaller number's mantissa bits to the right by the difference 
between the two exponents (e.g. if one exponent is 13 and the other is 10, 
you shift-right the smaller number's mantissa bits by 3, so the smaller 
number's exponent is now 13), then add the mantissas together, and adjust the 
exponent by 1 if needed.  All this while accounting for the '1' that was 
removed (remember the note earlier).
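As a sketch of those two operations (Python, values given as (mantissa, exponent) pairs meaning mantissa x 2^exponent; this ignores normalization, rounding, and the hidden bit):

```python
def float_mul(ma, ea, mb, eb):
    # Multiply the mantissas, add the exponents.
    return ma * mb, ea + eb

def float_add(ma, ea, mb, eb):
    if ea < eb:                    # make (ma, ea) the larger exponent
        ma, ea, mb, eb = mb, eb, ma, ea
    mb >>= ea - eb                 # align: shift the smaller number right
    return ma + mb, ea             # (this shift is where precision is lost)

print(float_mul(3, 4, 5, 2))       # 3*2^4 * 5*2^2 = 15*2^6 -> (15, 6)
print(float_add(1, 13, 8, 10))     # exponents 13 and 10: shift right by 3
```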

It might sound a bit complicated, but it's much easier than accounting for 
separate integer and decimal parts.

>>>and exponent
>>We believe there's no need for a multi-precision exponent, it would
>>only be a slowdown.
>Your beliefs are inconsequential. Your slogan "Arithmetic without 
>limitations" is sorely hurt.
Well, I guess it would be possible to represent an exponent as 
multi-precision.  But is it really practical?  One of the goals is to make 
GMP as fast as possible.  Having to account for a multi-precision exponent 
would probably make GMP slower.  Plus, when would you really need to 
represent such a number?

I could see fractal applications, where you could 'zoom in' indefinitely.  
But there, for the fractal algorithm to show valid results at a very, very 
small exponent, you would need to keep track of a mantissa of very large 
precision somewhere (much larger than practical memory and computation 
speed limits).

For astronomical values?  Well, considering that the estimated number of 
electrons in the universe is 10^79, we can go a long way before needing to 
represent something larger than 2^(2^32).
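For scale, a quick back-of-the-envelope in Python: a single 32-bit exponent already reaches magnitudes around 2^(2^32), which has over a billion decimal digits, against a mere 80 digits for 10^79:

```python
import math

# Decimal digits in 2**(2**32), the rough ceiling of a 32-bit exponent.
digits = int((1 << 32) * math.log10(2)) + 1
print(digits)          # over 10**9 digits, vs. 80 digits for 10**79
```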

>>>since the integer type seems to reallocate itself as necessary
>>We believe there's no call to reallocate floats, an application will
>>want a particular precision and normally after just a few operations
>>the full size will be filled with data.
>Your beliefs are inconsequential. I have data that needs to expand within 
>the limits of the memory.

Expanding a float would probably not be beneficial.  Unless the floating 
point's fractional part is a power of 2 (that is, 0.5 (2^-1), 0.25 (2^-2), 
0.125 (2^-3), ...) or a sum of those powers (e.g. 0.75, that is, 0.5 + 0.25), 
it is represented by an infinite stream of bits (that is, its expansion in 
base 2 is periodic, or the number is irrational).  So, since that number must 
fit in a limited amount of space (limited by the floating point precision), 
such numbers cannot be represented exactly.  Now, this said, expanding that 
number's precision won't make it any more precise, since it will simply be 
zero-padded.  It's like having, in decimal, 3.33333333...  If you represent 
this number as exactly 3.3333 (that is, 5-digit precision), and afterward 
increase the precision to 10 digits, you'll only get 3.333300000.  GMP has no 
way to 'guess' what the other digits are, especially if the number is 
irrational (could it guess that 3.142 should expand to 3.14159?).  So 
expanding floating point numbers can mislead: if, for example, you add the 
10-digit precision '0.000000001' to the 5-digit precision '3.3333', you get 
3.333300001, where the padded zeros look significant but aren't.  Conclusion: 
you generally can't assume the precision really is higher just because you 
expanded it after it was set.
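The decimal analogy can be played out with Python's decimal module (a sketch of the zero-padding effect described above, using decimal rather than GMP):

```python
from decimal import Decimal, getcontext

getcontext().prec = 5
x = Decimal(10) / Decimal(3)       # rounds to 3.3333: the later 3s are gone
getcontext().prec = 10
y = x + Decimal('0.000000001')     # the higher precision can't recover them
print(x, y)                        # 3.3333 3.333300001
```

The result looks 10-digit precise, but the zeros it was padded with carry no information about the original 10/3.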

>>>Also, why aren't the C++ classes implemented as concrete classes?
>>See "C++ Interface Internals" in the manual.  You need to read the
>>manual before asking the list.
>It seems like inline functions would do the trick.
