OT: PHP 32 bit numbers security issue
Uri Even-Chen
uri at speedy.net
Thu Jan 6 11:05:04 IST 2011
On Thu, Jan 6, 2011 at 00:31, Nadav Har'El <nyh at math.technion.ac.il> wrote:
> It is pointless to make such generalizations, that speed of numeric
> calculation is no longer important. Many applications, including video
> encoding/decoding, games, and much more, basically do calculations in a
> tight loop, and they simply don't need 1024 bits (let alone unlimited)
> precision. They want to have a certain precision, and perform calculations
> fast and with low energy requirements.
>
> What I do agree with you, though, is that there is no longer a reason why
> modern languages should not have built-in unlimited precision integers
> ("bigints") as an *option* in addition to the regular faster types like
> "int", "long", etc. Once upon a time, adding a "bigint" library to a
> language meant that the compiler was bigger, the library was bigger, and
> the book was thicker. With today's gigantic software, nobody cares about
> these things any more.
>
> The question of unlimited precision real numbers (aka "floating point")
> is more complicated - how will you represent simple fractions (like 1/3)
> whose base-10 expansion is infinite? What will you do about results of
> math functions (e.g., log(), sin(), sqrt() etc.) that are irrational?
> Will your number system also start supporting simple fractions and symbolic
> formulas to retain the perfect precision for as long as possible? Pretty
> soon you'll end up with Mathematica (http://en.wikipedia.org/wiki/Mathematica).
> I think a more sensible approach for real numbers is something like what
> "bc" does, i.e., support an arbitrary, but pre-determined, precision.
I think letting the user define the number of bits used for the
mantissa and the exponent is a good start. If I need 256 bits of
precision, that would be fine, but specifying 1,000,000 bits should
work too (even if it's slower). By the way, I read that someone
calculated the first trillion (1,000,000,000,000) decimal digits of
the number e [http://www.numberworld.org/digits/E/] and 5 trillion
digits of pi [http://www.numberworld.org/misc_runs/pi-5t/details.html].
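As a small sketch of this idea, Python's standard "decimal" module
already lets the programmer choose the working precision at run time
(in decimal digits rather than bits, but the principle is the same):

```python
from decimal import Decimal, getcontext

# User-chosen precision: 50 significant decimal digits.
# Raising this to thousands of digits works too, just more slowly.
getcontext().prec = 50

# Compute sqrt(2) to the chosen precision.
root2 = Decimal(2).sqrt()
print(root2)  # sqrt(2) printed to 50 significant digits

# Change the precision and the same computation adapts.
getcontext().prec = 10
print(Decimal(2).sqrt())
```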
If there is a special need for 100% accuracy, rational numbers can be
represented as a fraction of two integers: for example, 1/3 will be
represented as 1/3 and 0.1 as 1/10. All the basic arithmetic
operations and comparisons can be carried out exactly in this rational
representation.
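This exact-fraction representation is available today, for example, in
Python's standard "fractions" module; a minimal illustration:

```python
from fractions import Fraction

# 0.1 is stored exactly as the ratio 1/10, and 1/3 as 1/3.
a = Fraction(1, 10)
b = Fraction(1, 3)

# Arithmetic stays exact: 1/10 + 1/3 = 13/30, with no rounding error.
print(a + b)  # 13/30

# Comparisons are exact too, unlike binary floating point:
print(Fraction(1, 10) * 3 == Fraction(3, 10))  # True
print(0.1 * 3 == 0.3)                          # False
```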
If you need to calculate an irrational function, there is no choice
but to specify the precision you need. In that case you can use
floating point, or the rational representation - but the result will
not be 100% accurate. Even when using floating point, there are ways
to prevent results such as 2.00000000000000001 from appearing. This
happened to me, for example, in Java (using Eclipse) when I added
decimal numbers such as 0.01, which have no finite binary
representation in floating point. A good compiler can prevent this by
rounding numbers to fewer digits than the precision at which they are
stored.
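The same effect is easy to reproduce in any language with IEEE 754
doubles; a short Python sketch of both the artifact and the rounding
workaround described above:

```python
# Summing 0.01 one hundred times does not give exactly 1.0, because
# 0.01 has no finite binary representation in floating point.
total = sum(0.01 for _ in range(100))
print(total)         # slightly off from 1.0
print(total == 1.0)  # False

# Rounding to fewer digits than the ~15-17 significant decimal digits
# a double actually carries hides the representation error:
print(round(total, 10))         # 1.0
print(round(total, 10) == 1.0)  # True
```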
By the way, although I know hardware can be used for floating-point
operations, I would prefer to do them in software - because of the
flexibility to let the user or programmer define the number of bits
for the mantissa and the exponent. I think integer operations are
enough for hardware - the rest can be done in software.
Uri Even-Chen
Mobile Phone: +972-50-9007559
E-mail: uri at speedy.net
Website: http://www.speedy.net/
More information about the Linux-il mailing list