[rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

Daniel Micay danielmicay at gmail.com
Sat Jan 11 11:43:25 PST 2014


On Sat, Jan 11, 2014 at 1:18 AM, Brian Anderson <banderson at mozilla.com> wrote:
> On 01/10/2014 10:08 PM, Daniel Micay wrote:
>>
>> On Sat, Jan 11, 2014 at 1:05 AM, Huon Wilson <dbau.pp at gmail.com> wrote:
>>>
>>> On 11/01/14 16:58, Isaac Dupree wrote:
>>>>
>>>> Scheme's numeric tower is one of the best in extant languages.  Take a
>>>> look at it.  Of course, its dynamic typing is poorly suited for Rust.
>>>>
>>>> Arbitrary-precision arithmetic can get you mathematically perfect
>>>> integers
>>>> and rational numbers, but not real numbers.  There are an uncountably
>>>> infinite number of real numbers and sophisticated computer algebra
>>>> systems
>>>> are devoted to the problem (or estimates are used, or you become unable to
>>>> compare two real numbers for equality).  The MPFR C library implements
>>>> arbitrarily high precision floating point, but that still has all the
>>>> pitfalls of floating-point that you complain about. For starters, try
>>>> representing sqrt(2) and testing its equality with e^(0.5 ln 2).
>>>>
>>>> In general, Rust is a systems language, so fixed-size integral types are
>>>> important to have.  They are better-behaved than in C and C++ in that
>>>> signed
>>>> types are modulo, not undefined behaviour, on overflow.  It could be
>>>> nice to
>>>> have integral types that are task-failure on overflow as an option too.
>>>
>>>
>>> We do already have some Checked* traits (using the LLVM intrinsics
>>> internally), which let you have task failure as one possibility on
>>> overflow.
>>> e.g. http://static.rust-lang.org/doc/master/std/num/trait.CheckedAdd.html
>>> (and Mul, Sub, Div too).
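(For concreteness, a minimal sketch of what that checked arithmetic looks
like, using the inherent `checked_add` method, which provides the same
operation as the CheckedAdd trait linked above:)

```rust
fn main() {
    // checked_add returns None instead of wrapping when the result
    // doesn't fit in the type
    let a: u8 = 200;
    assert_eq!(a.checked_add(50), Some(250));
    assert_eq!(a.checked_add(100), None); // 300 overflows u8

    // unwrap() converts the None case into a panic, i.e. task failure
    assert_eq!(a.checked_add(50).unwrap(), 250);
}
```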
>>
>> I don't think failure on overflow is very useful. It's still a bug if
>> you overflow when you don't intend it. If we did have a fast big
>> integer type, it would make sense to wrap it in an enum with separate
>> variants for small and large integers, branching on the overflow flag
>> to promote to a big integer. I think this is how Python's integers
>> are implemented.
>
>
> I do think it's useful and is potentially a good compromise for the
> performance of the default integer type. Overflow with failure is a bug that
> tells you there's a bug. Wrapping is a bug that pretends it's not a bug.

This is why `clang` exposes sanitize options for integer overflow. You
could use `-ftrapv` in production... but why bother using C if you're
taking a significant performance hit like that?
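(As a point of comparison, Rust lets callers read the per-operation
overflow flag directly and choose when to pay for the branch, rather
than turning it on globally the way `-ftrapv` does; a sketch using
`overflowing_add`:)

```rust
fn main() {
    // overflowing_add returns the wrapped result plus a flag telling
    // you whether the true result was out of range
    let (wrapped, overflowed) = i32::MAX.overflowing_add(1);
    assert_eq!(wrapped, i32::MIN);
    assert!(overflowed);

    // no overflow: the flag is false and the value is exact
    assert_eq!(1i32.overflowing_add(2), (3, false));
}
```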

> Hitting a slow path unexpectedly on overflow seems to me like a recipe for
> unpredictable performance, which doesn't seem in line with Rust's usual
> goals.

It's certainly better than the process exiting, which is what's going
to happen in real systems when failure occurs. Either that, or they're
going to lose a bunch of data from the task it caused to unwind. The
only way to make overflow not a bug is to expand to a big integer or
use a big integer from the start.
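(The small-to-big promotion described above could be sketched like this.
The `Int` type and its `add` method are hypothetical, and `i128` stands
in for a real arbitrary-precision type:)

```rust
// Hypothetical tagged integer: stays Small until an operation
// overflows, then promotes to Big, much like CPython's int.
#[derive(Debug, PartialEq)]
enum Int {
    Small(i64),
    Big(i128), // stand-in for an arbitrary-precision integer
}

impl Int {
    fn add(self, rhs: i64) -> Int {
        match self {
            // Branch on the overflow flag: the fast path stays Small,
            // the slow path widens to the big representation.
            Int::Small(a) => match a.checked_add(rhs) {
                Some(v) => Int::Small(v),
                None => Int::Big(a as i128 + rhs as i128),
            },
            Int::Big(a) => Int::Big(a + rhs as i128),
        }
    }
}

fn main() {
    assert_eq!(Int::Small(1).add(2), Int::Small(3));
    assert_eq!(
        Int::Small(i64::MAX).add(1),
        Int::Big(i64::MAX as i128 + 1)
    );
}
```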
