[rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

Daniel Micay danielmicay at gmail.com
Sat Jan 11 15:14:50 PST 2014


On Sat, Jan 11, 2014 at 6:06 PM, Nathan Myers <ncm at cantrip.org> wrote:
> On 01/10/2014 10:08 PM, Daniel Micay wrote:
>>
>> I don't think failure on overflow is very useful. It's still a bug if
>> you overflow when you don't intend it. If we did have a fast big
>> integer type, it would make sense to wrap it with an enum heading down
>> a separate branch for small and large integers, and branching on the
>> overflow flag to expand to a big integer. I think this is how Python's
>> integers are implemented.
>
> Failure on overflow *can* be useful in production code, using
> tasks to encapsulate suspect computations.  Big-integer types
> can be useful, too.  A big-integer type that uses small-integer
> arithmetic until overflow is a clever trick, but it's purely
> an implementation trick.  Architecturally, it makes no sense
> to expose the trick to users.
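[The "failure on overflow ... using tasks" idea above can be sketched in modern Rust; this is a hedged illustration, not the post's actual proposal, and `std::thread::spawn` here stands in for the tasks of 2014-era Rust. The overflow panics inside the spawned task, and the parent observes the failure through the join handle instead of crashing.]

```rust
use std::thread;

fn main() {
    // Encapsulate a suspect computation in its own task.
    let handle = thread::spawn(|| {
        let x: i32 = i32::MAX;
        // checked_add makes the overflow explicit; expect turns it
        // into a panic confined to this task.
        x.checked_add(1).expect("integer overflow")
    });
    // The task failed; the parent program did not.
    assert!(handle.join().is_err());
}
```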

I didn't suggest exposing it to users. I suggested defining a wrapper
around the big integer type with better performance characteristics
for small integers.
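[A minimal sketch of the wrapper being described: an enum with a small-integer fast path that promotes to a big integer on overflow. The names are hypothetical, and `i128` merely stands in for a true arbitrary-precision type, which Rust's standard library does not provide.]

```rust
// Hybrid integer: stays in a machine word until overflow, then promotes.
#[derive(Debug, PartialEq)]
enum Int {
    Small(i64),
    Big(i128), // placeholder for a real big-integer representation
}

impl Int {
    fn add(self, other: Int) -> Int {
        match (self, other) {
            (Int::Small(a), Int::Small(b)) => match a.checked_add(b) {
                // Fast path: the sum still fits in a machine word.
                Some(sum) => Int::Small(sum),
                // Overflow flagged: promote both operands and retry.
                None => Int::Big(a as i128 + b as i128),
            },
            (Int::Small(a), Int::Big(b)) | (Int::Big(b), Int::Small(a)) => {
                Int::Big(a as i128 + b)
            }
            (Int::Big(a), Int::Big(b)) => Int::Big(a + b),
        }
    }
}

fn main() {
    assert_eq!(Int::Small(1).add(Int::Small(2)), Int::Small(3));
    // Promotion happens only when the small representation overflows.
    assert_eq!(
        Int::Small(i64::MAX).add(Int::Small(1)),
        Int::Big(i64::MAX as i128 + 1)
    );
}
```

This is essentially the CPython trick mentioned above, with the small/large split hidden behind one type rather than exposed to users.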

> The fundamental error in the original posting was saying machine
> word types are somehow not "CORRECT".  Such types have perfectly
> defined behavior and performance in all conditions. They just
> don't pretend to model what a mathematician calls an "integer".
> They *do* model what actual machines actually do. It makes
> sense to call them something else than "integer", but "i32"
> *is* something else.

Rings, fields and modular arithmetic are certainly very real
mathematical concepts. Unsigned fixed-size integers behave exactly as
a mathematician would model arithmetic modulo 2^n, while signed
fixed-size integers do not really have sane high-level semantics.
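[To make the modular-arithmetic point concrete, here is a small illustration in modern Rust, where wrap-around must be requested explicitly via the `wrapping_*` methods; 2014-era Rust wrapped by default.]

```rust
fn main() {
    // u8 arithmetic is exactly the ring Z/256Z: 200 + 100 ≡ 44 (mod 256).
    let sum = 200u8.wrapping_add(100);
    assert_eq!(sum, 44);
    assert_eq!(sum as u32, (200 + 100) % 256);

    // Two's-complement signed wrap-around has no such clean reading:
    // adding 1 to i8::MAX lands on the most negative value.
    assert_eq!(127i8.wrapping_add(1), -128);
}
```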

> It also makes sense to make a library that tries to emulate
> an actual integer type.  That belongs in a library because it's
> purely a construct: nothing in any physical machine resembles
> an actual integer.  Furthermore, since it is an emulation,
> details vary for practical reasons. No single big-integer or
> overflow-trapping type can meet all needs. (If you try, you
> fail users who need it simple.)  That's OK, because anyone
> can code another, and a simple default can satisfy most users.

What do you mean by default? If you don't know the bounds, a big
integer is clearly the only correct choice. If you do know the bounds,
you can use a fixed-size integer. I don't think any default other than
a big integer is sane, so I don't think Rust needs a default inference
fallback.
