[rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

Gregory Maxwell gmaxwell at gmail.com
Sun Jan 12 17:19:06 PST 2014


On Sat, Jan 11, 2014 at 5:49 PM, Nathan Myers <ncm at cantrip.org> wrote:
> No one would complain about a built-in "i128" type.  The thing
> about a fixed-size type is that there are no implementation
> choices to leak out.  Overflowing an i128 variable is quite
> difficult, and 128-bit operations are still lots faster than on
> any variable-precision type. I could live with "int" == "i128".

It's certainly harder to overflow a 128-bit type by accident, though I
think you overstate it: iterating foo *= foo computes 2^(2^k) when foo
starts at just 2, so the seventh squaring already overflows an i128.
Moreover, overflow that arises by chance is not the only sort programs
have to deal with: malicious parties deliberately trigger overflow to
cause unexpected behavior.
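
(A sketch of just how fast repeated squaring escapes 128 bits, written
against a hypothetical built-in i128 with the usual checked_mul; both
are assumptions here, not something the language has today:)

    // Square repeatedly, stopping at the first overflow. Starting
    // from 2, foo holds 2^(2^k) after k squarings, so this reports
    // overflow on squaring #7.
    fn main() {
        let mut foo: i128 = 2;
        let mut squarings = 0;
        loop {
            match foo.checked_mul(foo) {
                Some(next) => { foo = next; squarings += 1; }
                None => break,
            }
        }
        println!("overflowed i128 on squaring #{}", squarings + 1);
    }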

I think a bigger argument for range-checked types is not how easy the
overflow is to trigger but rather how likely it is that the software
cannot actually handle it when it is triggered: the same problem we
have with null pointers. In C a lot of software is not null safe, it's
difficult to tell at a glance whether any particular function is null
safe, difficult to tell whether a function even needs to be, and
(apparently) difficult for many to keep null safety in mind when
writing software. "Surprise! This value can be null! Hope you weren't
going to dereference it!" is a lot like "Surprise! These integers can
have crazy values! Hope you weren't going to square one of them!"
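
To make the parallel concrete, here is a sketch of how overflow could
be surfaced the same way Rust already surfaces "might be null", via an
Option; the hostile input value is made up purely for illustration:

    // checked_mul returns None on overflow, so "this value might be
    // crazy" shows up in the type, exactly like "this pointer might
    // be null" does with Option.
    fn square(x: i64) -> Option<i64> {
        x.checked_mul(x)
    }

    fn main() {
        // An attacker-supplied value just above sqrt(i64::MAX):
        let hostile: i64 = 3_037_000_500;
        match square(hostile) {
            Some(v) => println!("square = {}", v),
            None => println!("refused: squaring would overflow"),
        }
    }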

[Not that having a 128-bit type wouldn't be great: since x86 exposes
both the lower and upper halves of a multiply, a 128-bit type makes it
easy to get at the full 64x64->128 product without using assembly. I
just don't think adding larger types really makes a meaningful
improvement with respect to safety... and the overhead is equal to or
higher than having a 64-bit type plus a tag that indicates the stored
value is really a pointer to a multiprecision integer.]
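
For what it's worth, a sketch of that widening-multiply trick, again
assuming a hypothetical u128: a compiler can lower the widening
multiply to the single x86-64 MUL that produces both halves.

    // Widening 64x64 -> 128-bit multiply with no assembly: the cast
    // to u128 keeps the full product, and shifting recovers the
    // high half.
    fn mul_wide(a: u64, b: u64) -> (u64, u64) {
        let wide = (a as u128) * (b as u128);
        ((wide >> 64) as u64, wide as u64) // (high, low)
    }

    fn main() {
        let (hi, lo) = mul_wide(u64::MAX, u64::MAX);
        println!("high = {:#x}, low = {:#x}", hi, lo);
    }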

