[rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

Daniel Micay danielmicay at gmail.com
Sun Jan 12 17:22:43 PST 2014


On Sun, Jan 12, 2014 at 8:01 PM, Robert O'Callahan <robert at ocallahan.org> wrote:
> On Sun, Jan 12, 2014 at 12:59 PM, Patrick Walton <pcwalton at mozilla.com>
> wrote:
>>
>> On 1/10/14 10:08 PM, Daniel Micay wrote:
>>>
>>> I don't think failure on overflow is very useful. It's still a bug if
>>> you overflow when you don't intend it.
>>
>>
>> Of course it's useful. It prevents attackers from weaponizing
>> out-of-bounds reads and writes in unsafe code.
>
>
> Yes. And as a browser developer, I still want trap-on-overflow by default in
> the browser if it can be cheap. Overflowing integer coordinates can lead to
> infinite loops and incorrect layout or rendering, the latter of which can
> occasionally have security implications. Task failure is better than both of
> those. Generally, the sooner we detect bugs and fail the more robust we will
> be against malicious input. Being able to harden the code against a common
> class of bugs without making the language any more complicated is very
> attractive to me.

-fsanitize=signed-integer-overflow: Signed integer overflow, including
all the checks added by -ftrapv, and checking for overflow in signed
division (INT_MIN / -1).
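To make those check classes concrete, here is a minimal sketch using Rust's `checked_*` arithmetic methods (the method names are today's std API, used here purely for illustration):

```rust
fn main() {
    // INT_MIN / -1 is the one signed division that overflows: the true
    // result, 2^31, is not representable in an i32.
    assert_eq!(i32::MIN.checked_div(-1), None);

    // Plain signed addition overflow, the case -ftrapv covers.
    assert_eq!(i32::MAX.checked_add(1), None);

    // An overlong shift (>= 32 bits on an i32) is likewise detectable.
    assert_eq!(1i32.checked_shl(32), None);

    // Division by zero.
    assert_eq!(1i32.checked_div(0), None);
}
```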

Why not measure the impact of this on Firefox performance? We'll have a
concrete answer about half of the picture (though not about the cost of
unsigned overflow checks, overlong-shift checks, or division-by-zero
checks).

> Daniel's points about cost are interesting but there's a lot of things that
> could be tried before declaring the problem intractable. Since most Rust
> side effects commute with task failure, you could do a lot of trap code
> motion and coalescing. The absence of overflow lets the compiler reason more
> effectively about arithmetic, benefiting optimizations such as array bounds
> check elimination. Range analysis becomes very important, so you want to
> work at it harder. Etc.
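The bounds-check point can be made concrete. If arithmetic is guaranteed not to overflow, one guard can dominate several accesses; a sketch (the function `pair_sum` is my own illustration, not something from the thread):

```rust
// Given that `i + 1` cannot wrap (it would panic/trap instead), the single
// comparison below proves both indexing operations in bounds, so the
// compiler is free to fold their bounds checks into that one guard.
fn pair_sum(xs: &[i32], i: usize) -> Option<i32> {
    if i + 1 < xs.len() {
        Some(xs[i] + xs[i + 1])
    } else {
        None
    }
}

fn main() {
    assert_eq!(pair_sum(&[1, 2, 3], 0), Some(3));
    assert_eq!(pair_sum(&[1, 2, 3], 2), None);
}
```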

Inter-procedural optimization in LLVM can only eliminate dead code,
propagate constants, inline/merge functions and bubble up effects.

As far as I know, doing more takes way too long. Eliminating array
bounds checks and reasoning about arithmetic just don't really happen
inter-procedurally.

The best hope for an inner loop is for loop-vectorize/slp-vectorize to
do their work, and they won't if there are overflow/carry checks.
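For illustration, the two shapes of loop in question (again using today's std method names as a stand-in): the wrapping version is straight-line code the vectorizers can handle, while the checked version threads an early-exit branch through every iteration.

```rust
// Wrapping arithmetic: no per-element branch, so loop-vectorize can fire.
fn sum_wrapping(xs: &[i32]) -> i32 {
    xs.iter().fold(0i32, |acc, &x| acc.wrapping_add(x))
}

// Checked arithmetic: every iteration may bail out with None, which
// defeats loop-vectorize/slp-vectorize.
fn sum_checked(xs: &[i32]) -> Option<i32> {
    xs.iter().try_fold(0i32, |acc, &x| acc.checked_add(x))
}

fn main() {
    let xs: Vec<i32> = (1..=100).collect();
    assert_eq!(sum_wrapping(&xs), 5050);
    assert_eq!(sum_checked(&xs), Some(5050));
    assert_eq!(sum_checked(&[i32::MAX, 1]), None);
}
```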

LLVM is designed to optimize C-style code, and deviating from that kind
of code generation without losing a lot of performance means doing your
own optimizations on your own intermediate representation, as Haskell
does (GHC optimizes its Core IR before handing anything to a backend).

