[rust-dev] Integer overflow, round -2147483648

Daniel Micay danielmicay at gmail.com
Mon Jun 23 12:50:36 PDT 2014


On 23/06/14 03:15 PM, John Regehr wrote:
>> Using checked overflow will
>> reduce the performance of most code with non-trivial usage of integer
>> arithmetic by 30-70%.
> 
> No, this view is overly pessimistic.

My numbers are based on real measurements, and they're accurate. You can
pick and choose benchmarks where the performance hit isn't as bad, but
that doesn't make my numbers overly pessimistic. The performance hit
depends a lot on the architecture, and microbenchmarks can't measure the
cost of the code bloat because the code all fits in the L1 instruction
cache regardless.
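
To make the per-operation cost concrete, here's a rough sketch in current
Rust syntax (an illustration, not code from the compiler): with checking,
every addition becomes an add plus a compare-and-branch, and that branch
on every operation is what the numbers above are measuring.

    fn sum_checked(xs: &[i32]) -> Option<i32> {
        let mut total: i32 = 0;
        for &x in xs {
            // checked_add returns None on overflow instead of wrapping, so
            // every single addition carries a compare-and-branch.
            total = total.checked_add(x)?;
        }
        Some(total)
    }

    fn main() {
        assert_eq!(sum_checked(&[1, 2, 3]), Some(6));
        assert_eq!(sum_checked(&[i32::MAX, 1]), None);
    }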

> The last time we checked, Clang with the integer sanitizer turned on had
> a little less than 30% overhead for SPEC CINT 2006, on average.  Here
> are the actual slowdowns:
> 
>   400.perlbench       42.8%
>   401.bzip2           44.4%
>   403.gcc             12.7%
>   429.mcf             11.3%
>   445.gobmk           42.0%
>   456.hmmer           36.5%
>   458.sjeng           36.7%
>   462.libquantum      36.9%
>   464.h264ref         122.0%
>   471.omnetpp         4.8%
>   473.astar           16.1%
>   483.xalancbmk       12.4%
>   433.milc            22.7%
>   444.namd            15.5%
>   447.dealII          52.5%
>   450.soplex          17.5%
>   453.povray          11.0%
>   470.lbm             13.3%
>   482.sphinx3         34.3%
> 
> This was on some sort of Core i7.

It will be significantly worse on ARM and older x86 CPUs. A modern x86
core hides much of the overhead of the extra branches, but ARM cores are
much worse at this, and there are plenty of in-order CPUs with no ability
to do it at all.

Another issue is that even on architectures with dedicated
checked-arithmetic instructions, Rust couldn't emit them by default,
because it compiles for the baseline feature level of each target.
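
For what it's worth, the checked operations are already expressible
explicitly today; the question is only what the default does. A sketch in
current Rust syntax of what "checked arithmetic provided by the
architecture" means at the source level:

    fn main() {
        // overflowing_add returns the wrapped result plus a flag, which is
        // roughly what an add instruction plus the CPU's overflow flag gives
        // you when the hardware support is actually available.
        let (sum, overflowed) = i32::MAX.overflowing_add(1);
        assert_eq!((sum, overflowed), (i32::MIN, true));
    }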

> Now consider that:
> 
> - This isn't only checking for signed overflows, it's checking for lossy
> casts, shift past bitwidth, etc. -- the average overhead goes down to
> 20% if we only check for C/C++ undefined behaviors

The discussion here is about checking for both signed and unsigned integer
overflow, i.e. passing both `-fsanitize=signed-integer-overflow` and
`-fsanitize=unsigned-integer-overflow`. Rust already defines the behaviour
of signed overflow, so it doesn't make sense to check only for that.
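
To illustrate what "defined" means here (a sketch using current Rust
method names, not anything from a proposal): both signed and unsigned
overflow have a specified two's-complement result, so a checker that only
caught the signed case would miss half the picture.

    fn main() {
        // Both signed and unsigned overflow have a defined wrap-around result.
        assert_eq!(i32::MAX.wrapping_add(1), i32::MIN);
        assert_eq!(u32::MAX.wrapping_add(1), 0);
    }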

> - LLVM does a crap job in removing overflow checks; there's a ton of
> room for improvement, and I believe this will start happening now due to
> Swift

I doubt it, since Swift has a high level IR above LLVM IR and the
implementation isn't open-source. The language-specific optimizations
like removing overflow / bounds checks based on type system rules will
almost certainly be done on the high-level SIL IR, not at the LLVM IR
layer where most of the information is already lost.
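
As a hypothetical example of the kind of check removal that belongs in a
language-aware layer: the loop bound below proves the addition can never
overflow, but that proof comes from source-level types and ranges rather
than from anything a backend reliably recovers after lowering.

    fn main() {
        let mut acc: i32 = 0;
        for i in 0..1000 {
            // The loop bound proves i + 1 can never overflow an i32. That
            // proof comes from source-level ranges, the kind of fact a
            // language-aware IR sees directly.
            acc ^= i + 1;
        }
        println!("{}", acc);
    }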

Rust 1.0 will be released in about 6 months, and these improvements
aren't going to happen in that time. It's a language for the present,
not one written for a fantasy architecture / compiler backend in 2025.

It's not going to be the last programming language, and hurting it in
the present based on biased predictions of the future is only going to
result in no one using the language.

If there were really a ton of low-hanging fruit, I would expect it to have
been significantly improved by now. A claim that there's a lot of room for
improvement isn't worth anything; a working implementation is the only
thing that matters, and it needs to retain Rust's good compile times.

> - We designed the integer sanitizer to be a debugger, not a production
> tool, it has precise exception semantics which suppresses a lot of
> integer optimizations; a more relaxed exception model like AIR/Ada would
> permit most of LLVM's integer optimizations to keep working

I don't believe that LLVM will be capable of optimizing away most of the
overhead either way. LLVM is pretty much just an inlining machine with
good x86 code generation and register allocation. It can't even eliminate
a null check when the proof needed to remove it sits in another basic
block, because its value propagation is so poor.
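
The same weakness shows up with bounds checks, which are an analogous case
and easier to show in safe Rust: the emptiness test below establishes that
the index is in range, but that fact lives in a different basic block than
the indexing, and it frequently doesn't get propagated.

    fn first_doubled(xs: &[i32]) -> i32 {
        if xs.is_empty() {
            return 0;
        }
        // The emptiness test above proves xs.len() > 0, so the bounds check
        // on xs[0] is redundant, but removing it requires carrying that fact
        // across basic blocks.
        xs[0] * 2
    }

    fn main() {
        assert_eq!(first_doubled(&[21]), 42);
        assert_eq!(first_doubled(&[]), 0);
    }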

The optimization capabilities of LLVM are greatly exaggerated. There's
essentially no interprocedural optimization or non-trivial alias
analysis, and it's next to impossible to preserve the high level type
system invariants.

We've made the mistake of assuming too much about LLVM's capabilities
before, by depending on optimizations that are actually implemented
(unlike the ones discussed here) but fall over in edge cases. Those edge
cases are surprisingly common, and Rust is often significantly slower than
C because basic abstractions like iterators depend on reasonable value
propagation.
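
A small sketch of the iterator point (an illustration, not a claim about
any particular codegen result): every step of a slice iterator goes
through an Option, and the loop only costs the same as a hand-written C
loop if the optimizer propagates values well enough to strip that wrapper
away.

    fn sum_iter(xs: &[i32]) -> i32 {
        let mut total = 0;
        let mut it = xs.iter();
        // Every step of the iterator goes through an Option; the loop only
        // matches a hand-written C loop if the optimizer can propagate enough
        // information to strip that wrapper away.
        while let Some(&x) = it.next() {
            total += x;
        }
        total
    }

    fn main() {
        assert_eq!(sum_iter(&[1, 2, 3]), 6);
    }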

Rust is not a language designed for an imaginary sufficiently smart
compiler. It targets real architectures and the real LLVM backend. The
only numbers that matter are the ones you can measure, and those numbers
aren't going to drastically change in the 6 months before the 1.0 release.
