[rust-dev] Today's Rust contribution ideas
matthieu.monrocq at gmail.com
Tue Jan 28 11:07:25 PST 2014
On Mon, Jan 27, 2014 at 11:41 PM, Sebastian Sylvan <
sebastian.sylvan at gmail.com> wrote:
> On Mon, Jan 27, 2014 at 9:33 AM, Matthieu Monrocq <
> matthieu.monrocq at gmail.com> wrote:
>> On Mon, Jan 27, 2014 at 3:39 AM, Brian Anderson <banderson at mozilla.com>wrote:
>>> Consensus is that the `do` keyword is no longer pulling its weight.
>>> Remove all uses of it, then remove support from the compiler. This is a 1.0
>>> milestone.
>>> # Experiment with faster hash maps (#11783)
>>> Rust's HashMap uses a cryptographically secure hash, and at least partly
>>> as a result of that it is quite slow. HashMap continues to show up very,
>>> very high in performance profiles of a variety of code. It's not clear what
>>> the solution to this is, but it is clear that - at least sometimes - we
>>> need a much faster hash map solution. Figure out how to create faster hash
>>> maps in Rust, potentially sacrificing some amount of DOS-resistance by
>>> using weaker hash functions. This is fairly open-ended and researchy, but a
>>> solution to this could have a big impact on the performance of rustc and
>>> other projects.
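As a rough illustration of the "weaker hash function" direction suggested above: in today's Rust, `HashMap` accepts a pluggable hasher, so one can swap the DoS-resistant default for a fast non-cryptographic hash such as FNV-1a. This is only a sketch (the `FnvMap` alias and `Fnv1a` type are my own names, and the modern `BuildHasherDefault` API postdates this thread):

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

// FNV-1a: fast and simple, but NOT DoS-resistant -- an attacker who can
// choose keys can force collisions. That is the trade-off being discussed.
struct Fnv1a(u64);

impl Default for Fnv1a {
    fn default() -> Self {
        Fnv1a(0xcbf2_9ce4_8422_2325) // FNV-1a 64-bit offset basis
    }
}

impl Hasher for Fnv1a {
    fn finish(&self) -> u64 {
        self.0
    }
    fn write(&mut self, bytes: &[u8]) {
        for &b in bytes {
            self.0 ^= b as u64;
            self.0 = self.0.wrapping_mul(0x0000_0100_0000_01b3); // FNV prime
        }
    }
}

// A HashMap that uses FNV-1a instead of the default secure hasher.
type FnvMap<K, V> = HashMap<K, V, BuildHasherDefault<Fnv1a>>;

fn main() {
    let mut m: FnvMap<String, u32> = FnvMap::default();
    m.insert("hello".to_string(), 1);
    assert_eq!(m.get("hello"), Some(&1));
}
```

The point is that the table structure and the hash function are separable concerns: keeping the secure hash as the default while allowing an opt-in fast hasher addresses both the DoS and the performance camps.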
>> You might be interested in a series of articles by Joaquín M López Muñoz,
>> who maintains the Boost.MultiIndex library. He did a relatively
>> comprehensive overview of the hash-map implementations of Dinkumware
>> (MSVC), libstdc++ and libc++ on top of Boost.MultiIndex, and a lot of
>> benchmarks showing the performance of insertion/removal/search in a
>> variety of setups.
>> One of the last articles:
> Let me also plug this blog post from a while back:
> http://sebastiansylvan.com/2013/05/08/robin-hood-hashing-should-be-your-default-hash-table-implementation/. There's also a followup on improving deletions*, which makes the final
> form the fastest hash map I know of. It's also compact (95% load factor, 32
> bits overhead per element, but you can reduce that to 2 bits per element if
> you sacrifice some perf.), and doesn't allocate (other than doubling the
> size of the table when you hit the load factor).
> For a benchmark with lots of std::strings it was 23%, 66% and 25% faster
> for insertions, deletions and lookups (compared to MSVC unordered_map); it
> also uses 30% less memory in that case.
> * the basic form has an issue where repeated deletes gradually increase
> the probe count. In pathological cases this can reduce performance by a
> lot. The fix is to incrementally fix up the table on each delete (you could
> also do it in batch every now and then). It's still faster in all cases,
> and the probe-length as well as probe-length-variance remains low even in
> the most pathological circumstances.
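To make the scheme concrete, here is a minimal Robin Hood insertion/lookup sketch. This is illustrative only, not code from the blog post: the multiplicative hash constant and the u64-key layout are my own simplifications, and resizing and the deletion fix-up are omitted.

```rust
// Minimal Robin Hood open-addressing table over u64 keys.
// Invariant: along a probe sequence, stored keys' probe distances never
// decrease past the probing key's distance -- "rich" entries give way to
// "poor" ones, which keeps probe-length variance low.
struct RobinHood {
    slots: Vec<Option<u64>>,
    mask: usize, // capacity - 1 (capacity is a power of two)
}

impl RobinHood {
    fn new(capacity_pow2: usize) -> Self {
        debug_assert!(capacity_pow2.is_power_of_two());
        RobinHood { slots: vec![None; capacity_pow2], mask: capacity_pow2 - 1 }
    }

    // Cheap multiplicative hash for the sketch (not DoS-resistant).
    fn hash(&self, k: u64) -> usize {
        (k.wrapping_mul(0x9E37_79B9_7F4A_7C15) >> 32) as usize & self.mask
    }

    // How far `k` sits from its ideal slot, with wrap-around.
    fn distance(&self, slot: usize, k: u64) -> usize {
        slot.wrapping_sub(self.hash(k)) & self.mask
    }

    fn insert(&mut self, mut key: u64) {
        let mut pos = self.hash(key);
        let mut dist = 0;
        loop {
            let cur = self.slots[pos];
            match cur {
                None => {
                    self.slots[pos] = Some(key);
                    return;
                }
                Some(existing) if existing == key => return,
                Some(existing) => {
                    // Robin Hood rule: if the resident is closer to its
                    // ideal slot than we are, evict it and keep probing
                    // with the evicted key instead.
                    let existing_dist = self.distance(pos, existing);
                    if existing_dist < dist {
                        self.slots[pos] = Some(key);
                        key = existing;
                        dist = existing_dist;
                    }
                }
            }
            pos = (pos + 1) & self.mask;
            dist += 1;
        }
    }

    fn contains(&self, key: u64) -> bool {
        let mut pos = self.hash(key);
        let mut dist = 0;
        loop {
            match self.slots[pos] {
                None => return false,
                Some(existing) if existing == key => return true,
                Some(existing) => {
                    // The invariant lets lookups terminate early: once we
                    // have probed farther than the resident, the key can't
                    // be anywhere later in the sequence.
                    if self.distance(pos, existing) < dist {
                        return false;
                    }
                }
            }
            pos = (pos + 1) & self.mask;
            dist += 1;
        }
    }
}

fn main() {
    let mut t = RobinHood::new(64);
    for k in 0..32u64 {
        t.insert(k * 7);
    }
    assert!(t.contains(21));
    assert!(!t.contains(22));
}
```

The eviction-on-insert step is what makes high load factors workable, and the early-exit test in `contains` is why unsuccessful lookups stay cheap; the deletion fix-up the post describes restores the same invariant after removals.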
Thanks for the link. I should have mentioned that the C++ Standard version
is constrained by a memory-stability requirement which may or may not apply
to Rust (thanks to borrow checking, it should be possible to know statically
whether an element is borrowed or not). This memory-stability requirement,
as well as some other requirements such as the relative stability of items
within the same equivalence class during insert/erase, severely constrains
the design; and indeed, if those requirements can be lifted, the designs
proposed on bannalia will be suboptimal.
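To illustrate the point about static borrow knowledge: in today's Rust syntax (assumed here for illustration), the borrow checker already rejects holding a reference into a map across an insertion, so a table that rehashes and moves elements needs no runtime memory-stability guarantee of the kind C++ imposes.

```rust
use std::collections::HashMap;

fn main() {
    let mut map: HashMap<&str, i32> = HashMap::new();
    map.insert("a", 1);

    // Copy the value out rather than holding a reference across a mutation.
    let v = map.get("a").copied();

    // Had we kept `let r = map.get("a");` alive past this insert, the
    // borrow checker would reject the program at compile time: the insert
    // may rehash and move elements, invalidating `r`.
    map.insert("b", 2);

    assert_eq!(v, Some(1));
    assert_eq!(map.len(), 2);
}
```

In C++ terms, this is the difference between guaranteeing reference stability at runtime (which forces node-based designs) and proving non-aliasing at compile time (which frees the implementation to use compact open addressing).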