[rust-dev] Why focus on single-consumer message passing?

Vladimir Lushnikov vladimir at slate-project.org
Sun Jan 26 07:00:46 PST 2014


Here are a couple of observations/comments from a Rust lurker:

* +1 for message-passing as a core paradigm for inter-thread communication.
It is significantly easier to reason about than shared memory. It is not a
silver bullet for all cases, of course (but that is why you have unsafe
code):
   ** How do you make asynchronous replies behave in an obvious manner
(without blocking)? Akka's approach with Futures offers one possible
solution (see the Composing Futures section of
http://doc.akka.io/docs/akka/snapshot/scala/futures.html), but I haven't
had a go at seeing whether something like this would work in Rust.
   ** How do you group these asynchronous request/reply exchanges into
synchronous blocks? I.e., how do you turn groups of asynchronous operations
involving many tasks into a single "transaction" (or monad)? Akka, for
example, does not support this very well (Transactors offer no way of
replying to data). And in any case, I don't think we can implement STM in
the Rust core library.
* Has anyone taken a look at the primitives for inter-task communication
that ZMQ (http://zeromq.org/) offers? Their different alternatives for N-N
communication are particularly powerful in practice, and their API makes a
lot of sense.
* +1 to the Sender/Receiver nomenclature (vs. Sink/Source)
* +1 to context objects, completely agree with Patrick's reasoning

Vladimir


On Sun, Jan 26, 2014 at 2:46 PM, Patrick Walton <pcwalton at mozilla.com> wrote:

> On 1/26/14 5:13 AM, Daniel Micay wrote:
>
>> I think shared memory is widely applicable and easy to use too. It's
>> often much harder to use message passing.
>>
>
> Message passing has been proven time and time again to be among the
> easiest forms of concurrency to understand and use. You don't have to look
> any further than the list of the most popular concurrent languages.
>
>
>  Rust is making a lot of semantic sacrifices for the sake of
>> performance. I see most of it as being under the assumption that it
>> will eventually perform well rather than being based on hard numbers.
>>
>
> This is either:
>
> 1. An implication that we are not benchmarking, which is plainly false.
>
> 2. A denial that sometimes getting good performance requires a
> sophisticated implementation that takes time to get right. This is also
> false. Look at JavaScript engines, for example. You don't get good
> performance without at least two JITs (one non-SSA, one SSA) and possibly
> an interpreter. (And this is not because of JavaScript's complexity;
> LuaJIT's tracing is awfully complex too.) In these situations, the only way
> to know whether you're heading down the right path is to actually spend the
> time to optimize the implementation. Until that is done, stop energy (such
> as your constant stop energy against M:N threading) is entirely unhelpful.
>
>
>  It's significantly (20-30%+)
>> slower in the case where the condition variable is being hit a lot due
>> to imbalance in consumption/production.
>>
>
> Then why not expose a work-stealing queue instead?
>
> Patrick
>
>
> _______________________________________________
> Rust-dev mailing list
> Rust-dev at mozilla.org
> https://mail.mozilla.org/listinfo/rust-dev
>