[rust-dev] sandboxing Rust?

Daniel Micay danielmicay at gmail.com
Sat Jan 18 21:03:16 PST 2014


On Sat, Jan 18, 2014 at 10:30 PM, Scott Lawrence <bytbox at gmail.com> wrote:
> On Sat, 18 Jan 2014, Corey Richardson wrote:
>
>> Rust's safety model is not intended to prevent untrusted code from
>> doing evil things.
>
>
> Doesn't it successfully do that, though? Or at least with only a small amount
> of extra logic? For example, suppose I accept, compile, and run arbitrary
> rust code, with only the requirement that there be no "unsafe" blocks
> (ignore for a moment the fact that libstd uses unsafe). Barring compiler
> bugs, I think it's then guaranteed nothing bad can happen.

Even a small subset of Rust hasn't been proven to be secure. There are
plenty of soundness holes left in the still-unwritten specification. The
language will eventually provide a reasonable level of certainty that you
aren't going to hit one of these issues just by writing code, but it's not
even there yet.

> It seems to me that (as usual with languages like Rust) it's simply a mildly
> arduous task of maintaining a parallel libstd implementation to be used for
> sandboxing, which either lacks implementations for dangerous functionality,
> or has them replaced with special versions that perform correct permissions
> checking. That, coupled with forbidding unsafe blocks in submitted code,
> should solve the problem.

You'll need to start with an implementation of `rustc` and `LLVM` free
of known exploitable issues. Once the known issues are all fixed, then
you can start worrying about *really* securing them against an
attacker who only needs to find a bug on one line of code in one
poorly maintained LLVM pass. Even compiling untrusted code with LLVM
without running it is a very scary prospect.

> I could be completely wrong. (Is there some black magic I don't know?)

Yes, you're completely wrong. This kind of thinking is dangerous, and it's
how we ended up in the mess where everyone is running untrusted code in
ridiculously complex and totally insecure web browsers without building a
very simple trusted sandbox around it. Many exploits are discovered every
year, and countless more are kept private by entities like nation states
and organized crime.

The language isn't yet secure and the implementation is unlikely to
ever be very secure. LLVM is certainly full of many known exploitable
bugs and many more unknown ones. There are many known issues in
`rustc` and the language too.

I don't see much of a point in avoiding a separate process anyway. On
Linux, it has close to no overhead compared to a thread. Giving up shared
memory is an obvious first step, and the process can then be restricted to
making only the `read`, `write` and `exit` system calls.
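
For example, here's a minimal (untested) sketch of what that last step
looks like on Linux, assuming the `libc` crate's `prctl` binding and the
seccomp constants:

    // Untested sketch: opting into seccomp "strict mode" from Rust.
    // Assumes a Linux target and the `libc` crate.
    use libc::{c_void, prctl, PR_SET_SECCOMP, SECCOMP_MODE_STRICT};

    fn main() {
        unsafe {
            // After this succeeds, the kernel delivers SIGKILL for any
            // system call other than read, write, _exit and sigreturn on
            // already-open file descriptors.
            if prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT as libc::c_ulong) != 0 {
                libc::_exit(1);
            }

            // Even the allocator's mmap/brk would now be fatal, so stay on
            // the stack and use the raw syscalls directly.
            let msg = b"running with read/write/exit only\n";
            libc::write(1, msg.as_ptr() as *const c_void, msg.len());
            libc::_exit(0);
        }
    }

A real policy would use seccomp-bpf filters rather than strict mode, but
the isolation itself costs next to nothing.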

The `chromium` sandbox isn't incredibly secure, but it's at least not
insane enough to render from the same process where it compiles
JavaScript. Intel's open-source Linux driver is reaching the point where
an untrusted process can be allowed to use it, but it's not there yet, and
every other video driver on any of the major operating systems is a joke.

You're not going to get very far if you're not willing to start from
process isolation, and then build real security on top of it. Anyway,
the world doesn't need another Java applet.
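
To be concrete about what "starting from process isolation" means here, a
hypothetical sketch (not any existing project's code): the parent forks,
the child drops into the sandbox before touching untrusted input, and the
parent only ever sees an exit status.

    // Hypothetical sketch of the parent side, again using the `libc`
    // crate: run untrusted work in a forked child with no shared memory
    // and just wait for it to finish or get killed by the kernel.
    // `enter_sandbox` stands in for the prctl call sketched above.
    fn run_isolated(enter_sandbox: fn(), untrusted: fn()) -> libc::c_int {
        unsafe {
            let pid = libc::fork();
            if pid == 0 {
                enter_sandbox();   // e.g. seccomp strict mode
                untrusted();
                libc::_exit(0);    // never return into the parent's code
            }
            let mut status: libc::c_int = 0;
            // A policy violation in the child shows up here as an abnormal
            // exit status instead of compromising the parent.
            libc::waitpid(pid, &mut status, 0);
            status
        }
    }

Strict mode is obviously too tight for real programs (any allocation would
be fatal), which is where the real security work on top of the isolation
comes in.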

