Sharing a JavaScript implementation across realms

Filip Pizlo fpizlo at
Wed Jan 14 13:28:19 PST 2015

I can provide some insight on this from JSC’s innards.

A realm ain’t free, but it’s lightweight enough that we don’t really sweat creating them.  It’s an object with some prototypes hanging off of it, most of which can be reified lazily if you’re paranoid about their overhead.  Realms do incur some theoretical long-term overhead from the JIT: even if two realms execute identical code, the JIT will optimize that code separately in each realm.  That’s not a fundamental impasse; it’s just how we’ve adapted to the reality of the code we see.  Right now, one realm might get hot enough to JIT, or multiple realms may execute the same code, but many realms all getting hot on the same code isn’t a thing yet.  If it became a thing, then we’d have some cold-hearted engineering to do.

On the other hand, a worker might as well be a new process.  We end up firing up a new VM instance along with its own heap.  That heap and all of that VM’s resources grow and shrink without any interaction with the other VMs you’ve also spawned.

Having multiple things in one VM, as in the realm case, gives what systems guys think of as “elasticity”: if one thing suddenly doesn’t need a resource anymore and another thing simultaneously decides that it needs that same resource, a hand-off can easily occur.  But if your heaps are separate - and have separate GCs - then the hand-off of resources isn’t so elastic.  If one heap suddenly shrinks, the VM will probably think it’s best to hold on to the underlying memory in case the heap grows again soon; it has no way of knowing whether some other VM in the same process is actively growing its heap and needs pages.  You could achieve some elasticity by having multiple heaps that talk to each other a lot, but if you really cared, you’d probably just have one single heap under the hood.  That’s not how it works in JSC right now, so workers cost you memory overhead, and most of that overhead comes from lost elasticity.

This isn’t really how it needs to be, long term.  But because threads aren’t a thing yet in JavaScript, the VM would rather believe that none of its resources will be touched by multiple threads at the same time (other than VM-internal threads like GC and JIT), since that gives some small benefits, mostly for maintainability of the VM itself.  This implies that each worker currently needs a separate VM.

You could imagine building a multi-threaded VM and making workers just an illusion of isolation.  Then workers would be much cheaper: the cost of a realm (lightweight, as I say above) plus the cost of a thread (super cheap on modern-ish OSes).  But if you’re willing to go to that trouble, you might as well also revisit fundamental limitations of workers, such as the share-almost-nothing model.


> On Jan 14, 2015, at 10:40 AM, Brendan Eich <brendan at> wrote:
> SpiderMonkey and Firefox OS people I asked about this just now say the problems are not realm-specific, rather worker-specific and implementation-specific. Best to catch up with them first and get real numbers, attributed to realm and worker separately.
> /be
> Anne van Kesteren wrote:
>> On Wed, Jan 14, 2015 at 1:28 AM, Brendan Eich <brendan at> wrote:
>>> Before we go tl;dr on this topic, how about some data to back up the
>>> asserted problem size? Filip gently raised the question. How much memory
>>> does a realm cost in top open source engines? Fair question, empirical and
>>> (I think) not hard to answer. Burdened malloc/GC heap full cost, not net
>>> estimate from source analysis, would be best. Cc'ing Nick, who may already
>>> know. Thanks,
>> Well, I heard that for e.g. B2G we moved from JavaScript workers to
>> C++ threads due to memory constraints. It might well be that this is a
>> solvable problem in JavaScript engines with sufficient research, it's
>> just at the moment (and in the past) it's been blocking us from doing
>> certain things.
> _______________________________________________
> es-discuss mailing list
> es-discuss at
