IDE support?

Brendan Eich brendan at mozilla.com
Tue Sep 13 07:48:00 PDT 2011


On Sep 13, 2011, at 5:33 AM, Andreas Rossberg wrote:

> On 13 September 2011 09:33, Brendan Eich <brendan at mozilla.com> wrote:
>> 
>> You are simply way out of date on JS optimizing VMs, which (based on work done with Self and Smalltalk) all now use "hidden classes" aka shapes and polymorphic inline caching to optimize to exactly the pseudo-assembly you show, prefixed by a short (cheap if mispredicted) branch.
>> 
>> What's more, SpiderMonkey bleeding edge does semi-static type inference, which can eliminate the guard branch.
>> 
>> Please don't keep repeating out of date information about having to "seek through a dictionary". It simply isn't true.
> 
> True. On the other hand, all the cleverness in today's JS VMs neither
> comes for free, nor can it ever reach the full performance of a typed
> language.

The AS3 => ES4 argument for optional types. Been there, done that.
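To make the shapes-and-inline-caching point above concrete, here is a toy model in plain JS. All the names (shapeFor, makePoint, makeGetSite) are invented for illustration; real engines do this inside the JIT, not in library code. The idea: objects built with the same property order share one shape, and each access site caches a (shape, slot) pair so repeated hits skip the dictionary entirely.

```javascript
// Toy model of hidden classes ("shapes") plus a monomorphic inline cache.
// Illustrative only -- not SpiderMonkey or V8 internals.

const shapeTable = new Map(); // "x,y" -> { id, slotOf }
let nextShapeId = 0;

function shapeFor(propNames) {
  const key = propNames.join(',');
  let shape = shapeTable.get(key);
  if (!shape) {
    const slotOf = new Map(propNames.map((p, i) => [p, i]));
    shape = { id: nextShapeId++, slotOf };
    shapeTable.set(key, shape);
  }
  return shape;
}

function makePoint(x, y) {
  // Objects created with the same property order share one shape.
  return { shape: shapeFor(['x', 'y']), slots: [x, y] };
}

function makeGetSite(prop) {
  // The inline cache: one remembered (shape, slot) pair per access site.
  let cachedShapeId = -1, cachedSlot = -1;
  return function get(obj) {
    if (obj.shape.id === cachedShapeId) {
      return obj.slots[cachedSlot];       // fast path: one guard, one load
    }
    cachedShapeId = obj.shape.id;         // slow path: look up, then cache
    cachedSlot = obj.shape.slotOf.get(prop);
    return obj.slots[cachedSlot];
  };
}

const getX = makeGetSite('x');
const a = makePoint(1, 2), b = makePoint(3, 4);
console.log(getX(a), getX(b)); // prints 1 3; second call hits the cache
```

The short branch on `obj.shape.id` is the "cheap if mispredicted" guard; everything after it is the fixed-offset load the pseudo-assembly argument was about.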


> * There are extra costs in space and time to doing the runtime analysis.
> * Compile time is runtime, so there are severe limits to how smart you
> can afford to get in a compiler.

These are bearable apples to trade against the moldy oranges you'd make the world eat by introducing type annotations to JS. Millions of programmers would start annotating "for performance", i.e., gratuitously, making a brittle world at high aggregate cost.

The costs borne in browsers by implementors, and (this can hit users, but it's marginal) at runtime when evaluating code, are less, I claim.


> * A big problem is predictability, it is a black art to get the best
> performance out of contemporary JS VMs.

This is the big one in my book. Optimization faults happen. But can we iterate till flat?


> * The massive complexity that comes with implementing all this affects
> stability.

This one I'm less sympathetic to, since we won't get rid of untyped JS up front. A sunk cost fallacy? If we could make a "clean break" (ahem), sure. Otherwise this cost must be paid.


> * Wrt limits, even in the ideal case, you can only approximate the
> performance of typed code -- e.g. for property access you have at
> least two memory accesses (type and slot) plus a comparison and
> branch, where a typed language would only do 1 memory access.

That's *not* the ideal case. Brian Hackett's type inference work in SpiderMonkey can eliminate the overhead here. Check it out.
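A sketch of the two access sequences being compared, in plain JS rather than machine code (the helper names and the object layout are made up for this example). The guarded path does the shape load, compare, branch, and slot load Rossberg describes; once inference has proven the receiver's type, only the slot load remains.

```javascript
// Illustrative comparison: guarded property access vs. access after
// the guard has been eliminated by type inference. Not engine code.

function slowLookup(obj, prop, cache) {
  // Miss: consult the shape's property table, then refill the cache.
  cache.shape = obj.shape;
  cache.slot = obj.shape.slotOf[prop];
  return obj.slots[cache.slot];
}

function guardedGet(obj, prop, cache) {
  // Baseline: shape load + compare + branch, then the slot load.
  if (obj.shape !== cache.shape) return slowLookup(obj, prop, cache);
  return obj.slots[cache.slot];
}

function inferredGet(obj, slot) {
  // Inference has proven obj's shape at this site, so the guard is
  // gone: a single memory access, as in a typed language.
  return obj.slots[slot];
}

const pointShape = { slotOf: { x: 0, y: 1 } };
const p = { shape: pointShape, slots: [7, 8] };
const cache = { shape: null, slot: -1 };
console.log(guardedGet(p, 'x', cache), inferredGet(p, 0)); // prints 7 7
```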


> * Type inference might mitigate some more of these cases, but will be
> limited to fairly local knowledge.

s/might/does/ -- why did you put type inference in a subjunctive mood? Type inference in SpiderMonkey (Firefox nightlies) is not "local".


> * Omnipresent mutability is another big performance problem in itself,
> because most knowledge is never stable.

Type annotations or (let's say) "guards" as for-all-time monotonic bounds on mutation are useful to programmers too, for more robust programming-in-the-large. That's a separate (and better IMHO) argument than performance. It's why they are on the Harmony agenda.
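The guards strawman had no settled syntax, but a rough runtime analogue can be written in today's JS: a property whose writes are checked against a predicate forever after definition. The function name and shape of the API below are invented for illustration, not a proposal.

```javascript
// Rough runtime analogue of a "guard": a monotonic, for-all-time bound
// on what may be written to a property. Names here are hypothetical.

function defineGuarded(obj, prop, typeCheck, value) {
  if (!typeCheck(value)) throw new TypeError(prop + ': bad initial value');
  Object.defineProperty(obj, prop, {
    get() { return value; },
    set(v) {
      if (!typeCheck(v)) throw new TypeError(prop + ' violates its guard');
      value = v;
    },
    configurable: false // the bound can never be removed
  });
  return obj;
}

const point = defineGuarded({}, 'x', v => typeof v === 'number', 0);
point.x = 42;        // ok: still a number
// point.x = 'oops'; // would throw TypeError
```

The robustness argument is that such a bound documents intent and fails loudly at the mutation site, independent of whether an engine also exploits it for speed.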


> So despite all the cool technology we use these days, it is safe to
> assume that we will never play in the performance league of typed
> languages. Unless we introduce real types into JS, of course. :)

Does JS need to be as fast as Java? Would half as fast be enough?

/be


More information about the es-discuss mailing list