Performance concern with let/const

Domenic Denicola domenic at domenicdenicola.com
Mon Sep 17 16:12:21 PDT 2012


>>> 2. The stated goal of 'let' is to replace 'var' in common usage (and if this is not the goal, we should not be adding 'let'). 
> 
>> There is actually some disagreement about that statement of the goal.  The goal of let is to provide variables that are scoped to the block level.  That is the significant new semantics that is being added.  The slogan-ism isn't the goal.
> 
> This strikes at a critical piece of the discussion around 'let'.  Adding a new fundamental block scoped binding form ('let') has a very significant conceptual cost to the language.  If it is not the expectation of the committee that new code will nearly universally adopt 'let' instead of 'var', and that books will be able to state 'use let instead of var', then I think that brings into question whether 'let' is still passing the cost/value tradeoff.  This tradeoff gets increasingly weaker as additional performance overheads are entered into the cost bucket.

To provide an (admittedly single) developer perspective: let/const are attractive because they bring us closer to eliminating the confusion inherent in hoisting and achieving the same semantics as C-family languages. Although it seems that disallowing use before declaration is not possible, hoisting to block level plus TDZ checks for the intervening code gives a reasonable approximation, at least assuming I've understood the proposals and email threads correctly.
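To make concrete what I mean, here is a small sketch (my own example) of the approximation as I understand it: a `let` binding is hoisted to the top of its block, but reading it before the declaration line throws a ReferenceError rather than yielding `undefined` the way `var` does.

```javascript
"use strict";

// TDZ sketch: `x` is hoisted to the top of the function body, but
// any read before the `let x` line is in the temporal dead zone.
function readBeforeLet() {
  try {
    return x; // TDZ: hoisted but uninitialized, so this throws
  } catch (e) {
    return e.constructor.name; // "ReferenceError"
  }
  let x = 1; // unreachable, but its mere presence creates the TDZ
}

// Contrast with var hoisting: the early read silently sees undefined.
function readBeforeVar() {
  const value = y; // no error; `y` is hoisted and set to undefined
  var y = 1;
  return value;
}

console.log(readBeforeLet()); // "ReferenceError"
console.log(readBeforeVar()); // undefined
```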

There are also a number of auxiliary benefits like the fresh per-loop binding and of course const optimizations/safeguards (which eliminate the need for a dummy object with non-writable properties to store one's constants).
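For illustration (again my own sketch, assuming the per-iteration-binding semantics discussed in the threads): the loop body below closes over a fresh `i` each iteration, where a `var` counter would be shared and every callback would see the final value; and `const` replaces the frozen-dummy-object idiom directly.

```javascript
"use strict";

// 1. Fresh per-iteration binding: each closure captures its own `i`.
//    With `var i`, all three callbacks would return 3 instead.
const callbacks = [];
for (let i = 0; i < 3; i++) {
  callbacks.push(function () { return i; });
}
console.log(callbacks.map(function (f) { return f(); })); // [0, 1, 2]

// 2. `const` as an engine-checked constant, versus the ES5 workaround
//    of stashing constants on a frozen object:
const LIMIT = 100;                             // the let/const way
var constants = Object.freeze({ LIMIT: 100 }); // the dummy-object way
console.log(LIMIT === constants.LIMIT); // true
```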

Personally, in the grand scheme of things, even a 5% loss of speed is unimportant to our code when weighed against the value of the saner semantics proposed. We would immediately replace all vars with let/const if we were able to program against this Chakra prototype (e.g. for our Windows 8 app).

I am almost hesitant to bring up such an obvious argument, but worrying about this level of optimization seems foolhardy in the face of expensive DOM manipulation or async operations. Nobody worries that their raw JS code will run 5% slower because people are using Chrome N-1 instead of Chrome N. Such small performance fluctuations are a fact of life even with ES5 coding patterns (e.g. arguments access, getters/setters, try/catch, creating a closure without manually hoisting it to the outermost applicable level, using array extras instead of for loops, …). If developers actually need to optimize at the 5% level solely in their JS, they should probably consider LLJS or similar.

That said, I do understand that a slowdown could make the marketing story harder, as not everyone subscribes to my views on the speed/clarity tradeoff.
