Performance concern with let/const

Luke Hoban lukeh at microsoft.com
Mon Sep 17 09:37:19 PDT 2012


From: Allen Wirfs-Brock [mailto:allen at wirfs-brock.com] 
>> On Sep 16, 2012, at 9:35 PM, Luke Hoban wrote:
>> 
>> As an experiment, I took the early-boyer test from V8 and changed 'var' to 'let'.  In Chrome preview builds with 'let' support, I saw a consistent ~27% slowdown.  That is, the 'let is the new var' mantra leads to 27% slower code in this example for the same functionality.  

> Without evaluating the quality of the Chrome implementation, this isn't a meaningful observation.  As the Chrome implementers have stated, they have not done any optimization, so this actually becomes a misleading statement.  You really should just strike this assertion from your argument and start with your actual experiments as the evidence to support your position.

Yes - this was definitely not a significant aspect of the argument - it was just the initial datapoint which motivated us to do a deeper performance investigation. 


>> However, we are aware that there is a class of dynamic checks that can be removed by static analysis - in particular intra-procedural use before assignment checks.  We implemented these checks in a Chakra prototype, and even with these, we still see an ~5% slowdown.  
>> 
>> Our belief is that any further removal of these dynamic checks (inter-procedural checks of accesses to closure captured let references) is a much more difficult proposition, if even possible in any reasonable percentage of cases.  

> To understand the general applicability of these results we need to know what specific static optimizations you performed and evaluate that against the list of plausible optimizations.  I'd also like to understand why the specific coding patterns within the test program could not be statically optimized.

These are good questions.  Paul will be attending the TC39 meeting this week, and can likely speak to the specific details.  At a high level, though, we statically eliminate the TDZ checks for references to 'let' within the same closure body as the declaration. 
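Roughly, the distinction is the following (an illustrative sketch, not Chakra's actual analysis - the function names here are invented):

```javascript
// In f, every read of x is in the same closure body as the declaration
// and textually after initialization, so an engine can prove the TDZ
// check is unnecessary and eliminate it statically.
function f() {
  let x = 1;
  return x + x;          // provably after init: no runtime TDZ check needed
}

// In g, the read of x escapes into an inner closure.  Whether that closure
// runs before or after initialization is not statically known in general,
// so the read inside it generally keeps a dynamic TDZ check.
function g() {
  const read = () => x;  // inner closure captures x before initialization
  let x = 2;
  return read();         // here it happens to run after init
}

console.log(f(), g()); // 2 2
```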


>> Unless we can be sure that the above perf hit can indeed be easily overcome, I'd like to re-recommend that temporal dead zones for let and const be removed from the ES6 specification.  Both would remain block scoped bindings, but would be dynamically observable in an 'undefined' state - including that 'const' would be observable as 'undefined' before single assignment.  

> We really don't have enough evidence to come to that conclusion.

I'm not as sure.  I'm not convinced we have evidence that TDZ is actually demanded by developers.  I'm more convinced that we have evidence that TDZ makes let strictly slower than var.  The only question seems to be how much, and whether this is significant enough to counterbalance the perceived developer demand for TDZ.
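To make the semantic difference under discussion concrete, here is a minimal sketch (runnable in an engine with ES6 'let' semantics) of what TDZ changes relative to 'var':

```javascript
// With var, a use-before-declaration silently yields undefined:
function withVar() {
  var before = v;  // v is hoisted and already initialized to undefined
  var v = 1;
  return before;
}

// With let, the same pattern hits the temporal dead zone and throws:
function withLet() {
  try {
    return l;      // l's binding exists but is uninitialized: TDZ
  } catch (e) {
    return e instanceof ReferenceError;
  }
  let l = 1;       // never reached; its presence alone creates the TDZ
}

console.log(withVar()); // undefined
console.log(withLet()); // true
```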


>> In particular - the case against temporal dead zones is as follows:
>> 
>> 1. The value of temporal dead zones is to catch a class of programmer errors.  This value is not overly significant (it's far from the most common error that lint-like tools catch today, or that affects large code bases in practice), and I do not believe the need/demand for runtime-enforced protection against this class of errors has been proven.  This feature of let/const is not the primary motivation for either feature (block scoped binding, inlinability and errors on re-assignment to const are the motivating features).

> As far as I'm concerned the motivating feature for TDZs is to provide a rational semantics for const.  There was significant technical discussion of that topic and TDZs emerged as the best solution. An alternative argument you could make would be to eliminate const.  Is there a reason you aren't making that argument?

I'm not as convinced that a const which is undefined until singly assigned is "irrational".  When combined with a 'let' which can be observed as 'undefined', I believe developers would understand this semantic.  The optimization opportunity for 'const' remains the same - it can be inlined whenever the same static analysis needed to avoid TDZ checks would apply.  

I am not arguing for eliminating const because 'const' at least has a potential performance upside and thus can motivate developer usage.  I am honestly more inclined to argue for eliminating 'let' if it ends up having an appreciable performance cost over 'var', as its upside value proposition is not strong.


>> 2. The stated goal of 'let' is to replace 'var' in common usage (and if this is not the goal, we should not be adding 'let'). 

> There is actually some disagreement about that statement of the goal.  The goal of let is to provide variables that are scoped to the block level.  That is the significant new semantics that is being added.  The slogan-ism isn't the goal.

This strikes at a critical piece of the discussion around 'let'.  Adding a new fundamental block scoped binding form ('let') has a very significant conceptual cost to the language.  If it is not the expectation of the committee that new code will nearly universally adopt 'let' instead of 'var', and that books will be able to state 'use let instead of var', then I think that calls into question whether 'let' still passes the cost/value tradeoff.  This tradeoff grows weaker as additional performance overheads are added to the cost bucket.


>> 3. Unless the above performance hit can be overcome, and given #2 above, *let will slow down the web by ~5%*.

> As covered above, this is a bogus assertion without data to support it.

I have to push back on this a bit.  We of course don't have shipping, fully-optimizing implementations of let/const yet.  But we are doing early prototyping, and contributing input based on the performance investigations we can do so far.  The best data we have so far, even after a pass of significant optimizations targeted at eliminating TDZ overhead, shows a significant remaining cost.  That said, it is reasonable to expect we can find further optimization opportunities, so you are right that it's too early to stick a precise number on this.


>> 4. Even if the above performance hit can be (mostly) overcome with net new engine performance work, that is performance work being forced on engine vendors simply to not make the web slower, and comes at the opportunity cost of actually working on making the web *faster*.  

> Again, isn't this really a question about the value of const?

I would have seen it as the opposite - 'const' actually enables some new optimizations relative to existing web content.  


>> 5. We are fairly confident that it is not possible to fully remove the runtime overhead cost associated with temporal dead zones.  That means that, as a rule, 'let' will be slower than 'var'.   And possibly significantly slower in certain coding patterns. Even if that's only 1% slower, I don't think we're going to convince the world to use 'let' if its primary impact on their code is to make it slower.  (The net value proposition for let simply isn't strong enough to justify this.)

> I think you can fairly easily prove that there are use cases where the TDZ cannot be statically eliminated.  But that does not mean that on average and for a typical program "let will be slower than var".

I don't understand this argument.  I believe it exactly means that 'let will be slower than var' in the aggregate for a typical program.  'let' is certainly not going to be faster than 'var' in any case, so if there are any cases at all where TDZ checks cannot be removed, 'let' is in aggregate slower than 'var'.  I don't think we can (or should!) expect developers to have to think about what patterns cause TDZ checks to be unavoidable, and use var in those cases instead.  So I'm concerned about the ultimate message to developers being 'let makes your code slower'.  I agree that the important question is about the magnitude of this aggregate overhead cost, which is what we have been trying to gather concrete data on.
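For instance (a minimal sketch in an engine with TDZ semantics), a single closure-captured binding can be observed in both states, so the dynamic check cannot be dropped without proving which reads run before initialization:

```javascript
// One binding, observed both before and after initialization.  The first
// call to read() fires the dynamic TDZ check (ReferenceError); the second
// succeeds.  An engine cannot eliminate the check on the read inside the
// closure without proving statically which case applies at each call site.
function aggregate() {
  const read = () => x;   // closes over x before it is initialized
  let sawTDZ = false;
  try {
    read();
  } catch (e) {
    sawTDZ = e instanceof ReferenceError;
  }
  let x = 1;
  return [sawTDZ, read()];
}

console.log(aggregate()); // [ true, 1 ]
```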


> Again, you might be better served to argue that block level lexical scoping is slower than single-contour, function level scoping. However, that's a forty-year-old argument that really shouldn't have to be reopened.

I don't think there is a need to claim this.  


>> 6. The only time-proven implementation of let/const (SpiderMonkey) did not implement temporal dead zones.  The impact of this feature on the practical performance of the web is not well enough understood relative to the value proposition of temporal dead zones.

> And it has a pretty bogus semantics for const.  But generally this isn't relevant as FF let/const cannot be interoperably used on the web, so it doesn't tell us much about the practical perf of the web. 

A primary motivation for let/const has been the 'defacto standard' line of reasoning based on existing experience and usage.  Can we really then turn around and say that we can't rely on any of the experience we have with the existing features because they have bogus semantics?


>> __Early Errors__
>> 
>> Let and const introduce a few new early errors (though this general concern impacts several other areas of ES6 as well).  Of particular note, assignment to const and re-declaration of 'let' are spec'd as early errors. 
>> 
>> Assignment to const is meaningfully different than previous early errors, because detecting it requires binding references *before any code runs*.  Chakra today parses the whole script input to report syntax errors, but avoids building and storing ASTs until function bodies are executed [2].  Since it is common for significant amounts of script on typical pages to be downloaded but not ever executed, this can save significant load time performance cost.  
>> 
>> However, if scope chains and variable reference binding for all scopes in the file need to be established before any code executes, significantly more work is required during this load period.  This work cannot be deferred (and potentially avoided entirely if the code is not called), because early errors must be identified before any code executes.
>> 
>> This ultimately means that any script which mentions 'const' will defeat a significant aspect of deferred AST building, and therefore take a load time perf hit.  
>> 
>> More generally - this raises a concern about putting increasingly more aggressive static analysis in early errors.  It may, for example, argue for a 3rd error category, of errors that must be reported before any code in their function body executes.  But more likely, it just argues for allowing any heavy static analysis to be postponed to late errors (or removed entirely and left to lint tools, if the raw overhead is particularly significant).

> I think you may have a stronger point, if this is really the actual basis of your concern. Perhaps there is call for a 3rd category of errors that can be deferred beyond initial parse.  However, I think const is probably the least of your concerns in this regard.  It would be useful to get deeper into which other semantics under consideration have an impact on deferred AST construction.

This is indeed a separate concern from the TDZ issue, and both are things that I believe are concerning from an overall performance impact perspective.  But you are right - this issue is much broader than const/let, and relates to a whole class of new static checks being added in ES6 and the potential load time cost they incur.
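As a small illustration of the class of check in question, redeclaration of 'let' is reported at parse time of the source even though the enclosing function never runs (a sketch assuming an engine that implements ES6 lexical declarations):

```javascript
// Redeclaring a lexical binding is an early (parse-time) error, so it is
// reported before any code executes - even though the function body below
// is never called.  This is the property that forces eager binding work.
let earlyError = false;
try {
  eval("function neverCalled() { let x; let x; }");
} catch (e) {
  earlyError = e instanceof SyntaxError;
}
console.log(earlyError); // true
```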

I'll pull together a list of the concerning new checks in the current ES6 drafts for discussion at this week's meeting.  As Andreas noted, I expect modules will bring another significant batch of these which is not yet in the spec.

Luke




