extending an ES6 class using ES5 syntax?

Jason Orendorff jason.orendorff at gmail.com
Mon May 16 17:31:08 UTC 2016


In short, cache invalidation is hard. Standard disclaimer: everything
below is a radical simplification of what really goes on in a JS
engine...

When a property access (or equivalently, a method call) happens, the
standard says to do a lookup along the prototype chain to find out
where the property actually lives.
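
A tiny example of what that means:

    // `bark` is not an own property of `dog`, so the lookup walks up
    // the chain and finds it on Dog.prototype.
    function Dog() {}
    Dog.prototype.bark = function () { return "woof"; };
    var dog = new Dog();
    dog.bark();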

This takes time. So implementations cache property lookup results. But
that turns out to be tricky. The next time that line of JS code runs,
you may be accessing a different object, or the object may have been
mutated somehow. How do we know the cached result still applies? Well,
there are two ways.

1. We can check each time the code runs, to make sure the object this
time is similar enough to the object last time, and the cached result
is still valid. Checking still takes time, but not as much time as a
full lookup. (Both approaches are sketched in code after this list.)

2. We can say, ok, this cache entry is guaranteed to be valid as long
as X, Y, and Z don't happen -- we can make a list of invalidating
events, such as the property being deleted, or an unexpectedly
different kind of object being passed in to this line of code. And
then the engine has to notice when any of those things happen and
purge the corresponding cache entries. This is faster than approach
#1 -- but then unexpected events deoptimize your code.
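
In toy code (every name below is made up for illustration; real
engines key caches on hidden "shapes" and do all of this in compiled
code):

    var nextShapeId = 0;
    function shapeOf(obj) {
      // Toy: each object gets its own id, so this cache only hits
      // when the very same object comes back. A real engine shares
      // one shape among all objects with the same structure.
      if (!obj.__shape__) { obj.__shape__ = ++nextShapeId; }
      return obj.__shape__;
    }

    // Approach #1: re-check on every run that the object looks like
    // it did last time; only then trust the cached result.
    function cachedGet(obj, name, cache) {
      if (cache.shape === shapeOf(obj)) {
        return cache.holder ? cache.holder[name] : undefined; // fast path
      }
      var holder = obj; // slow path: full prototype-chain lookup
      while (holder && !Object.prototype.hasOwnProperty.call(holder, name)) {
        holder = Object.getPrototypeOf(holder);
      }
      cache.shape = shapeOf(obj); // refill the cache
      cache.holder = holder;
      return holder ? holder[name] : undefined;
    }

    // Approach #2: skip the per-run check; remember which caches
    // depend on which assumptions, and purge them all when an
    // invalidating event (delete, [[Prototype]] change, ...) fires.
    var dependents = new Map(); // assumption key -> caches relying on it
    function noteAssumption(key, cache) {
      if (!dependents.has(key)) { dependents.set(key, []); }
      dependents.get(key).push(cache);
    }
    function invalidate(key) {
      (dependents.get(key) || []).forEach(function (cache) {
        cache.shape = null; // force the slow path on the next run
      });
    }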

Note how approach #2 turns the "cached result" into a kind of
performance *assumption*. The code runs fast until the assumption gets
broken. Such assumptions even get baked into JIT code, and then
instead of "purging" a cache entry we have to throw away a bunch of
compiled machine code and start fresh with less-optimistic
assumptions. This is not even rare: it is a totally normal thing that
happens...
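
One common way to see this in action (a sketch; exact behavior varies
by engine and version):

    function getX(o) { return o.x; }
    // Warm-up: every call sees objects of the same shape, so a JIT
    // can compile getX assuming that shape.
    for (var i = 0; i < 100000; i++) { getX({ x: i }); }
    // A differently-shaped object breaks the baked-in assumption; the
    // engine may throw away the optimized code and recompile.
    getX({ y: 1, x: 2 });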

Anyway, the point of all that is, changing the [[Prototype]] of an
object is one of these events that can invalidate lots of cached
results at once. Neither approach to caching can cope with that and
still run at full speed.
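
That includes the pattern from the original question, where an
existing prototype object is mutated in place:

    function A() {}
    function B() {}
    // Changes B.prototype's [[Prototype]] after the fact -- an
    // invalidating event for anything already cached about that chain.
    Reflect.setPrototypeOf(B.prototype, A.prototype);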

I guess I should note that if the change happens early enough during
library setup, and then never happens again at run time, some
implementations might cope better than others. I think ours sets a bit
on the object that means "my [[Prototype]] has been changed" and
invalidates some kinds of cached results forever after, because that
is the simplest thing. We could always make more corner cases fast by
making the engine even more complicated! But if you got through all of
the above and you're thinking "well, you could just add another hack,
it's only a little hack" then maybe you are not thinking about JS
engines as software that has to be maintained for the long haul. :)

`B.prototype = Object.create(A.prototype)` is less of a problem, for
our implementation, because objects created by constructor B later get
a prototype chain where every object is clean (none of them have ever
had their [[Prototype]] changed; so no assumptions have been
invalidated).
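
Spelled out, that fast pattern is the classic ES5 one:

    function A() {}
    function B() { A.call(this); }
    // A fresh object whose [[Prototype]] is A.prototype from birth;
    // nothing on the chain is mutated after creation.
    B.prototype = Object.create(A.prototype);
    B.prototype.constructor = B;
    var b = new B(); // every object on b's chain is "clean"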

-j


On Sun, May 15, 2016 at 9:53 AM, Michael Theriot
<michael.lee.theriot at gmail.com> wrote:
> Is there a reason Reflect.setPrototypeOf(B.prototype, A.prototype)
> can't be optimized on class declaration the same way B.prototype =
> Object.create(A.prototype) is?
> _______________________________________________
> es-discuss mailing list
> es-discuss at mozilla.org
> https://mail.mozilla.org/listinfo/es-discuss

