C. Scott Ananian
ecmascript at cscott.net
Wed May 21 11:51:48 PDT 2014
On Wed, May 21, 2014 at 9:59 AM, Dmitry Lomov <dslomov at chromium.org> wrote:
> I think it would be weird if some of them fail hard and some would behave
> as if the length is zero. Consistency is always good.
> Why "fail hard" is more desirable?
It is desirable because it allows for more efficient implementations in the
common case. Quoting Allen Wirfs-Brock:
> Also note that, ignoring this new requirement, a Typed Array's length,
> byteLength, and byteOffset are all constants and this fact is used in
> specifying the behavior of the methods that operate upon them. If they can
> change (even to 0) then this can occur on any operation that can trigger
> side-effects. (For example, consider calling the callback function on
> 'map' or similar methods). Do we really want to dynamically reconsider
> changes to 'length' as opposed to simply letting throws occur on access
> to the neutered ArrayBuffer?
> In particular, I don't want to have to scatter length change checks
> throughout the algorithms in case one gets neutered as a side-effect of a
> callback or a proxy mediated access.
As a concrete example, when iterating over a typed array inside the `map()`
implementation, it used to be possible to hoist the length of the array out
of the loop, since it was a constant, and to unroll the loop if the length
was known to be small. If the length of the array can change, it must be
re-checked on every iteration and unrolling typically won't happen. You can
special-case this, since you know the only way the length can change is if
the array is neutered -- but all this special-casing adds up. It is cleaner
(and thus, in the absence of special-casing, faster) to use the existing
exception mechanism to fail fast where necessary (for example, when you try
to access the next element of an array whose buffer has been neutered).
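To illustrate the hoisting argument, here is a minimal self-hosted sketch of
a `map()`-style loop (not any engine's actual implementation; the function
name `typedArrayMap` is made up for this example). The length is read once,
before the loop, and never re-checked; in the "fail hard" design discussed
above, a neutered buffer would cause the element access itself to throw
rather than requiring a per-iteration length check:

```javascript
// Hypothetical sketch of a map() loop over a typed array. Because the
// length is treated as a constant, it is hoisted out of the loop: read
// once up front, never re-read inside the loop body.
function typedArrayMap(ta, callback) {
  const len = ta.length;                 // hoisted once; assumed constant
  const result = new ta.constructor(len);
  for (let i = 0; i < len; i++) {
    // In the fail-hard design, if the callback neutered `ta`, the access
    // to ta[i] would throw here -- no explicit length re-check is needed.
    result[i] = callback(ta[i], i, ta);
  }
  return result;
}
```

In the alternative "length becomes zero" design, this loop would instead have
to re-read `ta.length` (or test for neutering) on every iteration, which is
exactly the scattered checking the quoted text objects to.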