using Private name objects for declarative property definition.

Andreas Rossberg rossberg at
Tue Jul 12 02:54:38 PDT 2011

On 9 July 2011 17:48, Brendan Eich <brendan at> wrote:
> Adding names as a distinct reference and typeof type, extending
> WeakMap to have as its key type (object | name), adds complexity
> compared to subsuming name under object.

It seems to me that you are merely shifting the additional complexity
from one place to another: either weak maps must be able to handle
(object + name) for keys (instead of just object), or objects must
handle (string + object * isName) as property names (instead of just
string + name). Moreover, the distinction between names and proper
objects will have to be deeply engrained in the spec, because it
changes a fundamental mechanism of the language. Whereas WeakMaps are
more of an orthogonal feature with rather local impact on the spec.
(The same is probably true for implementations.)

>> Why do you think that it would that make WeakMap more complicated? As
>> far as I can see, implementations will internally make that very type
>> distinction anyways.
> No, as proposed private name objects are just objects, and WeakMap
> implementations do not have to distinguish (apart from usual GC mark
> method virtualization internal to implementations) between names and
> other objects used as keys.
>> And the spec also has to make it, one way or the other.
> Not if names are objects.

I think an efficient implementation of names in something like V8 will
probably want to assign different internal type tags to them either
way. Otherwise, we'll need extra tests for each property access, and
cannot specialise as effectively.

> I'm not sure which "class" you mean. The [[ClassName]] disclosed by
> Object.prototype.toString.call(x).slice(8,-1) is one possibility, which is
> one of the many and user-extensible ways of distinguishing among
> objects. class is just sugar for constructor/prototype patterns
> with crucial help for extends and super.

I meant the [[Class]] property (I guess that's what you are referring
to as well). Not sure what you mean when you say it is
user-extensible, though. Is it in some implementations? (I'm aware of
the somewhat scary note on p.32 of the spec.) Or are you just
referring to the toString method?

I appreciate the ongoing discussion, but I'm somewhat confused. Can I
ask a few questions to get a clearer picture?

1. We seem to have (at least) a two-level nominal "type" system: the
first level is what is returned by typeof, the second refines the
object type and is hidden in the [[Class]] property (and then there is
the oddball "function" type, but let's ignore that). Is it the
intention that all "type testing" predicates like isArray, isName,
isGenerator will essentially expose the [[Class]] property?
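The two levels can be illustrated directly, with Array.isArray standing in for the isX predicates in question:

```javascript
// First level: typeof gives a coarse tag.
console.assert(typeof [] === "object");
console.assert(typeof function () {} === "function"); // the "oddball"

// Second level: predicates that refine the object type,
// conceptually keyed on something like [[Class]].
console.assert(Array.isArray([]) === true);
console.assert(Array.isArray({}) === false);
```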

2. If there are exceptions to this, why? Would it make sense to clean
this up? (I saw Allen's cleanup strawman, but it seems to be going the
opposite direction, and I'm not quite sure what it's trying to achieve.)

3. If we can get to a uniform [[Class]] mechanism, maybe an
alternative to various ad-hoc isX attributes would be a generic
classof operator?

4. What about proxies? Is the idea that proxies can *never* emulate
any behaviour that relies on a specific [[Class]]? For example, I
cannot proxy a name. Also, new classes can only be introduced by the

5. What are the conventions by which the library distinguishes between
"regular" object properties and operations, and meta (reflective)
ones? It seems to me that part of the confusion(?) in the discussion
is that the current design makes no real distinction. I think it is
important, though, since e.g. proxies should be able to trap regular
operations, but not reflective ones (otherwise, e.g. isProxy wouldn't
make sense). Also, modern reflective patterns like mirrors make the
point that no reflective method should be on the reflected object

