On dropping @names

Andreas Rossberg rossberg at google.com
Tue Dec 4 06:32:43 PST 2012


On 4 December 2012 14:28, Claus Reinke <claus.reinke at talk21.com> wrote:
> Could you please document the current state of concerns, pros and
> cons that have emerged from your discussions so far? You don't
> want to have to search for these useful clarifications when this topic
> comes up again (be it in tc39 or in ES6 users asking "where is private?").

There were several overlapping concerns: that @-names would probably
require implicit scoping to be practical in classes; that their
operational generativity may be a mismatch with their seemingly
static meaning in certain syntactic forms; and potential ambiguities
about what @x actually denotes in a given context. And probably more.
Most of that should be in the meeting minutes.

> Implicit scoping in a language with nested scopes has never been a
> good idea (even the implicit var/let scopes in JS are not its strongest
> point). Prolog got away with it because it had a flat program structure
> in the beginning, and even that fell down when integrating Prolog-like
> languages into functional ones, or when adding local sets of answers.

Indeed. (Although I don't think we have implicit let-scopes in JS.)

> This leaves the "generativity" concerns - I assume they refer to
> "gensym"-style interpretations? ES5 already has gensym, in the
> form of Object References (eg, Object.create(null)), and Maps
> will allow to use those as keys, right?
>
> The only thing keeping us from using objects as property names
> is the conversion to strings, and allowing Name objects as property
> names is still on the table (as is the dual approach of using a
> WeakMap as private key representation, putting the object in the
> key instead of the key in the object).
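
For what it's worth, a minimal sketch of the WeakMap dual you
describe (the helper names here are made up):

    // private state lives on the key side, weakly keyed by the object
    const secrets = new WeakMap();

    function setSecret(obj, value) { secrets.set(obj, value); }
    function getSecret(obj) { return secrets.get(obj); }

    // when obj becomes garbage, its entry is collected along with it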

Symbols will definitely still be usable as property names; that is
their main purpose.
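
For example (a sketch, assuming the Symbol API roughly as proposed;
details may still change):

    let key = Symbol();
    let obj = {};
    obj[key] = "hidden";            // symbol used as a property name
    console.log(obj[key]);          // "hidden"
    console.log(Object.keys(obj));  // [] - symbols are not string keys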

The main technical reason that arbitrary objects cannot be used is
indeed backwards compatibility. The main moral reason is that using
general objects only for their identity seems like overkill: you want
a more targeted, more lightweight feature.
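
To see the compatibility problem, note that in ES5 any object used in
a property position is converted with ToString, so distinct objects
silently collide, and existing code may well depend on that:

    var obj = {};
    var k1 = {};
    var k2 = {};
    obj[k1] = 1;                    // key becomes "[object Object]"
    console.log(obj[k2]);           // 1 - k2 stringifies to the same key
    console.log(Object.keys(obj));  // ["[object Object]"]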

> So I'm not sure how your concerns are being addressed by
> merely replacing a declarative scoping construct with an explicitly
> imperative gensym construct?

We have the gensym construct anyway; @-names were intended to be
mere syntactic sugar on top of it.
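
That is, roughly the following correspondence (the @-syntax is
hypothetical, as in the strawman, and the exact desugaring was still
under discussion):

    let obj = {};

    // with @-names (hypothetical syntax):
    //   private @secret;
    //   obj.@secret = 42;

    // intended as sugar for roughly:
    const secret = Symbol();  // a fresh symbol per evaluation
    obj[secret] = 42;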

> There is a long history of declarative interpretations of gensym-
> like constructs, starting with declarative accounts of logic variables,
> through name calculi (often as nu- or lambda/nu-calculi, with the
> Greek letter nu for "new names"), all the way to pi-calculi (where names
> are communication channels between processes). Some of these
> calculi support name equality, some support other name features.
>
> The main steps towards a non-imperative account tend to be:
>
> - explicit scopes (this is the difference from gensym)
> - scope extrusion (this is the difference from lambda scoping)

Scope extrusion semantics is actually equivalent to an allocation
semantics. The only difference is that the store is part of your term
syntax instead of being a separate runtime environment, but that does
not make it more declarative in any deeper technical sense. Name
generation is still an impure effect, albeit a benign one.
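
For reference, the standard scope extrusion law, in pi-calculus
notation:

    ((nu n) P) | Q  ==  (nu n)(P | Q),   provided n is not free in Q

Floating every nu-binder to the outermost position in this way
collects the names allocated so far, which is exactly the store.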

Likewise, scoped name bindings are equivalent to a gensym operator
when names are first-class objects anyway (which they are in
JavaScript).
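
In JavaScript terms (the scoped-binder syntax in the comment is
hypothetical):

    let saved;
    function leak(name) { saved = name; }  // lets the name escape

    // a scoped name binding:
    //   private @x { leak(@x); }
    // behaves the same as allocating a fresh first-class name:
    {
      const x = Symbol();
      leak(x);  // once the name escapes, the scope is just allocation
    }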

> As Brendan mentions, nu-scoped variables aren't all that different
> from lambda-scoped variables. It's just that most implementations
> do not support computations under a lambda binder, so lambda
> variables do not appear to be dynamic constructs to most people,
> while nu binders rely on computations under the binders, so a
> static-only view is too limited.

I think you are mixing something up. None of the classical name
calculi, like the pi-calculus or the nu-calculus, reduce or extrude
name binders under abstractions either.

/Andreas

