On dropping @names

Claus Reinke claus.reinke at talk21.com
Tue Dec 4 05:28:49 PST 2012

>> Recall the main objection was not the generativity of @names mixed with the obj.@foo pun 
>> (after-dot). It was the usability tax of having to declare
>>  private @foo;
>> before defining/assigning
>>  obj.@foo = foo;
>> (in a constructor, typically).
> Good clarification, thanks. Yes, the more important issue is the tension between having to 
> predeclare all the @names in a scope and the danger of implicit scoping. (That said, the 
> generativity does worry me. It's a smell.)

| Just to be super-sure we grok one another, it's not the generativity by
| itself (since nested function declarations are the same, as I mentioned
| in the meeting). It is the generativity combined with the obj.@foo pun
| on Good Old obj.foo where 'foo' is a singleton identifier equated to a
| string property name. Right?
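To make the pun concrete, here is a sketch (not the proposal's actual syntax) that uses an ES6 Symbol as a stand-in for a generative @name, since both create a fresh, unforgeable property key on each evaluation:

```javascript
// Good Old obj.foo: 'foo' is a singleton identifier, i.e. the string "foo",
// so every mention of .foo in any scope names the same property:
const a = {};
a.foo = 1;

// A generative @foo: each declaration creates a *new* name, so two scopes
// declaring @foo end up with distinct properties despite identical spelling.
// Symbol() models that freshness here:
const foo1 = Symbol("foo");
const foo2 = Symbol("foo");
const obj = {};
obj[foo1] = "first";
obj[foo2] = "second"; // a different slot, not an overwrite
```

The pun is exactly that `obj.@foo` reads like `obj.foo`, yet the former names a freshly generated key while the latter names a shared string.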

Could you please document the current state of concerns, pros and
cons that have emerged from your discussions so far? You don't
want to have to search for these useful clarifications when this topic
comes up again (be it in tc39 or in ES6 users asking "where is private?").

Implicit scoping in a language with nested scopes has never been a
good idea (even the implicit var/let scopes in JS are not its strongest
point). Prolog got away with it because it had a flat program structure
in the beginning, and even that fell down when integrating Prolog-like
languages into functional ones, or when adding local sets of answers.

So starting with explicit scoping, adding shortcuts if necessary
(and only after careful consideration), seems the obvious route
suggested by language design history.

This leaves the "generativity" concerns - I assume they refer to
"gensym"-style interpretations? ES5 already has gensym, in the
form of Object References (eg, Object.create(null)), and Maps
will allow those to be used as keys, right?
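A minimal sketch of that ES5-style gensym, paired with a Map that accepts arbitrary objects as keys:

```javascript
// In ES5 terms, a "gensym" is just a fresh, empty object used purely for
// its identity; a Map (unlike a plain object) can key on it directly,
// with no conversion to a string.
function gensym() {
  return Object.create(null); // unique by reference; no prototype, no contents
}

const k1 = gensym();
const k2 = gensym();

const table = new Map();
table.set(k1, "secret one");
table.set(k2, "secret two"); // distinct entries: Map keys compare by identity
```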

The only thing keeping us from using objects as property names
is the conversion to strings, and allowing Name objects as property
names is still on the table (as is the dual approach of using a
WeakMap as private key representation, putting the object in the
key instead of the key in the object).
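The dual approach can be sketched with a WeakMap (the names here are illustrative, not from any proposal): the private "key" lives outside the object, and the object itself is used as the key.

```javascript
// "Putting the object in the key instead of the key in the object":
// the WeakMap plays the role of the private name. Only code that can
// reach it can read the private state, and no property is ever added
// to the object itself.
const hidden = new WeakMap(); // illustrative name

function Point(x, y) {
  hidden.set(this, { x: x, y: y }); // attach private state off-object
}

Point.prototype.getX = function () {
  return hidden.get(this).x;
};

const p = new Point(3, 4);
```

A side benefit of this direction: when the object becomes unreachable, its WeakMap entry can be collected along with it.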

So I'm not sure how your concerns are being addressed by
merely replacing a declarative scoping construct by an explicitly
imperative gensym construct?

There is a long history of declarative interpretations of gensym-
like constructs, starting with declarative accounts of logic variables,
through name calculi (often nu- or lambda/nu-calculi, with the Greek
letter nu for "new names"), all the way to pi-calculi (where names
are communication channels between processes). Some of these
calculi support name equality, some support other name features.

The main steps towards a non-imperative account tend to be:

- explicit scopes (this is the difference from gensym)
- scope extrusion (this is the difference from lambda scoping)

The former allows limits to be placed on who can mention/co-create a
name in a program; the latter allows names to be passed around, once
created. With gensym, there is only one creator; all sharing comes
from passing the symbol around while expanding its scope (think:
"do { private @name; @name }").
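The gensym-style sharing described above can be sketched as follows, again with Symbol standing in for the generated name:

```javascript
// The name is created in exactly one place; every other party that should
// share it must be handed the symbol value itself, expanding its scope.
function makeCounter() {
  const count = Symbol("count"); // the one creator of this name
  const obj = { [count]: 0 };    // symbol-keyed slot, invisible to Object.keys
  return {
    obj,
    tick: function () { return ++obj[count]; }, // shares by closing over it
  };
}

const c = makeCounter();
c.tick();
c.tick();
```

Outside `makeCounter`, nothing can forge access to the slot without being handed `count`; access travels only along with the value.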

As Brendan mentions, nu-scoped variables aren't all that different
from lambda-scoped variables. It's just that most implementations
do not support computation under a lambda binder, so lambda
variables do not appear to be dynamic constructs to most people,
while nu binders rely on computation under the binders, so a
static-only view is too limited.

I'm not saying that @names are necessary or have the best
form already - just that I would like to understand the concerns
and how they are addressed by the decisions made.


More information about the es-discuss mailing list