Wouldn't being bolder on new constructs save the "breaking the web" costs for the future?

Herby Vojčík herby at mailbox.sk
Mon Jan 7 11:24:08 PST 2013

[repost; first version a few days ago disappeared somewhere]


recently I came across two issues with a very similar pattern, involving 
the semantics of new language constructs. In both cases it seems that 
being bolder about the changes that _new_constructs_ bring (that is, not 
taking steps that are too small) may save the cost of "we can't make 
progress here, because it would be a breaking change for the web".

(just 7 more paragraphs, pls read on :-) ):

The first one was in the thread "Good-bye constructor functions?", where 
the topics discussed were the new constructs `class` and `super`. As it 
stands now, they are buggy (internally inconsistent), and while thinking 
about possible solutions I discovered that with `class` there is little 
need to couple a class to its constructor. So I proposed stepping out of 
this box and opening up the space of "what can appear as Foo in `new Foo`" 
to any [[Construct]]-bearing entity (that is, having `class` produce a 
plain object with [[Construct]] instead of the legacy behaviour of 
coupling its identity with the identity of the constructor).

(let's pretend that we can overcome the technical details and that the 
proposal may actually work; I showed in that post that it may very well 
be so; what I want to discuss here is the higher-level pattern of 
defensiveness vs. later compatibility problems)
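To make the coupling concrete, here is a small sketch. The first half shows the status quo (class identity == constructor identity); the second half approximates the decoupled idea using a Proxy `construct` trap, so that a separate [[Construct]]-bearing object can stand in after `new`. The name `PointMaker` is illustrative only, not part of any proposal:

```javascript
// Status quo: a class's identity is coupled to its constructor function.
class Point {
  constructor(x, y) { this.x = x; this.y = y; }
}
console.log(Point === Point.prototype.constructor); // true

// Rough approximation of the decoupled idea: let anything bearing
// [[Construct]] appear after `new`. A Proxy's `construct` trap
// provides a distinct object playing that role.
const PointMaker = new Proxy(function () {}, {
  construct(_target, args) { return new Point(...args); }
});

const p = new PointMaker(1, 2);
console.log(p instanceof Point);   // true
console.log(PointMaker === Point); // false; identities are decoupled
```

This only emulates the observable behaviour; the actual proposal concerns what `class` itself evaluates to, which no user-land code can fully reproduce.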

One reason I brought up for why it would be fine to consider this now 
(not later) is: if `class`, as a new language construct, behaves the 
same as legacy constructor functions (tightly coupling the identity of 
the class and the constructor), people using ES6 will accept that "this 
is how `class` works". If later we wanted to "liberate" the space of 
objects usable as a "constructor" (after `new`), by making `class` 
return non-coupled class objects, it would not be possible "because it 
would break existing code".

OTOH, if `class` returned a class object decoupled from its constructor, 
it might impose some tax on refactoring existing class-like constructs 
into the `class` keyword, but people would adopt the fact that "the 
space of constructors has widened", and no backward compatibility 
problem would appear, as it would with splitting it into two steps.

The second one was the reified-nil discussion involving pattern 
destructuring and the existential operator (again, let's pretend the 
technicalities can be solved). There, the issue was that the semantics 
of the existential operator (and consequently of refutable 
destructuring, since they are coupled) would be simplified by bringing a 
nil object into the equation, by returning it from the existential 
operator or inside refutable destructuring. One possibility was to 
include the nil object head-on in the language (as part of the 
{undefined, null} ==-equivalence group), thus making it first-class and 
making things like `foo = (bar = p?.q).r` work fine.
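A user-land approximation of such a nil, assuming a Proxy whose `get` trap always returns nil again, so chained property access never throws (the name `nil` and this modeling are purely illustrative, and the ==-equivalence with undefined and null that the proposal assumes cannot be emulated this way):

```javascript
// Hypothetical first-class nil: any property access on it yields nil.
const nil = new Proxy(function () {}, {
  get() { return nil; },
  apply() { return nil; }
});

// With such a nil, `p?.q` could evaluate to nil when `q` is missing,
// so the chained assignment in the post never throws:
const p = {};                                 // no `q` property
const bar = (p.q === undefined) ? nil : p.q;  // stand-in for p?.q
const foo = bar.r;                            // nil, not a TypeError
console.log(foo === nil); // true
```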

Another, defensive, possibility is to use nil behind the scenes but to 
change it to undefined when it becomes visible, with the proposition of 
including a first-class nil in ES7. This brings little cost now, but in 
ES7 I am afraid of the same effect as in the previous case; namely, that 
there will already be code using the new constructs (refutable 
destructuring, or the existential operator if included in ES6) that 
would be broken if ES7 changed the semantics to a first-class nil. 
Again, this would not be a problem if the new constructs brought the new 
semantics right away.
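For illustration, using the `?.` syntax as it eventually shipped in ES2020 (which took the defensive choice described here, yielding undefined on a failed access), a sketch of the kind of code that would break if a later edition switched to a first-class nil:

```javascript
// Under the "defensive" semantics, a failed `p?.q` yields undefined,
// and code comes to rely on that:
const p = {};
console.log(p?.q === undefined); // true

// If a later edition changed `?.` to produce a first-class nil object
// instead, every `=== undefined` check like the one above would
// silently flip to false, which is exactly the breakage predicted.
```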

Am I missing something, or is there something to this pattern? That new 
constructs, by being bolder with their semantics, can:
a) save backward compatibility cost compared to more granular progress;
b) be used to piggyback new semantics fairly cheaply (since a construct 
is new in this version, it is more tolerated for it to bring its new 
semantics with it)?
