minutes, TC39 meeting Tues 5/22/2012

David Herman dherman at mozilla.com
Wed May 23 13:30:28 PDT 2012


These are a combination of my edits of Rick's notes from the first topic, my notes from subsequent topics, and then my reconstruction from memory of the conversation on the last topic. So for those present, please feel free to correct the record.

Dave

----------------------------------------------------------------------

People:

BT: Bill Ticehurst, Microsoft
LH: Luke Hoban, Microsoft
BE: Brendan Eich, Mozilla
DH: Dave Herman, Mozilla
AWB: Allen Wirfs-Brock, Mozilla
AR: Alex Russell, Google
EA: Erik Arvidsson, Google
MM: Mark Miller, Google
RW: Rick Waldron, jQuery
YK: Yehuda Katz, jQuery
DC: Doug Crockford, eBay
OH: Ollie Hunt, Apple

Topic: Binary data

DH: typed arrays were designed for two use cases:
- shipping data from CPU -> GPU for WebGL
- doing binary file/network I/O
DH: former requires matching the endianness expected by the GPU for interpreting shader scripts (e.g. that read bytes as int32 or float64), so they designed it to use the system's native endianness
DH: latter requires explicitly specifying endianness, which DataView API allows
DH: but casting ArrayBuffers to different types exposes system endianness, and basically the entire web is implemented on little endian systems, so the web is being written with the assumption of little endian, despite not being specified
DH: we should specify little endian. bi-endian systems (which many modern big-endian systems actually are) have HW support for little-endian; big-endian-only systems can still implement it by compiling shaders to do the byte swapping themselves
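
(An illustrative sketch, not from the meeting, of the "casting" DH describes: viewing one buffer with two typed array types exposes the platform's byte order; the variable names are invented.)

    var buf = new ArrayBuffer(4);
    var bytes = new Uint8Array(buf);   // view the same storage as bytes...
    var words = new Uint32Array(buf);  // ...and as a 32-bit integer
    words[0] = 0x0A0B0C0D;
    // on a little-endian system bytes[0] is 0x0D; on a big-endian system it is 0x0A
    var isLittleEndian = (bytes[0] === 0x0D);
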
YK: I agree; hard for devs to reason about endianness. expecting large use cases for binary data?
DH: yes; e.g. crypto algorithms, optimizing bignum libraries; I wrote a "float explorer" that displays the bit patterns of IEEE754 doubles by casting, and discovered after months that my explicit check for system little-endianness was flawed and always produced true; since I had no big-endian system to test on, I only found the flaw by chance.
AWB: agree that most devs will not understand endianness differences
LH: do expect some big-endian UAs
DH: right, in particular game consoles; some have hardware support for bi-endian; how robust that support is is unknown. but can still implement byte-swapping by compiling shaders.
DH: right now, there's no one implementing big-endian browsers making the case that they can't implement little-endian, and meanwhile the web is being written to assume little-endian.
YK: if we standardize little-endian, does it become impossible to discover the system's endianness?
DH: yes, but we could add a feature-detection API that explicitly provides that information. you wouldn't need it though.
DH: if real perf issues arise, willing to consider little-endian by default with explicit opt-in to big-endian. still possible to write unportable code (e.g. blind copy-paste programming), but at least less likely. but let's not solve the problem till we know we have it
AWB: endianness is clear for integers, but what about floats? aren't there other variations e.g. what order the mantissa and exponent and sign go in?
DH: not sure, but we should just standardize whatever format the web is using.
AWB: yes. if you know you need big endian, you'll do the conversion
DH: mostly that's probably just for I/O, where DataView does that for you automatically
DH: standardize little endian?
EA, YK: de facto standard.
LH: I need to contact people on my end; there may be big-endian browsers coming
DH: broadly speaking, little-endian has won in the hardware world. regardless, this is becoming the de facto standard. HW support will likely get better and better, and leaving it unspecified is solving a problem that doesn't exist, while creating issues of its own.
BT, BE: DataView defaults to big-endian
DH: yes, people are unhappy about that inconsistency. we could change DataView to default to little-endian for consistency. but in the world, big-endian has won as the network byte order, while little-endian has won as the CPU byte order.
BE: a foolish consistency!
DH: right-- the defaults are modelled on reality, and reality is lumpy
BE: yes. <<provides some historical insight>>
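
(A sketch of the DataView defaults under discussion: the optional littleEndian flag on the get/set methods defaults to false, i.e. big-endian/network byte order.)

    var view = new DataView(new ArrayBuffer(8));
    view.setUint32(0, 0x0A0B0C0D);        // no flag: big-endian
    view.getUint8(0);                     // 0x0A on every platform
    view.setUint32(4, 0x0A0B0C0D, true);  // explicit little-endian
    view.getUint8(4);                     // 0x0D on every platform
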
YK: what observable effects would this change have on WebGL?
DH: likely none. AFAIK when you create data for WebGL, you don't use casting in ways that care about endianness, you just ship to the GPU, and that's what's sensitive to the endianness
YK: could we eliminate the observable effects by just converting all the data en masse?
DH: no, not that simple; you have to know which bytes need to be swapped
DC: I thought we were going to replace typed arrays. can we just leave it out?
BE: they'll go to W3C. this is reality; we must embrace
DH: we can embrace and then create better, more ergonomic extensions
DC: tell me more
DH: typed arrays are views over ArrayBuffers, which are buckets of bytes; we add a new view type of structs:

    S = struct({
        x: uint8,
        y: uint8
    });

S is not a struct object but a struct type. then:

    x = new S

creates a new instance of this struct type

    A = Array( S )

again, a type, not an instance

YK: that's a little confusing
DH: whole idea is an API for creating new types, so there's a meta-level here that is inherent complexity; can certainly bikeshed API names later
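
(A runnable ES5 approximation, not the proposed API, of what "a struct is a view over an ArrayBuffer" means; the helper name is invented.)

    // emulate S = struct({ x: uint8, y: uint8 }) as a view over 2 bytes of a buffer
    function makeS(buffer, byteOffset) {
        var view = new DataView(buffer, byteOffset, 2);
        return {
            get x()  { return view.getUint8(0); },
            set x(v) { view.setUint8(0, v); },
            get y()  { return view.getUint8(1); },
            set y(v) { view.setUint8(1, v); }
        };
    }

    var buf = new ArrayBuffer(2);
    var s = makeS(buf, 0);
    s.x = 3;
    s.y = 200;
    new Uint8Array(buf);  // [3, 200] -- field access reads/writes the underlying bytes
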
LH: the idea we discussed before was making a struct's .buffer null unless you explicitly constructed it wrapping an existing buffer
DH: yes, and once we do that, we can add pointer types:

S = struct({
    x: uint32,
    foo: object
})

DH: for security, *absolutely cannot* expose the buffer to casting
DH: also, enforce alignment constraints to maintain invariant that normal ArrayBufferViews never have unaligned access; will write this up and could use help checking my logic
AWB: isn't there an inconsistency about when there's padding and when there isn't?
DH: no, if you create an unaligned struct type:

S = struct({
    x: uint8,
    y: uint32
})

then if you construct it fresh, you can't observe the padding, and if you wrap it around a buffer, you get an error:

o = new S; // can't access buffer
o = new S(buf, 2) // error: unaligned type

DH: as always, you can do unaligned access via data view:

d = new DataView(buf)

v = d.get(S, 17) // unaligned read

DH: in that example, v is an object pointing to index 17 in buf, and property accesses do the proper logic to perform unaligned access
LH: since strings are pointers, would also be good to be able to have a string type
BE: Waldemar wanted that a while back
DH: I think he was talking about inline strings, which are best treated as a byte array plus Josh Bell's encoding/decoding API (in progress); but I agree we should make it possible to store all JS values; a string type is a no-brainer, +1
<<everyone generally favorable>>
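
(For reference: the encoding/decoding work mentioned above is what became the TextEncoder/TextDecoder API; a minimal sketch of converting between strings and byte arrays.)

    var bytes = new TextEncoder().encode("héllo");       // Uint8Array of UTF-8 bytes
    var text  = new TextDecoder("utf-8").decode(bytes);  // back to a JS string
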


Topic: Classes

LH: anything we standardize finalizes basic choice of syntax; maxmin decides we're going to have a new kind of body instead of trying to make class be a constructor-like concept
AWB: we need to agree on something, so let's agree on this basic structure
MM: constructor syntax is DOA for good reason, b/c of the scope issues; in my proposal, class parameters *were* in scope but methods were instance methods; but this was a non-starter for implementation
LH: I think there are ways to mitigate: put constructor locals in scope but poison them
RW: that's too confusing
BE: howling, screaming wart
MM: what's the advantage over maxmin?
LH: syntactic weight is fairly significant
AR: there's weight, but it's a syntactic entryway to more features
LH: it blocks off one key door: the constructor syntax, which is the shortest and simplest of all the possibilities
MM: one of the big advantages of JS is simple lexical scope; confusion of poisoned names is more important cognitively and the shorter syntax doesn't compensate
DH: what's the syntactic convenience?
LH: one less level of indentation
YK: at cost of clarity
RW: and you added public!
BE: which cost is worse, the public keyword or not having the thing you're calling new on hoisted to the top?
DH: public keyword is almost outright hostile to programmers; goes completely against the rest of the way JS works
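
(A sketch, not shown at the meeting, of the maximally minimal ("maxmin") class shape being discussed: a class body containing only a constructor and methods, with no instance-property or visibility syntax; the example names are invented.)

    class Point {
        constructor(x, y) {
            this.x = x;
            this.y = y;
        }
        toString() {
            return "(" + this.x + ", " + this.y + ")";
        }
    }

    String(new Point(1, 2));  // "(1, 2)"
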
YK: given that maxmin is so minimal, I would like some escape valves for making it something we can build on (extension hooks)
AWB: we need consensus on base level first, before we go into extensions to the proposal
AWB: could you live with maxmin alone?
YK: no. I will receive pressure to use class syntax for my existing class systems, but without at least one escape valve, I will have to choose between saying no or saying yes and killing features
AR: there'll be needs for things like mixins
EA: that's been debunked; the RHS of extends is an AssignmentExpression; mixins are perfectly possible
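
(A sketch of EA's point, with invented helper and class names: because the extends clause takes an expression, a mixin helper can compute the superclass.)

    function View() {}
    View.prototype.render = function () { return "<div>"; };
    var Draggable = { drag: function () { return "dragging"; } };

    // build a constructor that extends Base and copies in the mixin's methods
    function mixin(Base, methods) {
        function Mixed() { Base.apply(this, arguments); }
        Mixed.prototype = Object.create(Base.prototype);
        Object.keys(methods).forEach(function (k) { Mixed.prototype[k] = methods[k]; });
        return Mixed;
    }

    class Widget extends mixin(View, Draggable) {}

    new Widget().drag();  // "dragging", inherited via the computed superclass
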
AR: but there will be various things that can't be done
AWB: you can do this with a function that implements your hook
YK: that asks my users to call a special function
MM: there's simply no proposal for this additional feature for us to evaluate
AR: will you attempt to stone maxmin before it gets out of the gate, before we can even agree on it?
YK: well, no
LH: I won't try to stone classes, but I really hope if we're going to put something in the spec, a) it can actually be used in most cases where most people want to use it, and b) it be forwards-compatible with addressing remaining places
AWB: I wouldn't be behind this if I didn't think both those things are true
YK: I think those things are true; I am personally concerned that I will receive requests for things I can't implement
AWB: I think people will have issues in the short-term but they'll eventually improve
DH: rollout and adoption are important; YK, I'd like to know more specifically what you can do now that you can't do with maxmin
RW: try/catch never would've happened if people said "I can't use this now"; in 3 years, I think you'll rebuild Ember with classes
YK: here's why I know it's a real issue: people want Ember classes to work with CoffeeScript classes, and I have these exact issues
AR: no one's suggesting that it's impossible to do more than maxmin -- even in ES6 time frame -- but we need to agree on base foundation
DH: I think Luke's OCaml syntax is too confusing, it doesn't win enough for convenience, and it's less mainstream
BE: OCaml! I knew it!
LH: this clearly has limited amounts of support
AR: we've been through this part of the design process and sympathize, but we've already come to conclusion it doesn't work
MM: keeping scope simple is the most important thing IMO
LH: but what I miss is instance properties; the ability to see the shape of the object is very valuable, e.g. for completion lists in tools
AWB: if you have maxmin as starting point, there are ways to address it
LH: the proposed extensions I've seen are awkward
AR: this has been solved in other languages e.g. C++ initialization list
LH: I won't stand in the way
DH: pulse?
LH: I think Waldemar was totally opposed, I'm not totally opposed, just had concerns
DC: my concern: this isn't a new capability; but if it doesn't address all the needs, it'll just add more confusion and won't help
AWB: there's significant debate on a lot of things that could make things easier, but there's a basic structuring that is useful: it's useful not to have to wire up the prototypes properly
DC: but a lot of people are doing this with libraries
AR: but then they're not interoperable
DC: I don't feel compelled to go forward with something we can't be sure is right
YK: I see this being on the way to that, but not there yet
BE: you want something, Waldemar wants something else...
YK: I think it's ok if it's a subset, but we should be clear about that, so I can hold the line against using it
DC: tactically I'm opposed to that; if we're on the way but not there, let's do nothing
AR: I was in your camp some time ago. there's so much complexity in building a proposal that hangs together well that you have to throw some use cases overboard. we have a long history of not doing classes on the basis that we should wait to do it right. but we also have a long history of iterating on and improving things that are already in the language. I have faith that we will continue to iterate
DC: I'm saying we shouldn't ship until we're sure we're right
DH: design does not have empirical or rationalistic methods of validation. all you can do is discuss, prototype, build examples, and try it out, all of which are part of the Harmony process
DC: I'm fine with consensus
BE: Waldemar wants the ability to have "final" or "sealed" objects that produce errors on mis-named properties, which Mark would like as well
AR: I thought Waldemar wanted *only* that
MM: I think this is a subset of the original class proposal from last spring that Waldemar was on board with; the restrictions were only on const classes
BE: other than future-hostility concerns, which aren't falsifiable
MM: can check whether this is forward-compatible with the May proposal
AWB: that wasn't sound, it was just a whiteboard sketch
BE: Mark's saying that we can grow in the direction of the things Waldemar wants
AWB: I specifically asked Waldemar to say what he thought this was interfering with, but he hasn't come back with anything; I don't see why that couldn't be added with more syntax
AR: ISTM there's a fight over defaults
DH: if const is default, we will all be burned at the stake
AR: can't we get something useful if we don't solve the const problem?
BE: it's not clear to me people are asking for what Waldemar is asking for
MM: let's not spend time guessing Waldemar's position
BE: sure, let's just separate the default question from const question; can we agree const should not be default?
MM: I think that was always the agreement since last May
BE: I think when Waldemar returns, we may have to find a way to get past const, to achieve consensus
AWB: I think we should move forward, start specifying maxmin, knowing that we can remove it later
YK: I would like to put a stake in the ground
MM: until Waldemar can make his case, this isn't consensus
AR: there's some room here to suggest we can't wait forever
EA: can we start saying let's build on this instead of waiting another two months?
MM: let's not push until Waldemar has returned
AR: OK, but we do have a deadline, and there's been plenty of time to register complaints
EA: I would just like to start prototyping
AWB: and I would like to start doing spec work
MM: sure, we should all push on all fronts, but we don't have consensus without Waldemar's agreement
BE: correct


Topic: __proto__

OH: it's a getter/setter in JSC
MM: one sane approach: refuses to change prototype of object born in another global; other sane approach: refuses to modify prototype of any object that inherits from that context's Object.prototype
DH: what about the getter?
MM: always works; equivalent to Object.getPrototypeOf
AWB: exposing the setter as a function means that any method call with one argument could potentially be a proto modifier
DH: can't you already do that with a function that delegates to .__proto__ ?
AWB: if you thought you deleted Object.prototype.__proto__ but someone squirreled away a copy of the setter
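
(A sketch of the concern, assuming __proto__ is specified as an extractable accessor on Object.prototype.)

    var setProto = Object.getOwnPropertyDescriptor(Object.prototype, "__proto__").set;
    delete Object.prototype.__proto__;       // code that thinks __proto__ is now gone...
    var victim = {};
    setProto.call(victim, Array.prototype);  // ...can still have an object's prototype rewired
    typeof victim.push;                      // "function"
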
BE: I just don't like it
MM: not fatal for security
BE: we're inventing something new that's not been tested, based on some pure aesthetic of not wanting magic data properties. I just don't like it
DH: when we already do that with array.length and indexed properties anyway!
BE: this is a turd we do not want to polish
MM: I don't like the way it's specified
DH: sure. I think we agree about the behavior
BE: agreement: it's in Annex B, it'll be a pseudo-data property
MM: I don't think that's clear
EA: I would pick an accessor
BE: this isn't a clean-slate design experiment!
MM: b/c Firefox doesn't implement the accessor?
BE: b/c no one other than JSC very recently
MM: we can analyze the security properties
BE: this is not about security, it's about unintended consequences
YK: is there an advantage to making it an accessor?
MM: yes: the actual action is that it has a magic side effect
BE: that's an aesthetic preference
OH: the pre-accessor behavior of JSC was that every object instance had the magic property; from our point of view, being able to extract the accessor is in no way different from what you could already do
BE: acid test: `o.__proto__ = null` -- what happens?
OH: in old JSC, it remained magic
DH: comes down to distaste for another magic data property vs distaste for a portable, well-specified proto-updater function
YK: people will use it
BE: and maybe someone will come up with some zero day, I can't predict that
DH: it's strictly more risk for sake of a purely aesthetic reason, in an already aesthetically nasty space
BE: yes, should be a risk analysis, not an aesthetic analysis
YK: I agree
MM: I would like a more modular specification than poisoning the semantics of [[Get]] etc; but think of how expensive the magic of array length has been
BE: this is already shipping in a bunch of browsers, and in some of them it's already a magic property; the developer-facing cognitive load is a wash; developers just want it to work, they don't care whether it's an accessor
DH: I can predict the security bugs: the implementor just thinks about the normal case, but the attacker takes the accessor out, installs it on an object that inherits from a proxy to an object from another global etc. etc. and something internal breaks
MM: that's the most compelling argument I've heard. the additional testing surface area is much bigger
DH: Arv?
EA: I'm not gonna block this
OH: I think it would be nice to have a mechanism that let you specify getters/setters that weren't extractable
DH: actually proxies basically *are* such a mechanism
AWB: I don't like getters/setters that can't be reflected


Topic: Spec terminology

AWB: confusion about "host object"; I'd like to eliminate the concept of "host object" entirely and introduce new terminology:
- object: runtime entity w/ unique identity and exposes properties
- mundane object: object that uses default behaviors for chapter 8 internal methods
- exotic object: object that provides non-default behavior for at least one of the internal methods
DH: I would say that "plain" is better than "mundane"; also value objects, were they to succeed in Harmony, don't have identity
AWB: we can cross that bridge when we come to it
DH: anyway I like this; makes clearer where the extension points of the language are
AWB: note that proxies are exotic
MM: are array instances exotic?
AWB: yes
DH: is that a problem?
MM: no, just surprised me
AWB: I'd like functions and other "normal" things to be classified as mundane
DH: but arrays should still be exotic?
AWB: yes
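
(A sketch of the sense in which Array instances are exotic: defining an indexed property has the non-default side effect of updating .length.)

    var a = [];
    a[10] = "x";
    a.length;  // 11 -- the indexed write magically updated length
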
DC: Ecma members can be "ordinary"
AWB: ok, "ordinary" and "exotic"
AWB: another dimension of classification?
- standard object: defined by ES specification
- built-in object: provided by ES implementation but not defined in ES specification
- platform object: provided by host environment
<<mass confusion>>
AWB: I agree these distinctions are unclear
DH: I think the first dimension was great! it moves us in the direction of not *needing* the second dimension
AWB: I agree, and I will see if it's possible to eliminate the second dimension
AWB: now, a dimension for functions:
- function: an object that exposes the [[Call]] internal method
- ECMAScript function: function whose invocation result & side effects are provided by evaluating ECMAScript code
- alien function: function whose invocation result & side effects are provided in some manner other than by evaluating ECMAScript code
- standard function: function whose invocation result and side effects are defined by the ECMAScript specification (mostly ch 15)
DH: why do we need this? the implementation's job is to make it impossible to tell that a function is alien
AWB: setting up initial environment has to keep track of variable environment etc
BT: it's unobservable
DH: you can't tell!
LH: are you saying there's a flaw in the current ES5 spec?
AWB: when you activate a chapter 15 function, you don't go through any of the normal process
MM: Function.prototype.toString has requirements on functions that *must* be ECMAScript functions; so there's a distinction between "must-be" and "may-not-be"
AWB: but Chapter 15 functions need access to the current global's intrinsics (e.g. for "initial meaning of new Array" etc)
BE: you can get to that via [[Scope]]
DH: I would rather eliminate the alien function distinction, and the steps in Chapter 15 can be implemented in pure ES
AWB: I'll just have go through and see how many distinctions I can eliminate
MM: what are the "intrinsics"?
DH: things like where the spec says "as if via `new Array` for the initial value of Array"; loaders rationalize this, and each loader has its own set of intrinsics
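
(A browser-only sketch, not from the meeting, of why each global needs its own intrinsics; it uses an iframe to create a second global.)

    var frame = document.createElement("iframe");
    document.body.appendChild(frame);
    var OtherArray = frame.contentWindow.Array;   // the other global's intrinsic Array
    var a = new OtherArray(1, 2, 3);
    a instanceof Array;   // false: a different global's Array.prototype
    Array.isArray(a);     // true: checks the kind of object, not a particular global's Array
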
YK: "realm" is used in basic auth
DH: "realm" is a little thesaurus-y; I like "context" but "ExecutionContext" used in ES already, so "global context"
AWB: "context" doesn't feel big enough
MM: so [[Scope]] gets you to the context?
AWB: you can share global objects between loaders!
DH: oh right, so [[Scope]] is not it -- needs its own [[Context]]
AWB: top-level environment record gets you to it
BE: don't have two independent fields that are required to agree
DH: yeah, as long as we can get to the global context via the top-level environment record, we should just get to it via that
DH: popping the stack, I like "global context"
YK: I like "global context"
BE: "context" is almost meaningless
MM: "shire"... lulz
AWB: "island"?
DH: <<winces uncomfortably>>
DH: "home"?
AWB: not bad... already using for something else but I could rename it


Topic: do-expressions

DH: allows you to evaluate expressions with control flow or temporary variables:

a = do {
    let tmp = f();
    tmp * tmp + 1
};

workaround:

let tmp; // scope pollution!
a = (tmp = f(), tmp * tmp + 1);

workaround:

a = (function() {
    var tmp = f();
    return tmp * tmp + 1;
})()

YK: CoffeeScript gives you the latter
DH: the IIFE doesn't "say what you mean" -- boo boilerplate
YK: also, TCP (Tennent's correspondence principle) hazards
MM: strongest argument in favor is for code generators
DH: well, I think the developer convenience is an even stronger argument
EA: I want this most for code generators
MM: strongest or not, it's still compelling for code generators
DH: arguments against that I've heard:
- eliminates the current structure of JS syntax where statements are always at the outermost level of a function body
- exposes the completion value of statements, which is only otherwise observable via eval
YK: it's pretty clear what the result is
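
(A sketch using the proposed syntax; the example is invented. The do-expression's value is the completion value of the statement it contains, so control flow works in expression position.)

    let score = 72;
    let category = do {
        if (score > 90) { "high" }
        else if (score > 50) { "medium" }
        else { "low" }
    };
    // category === "medium"
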
DC: the "do" syntax will be confusing to those familiar with do-while
YK: you can think of it as do-while-false
DH: I think it reads really naturally as English: "do this" means do it once, "do this while blah" means do it some number of times
DH: pulse of the room?
<<mostly favorable>>
LH: my standard concern: the overall cost of so much new syntax; I would certainly use this, and I like it, but all these conveniences add up in cost
DC: I don't think we ever consider the overall design
DH: sick of that accusation; we constantly consider the overall design
BE: there is an important point about the cost of syntax, and we need to take account of all of the syntax we've proposed
AWB: worth making everyone come up with a priority list?
BE: that's a little "fighty", but we will need to take stock and make some cuts at some point
DH: I imagine I'd cut do-expressions before anything else
EA: I want them more than comprehensions
DH: yeah, I've championed comprehensions for so long I hadn't really reconsidered; I love them but I could see how you might prioritize do-expressions
DH: I want to make a couple points about the big picture:
- there is a meme that TC39 doesn't understand complexity budgets and is out of control
- please do not spread this meme; it's false
- one thing the recent controversy, including JSFixed, has shown me is that TC39 needs to improve our communication
- there's a lot of misinformation, and a lot of unhappy people, once they start talking with us, turn out to be actually pretty happy with the direction ES6 is going in
- I will be redoing the entire wiki to focus on community first, committee second, and to make the big picture clearer
- but we all could think about ways to increase our communication with the community
AWB: in some ways the anger is actually a consequence of being open
DH: yes, but we can still improve our communication
BE: we still need to think about whether and what we should scale back
AR: people are resistant to change; once features land they'll use them
DH: there's a tension between needing to grow a language incrementally and a standards process that takes many years between revisions; either we underserve users by simply not growing enough, or we freak them out by landing many things at once
YK: roll-out of individual features in browsers can help, even preffed off
DH: just want to say that we *do* consider complexity budget, we *have* cut a ton of things, and the ES6 feature set has really strong coherence
AR: go team!

----------------------------------------------------------------------


