Two interoperable implementations rule
Allen.Wirfs-Brock at microsoft.com
Fri Jul 11 17:03:22 PDT 2008
A few thoughts on the general topic and the various points that have been raised:
Overall, I think this is a good idea. My personal opinion is that standardization should follow proven utility, not the other way around. However, it's difficult to get meaningful use of proposed web standards until there is ubiquitous implementation. If we, as a community, can find a way to meaningfully work together to advance web technology, it will be a very good thing.
Realistically, I think it has to be real browser-based implementations. However, Maciej's "at least one browser implementation" suggestion may be good enough. My perception is that we have far more unresolved "will it break the web" arguments than we do arguments about the actual utility of features. Let's just demonstrate it on the web, one way or another. BTW, I think this puts us (Microsoft) at a disadvantage because we have self-imposed restrictions that currently make it much harder for us to publicly demonstrate (or even discuss) browser changes or enhancements than it would be for any sort of standalone implementation we did. We'll have to learn how to deal with it.
Conversely, I don't think a "reference implementation" really makes the cut, even if it is hosted in a browser. However, there isn't necessarily a sharp line between a reference implementation and a simplistic or naïve "production" implementation, so maybe the browser-hosted requirement is as close as we can come to pinning that down.
I'm ambivalent on the single feature or entire specification question. For a spec. on the order of what is being proposed as ES3.1, I don't think an entire-spec. requirement would be unreasonably burdensome. For more complex feature sets I'm less sure. A related question is what it takes for a feature to even get into a proposed specification. You can't require complete implementation of a spec. that is not yet complete. Is a public, browser-based implementation a prerequisite for even getting to the state of a feature proposal? I could argue both sides of that one.
Feature interactions are often a source of unanticipated problems. That argues for testing an entire specification.
Is there a time limit within which implementations must occur? How do we actually reach the point of "shipping" a revised specification?
An implementation without some way to validate it seems of limited use. Should we also expect those who propose features to provide a validation suite?
How fine a granularity do we push this to? Some of the sorts of clarifying changes to existing specification language, or changes to specification models needed to accommodate new features, may not even be directly testable if their intent is no net change to the existing feature set.
Don't take this as a "vote" yet, as we certainly need to have some internal discussion on the topic. However, I don't see how such a requirement would be in any way inconsistent with how Microsoft currently thinks about the process of web evolution.