dherman at mozilla.com
Wed Jul 6 14:26:51 PDT 2011
>>> - it is biased towards evaluation, which is a hindrance for other
>>> uses (such as faithful unparsing, for program transformations);
>> It's just a reflection of the built-in SpiderMonkey parser, which was
>> designed for the sole purpose of evaluation. I didn't reimplement a
>> new parser.
> Right. But is that what we'd want from a standard Parser API?
I mentioned SpiderMonkey's Reflect.parse on the wiki page, but I haven't actually proposed it as a standard; it's just currently there as a tool to reflect what SpiderMonkey does.
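(For reference, Reflect.parse returns a Parser API AST. The sketch below hand-builds the approximate node that Reflect.parse("1 + 2") would return, since Reflect.parse exists only in SpiderMonkey; real output also carries `loc` source-location info. Note what is absent: comments, whitespace, and token-level detail, which is the "biased towards evaluation" point raised above.)

```javascript
// Approximate AST for Reflect.parse("1 + 2"), built by hand here
// because Reflect.parse is SpiderMonkey-only. Location info omitted.
const ast = {
  type: "Program",
  body: [{
    type: "ExpressionStatement",
    expression: {
      type: "BinaryExpression",
      operator: "+",
      left:  { type: "Literal", value: 1 },
      right: { type: "Literal", value: 2 }
    }
  }]
};

// The tree records structure and values, but nothing about the
// original source text's layout or comments.
console.log(ast.body[0].expression.type); // "BinaryExpression"
```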
> The question is: does one augment existing parsers, to enable tool
> building on top, or does one let every tool builder write and maintain
> their own parser?
False dichotomy. If the only way to share code were for TC39 to put it into the standard library, JS would have died a long, long time ago. :)
> Thanks, that is a start. Actually, it will be sufficient for some
> uses. Unfortunately, experience tells me it won't be sufficient for
> user-level program transformations.
Sorry to hear that. I really would suggest you experiment with a pure JS parser for your needs. You could even start with one of the existing ones (there are several) to help you get off the ground.
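(A pure JS parser along those lines can start very small. The sketch below is a hypothetical hand-written parser for a single grammar rule, additive expressions over integer literals, emitting Parser-API-style nodes; a real tool would of course cover the full grammar, and all names here are illustrative.)

```javascript
// Minimal sketch of a hand-written recursive-descent parser for
// additive expressions (e.g. "1 + 2 - 3"), producing nodes shaped
// like the Parser API's BinaryExpression and Literal.
function parseAdditive(src) {
  let pos = 0;
  function skipSpace() { while (src[pos] === " ") pos++; }
  function number() {
    skipSpace();
    const start = pos;
    while (/[0-9]/.test(src[pos])) pos++;
    if (pos === start) throw new SyntaxError("expected number at " + start);
    return { type: "Literal", value: Number(src.slice(start, pos)) };
  }
  // Left-associative chain: ((1 + 2) - 3)
  let node = number();
  skipSpace();
  while (src[pos] === "+" || src[pos] === "-") {
    const operator = src[pos++];
    node = { type: "BinaryExpression", operator, left: node, right: number() };
    skipSpace();
  }
  return node;
}

const ast = parseAdditive("1 + 2 - 3");
console.log(ast.operator); // "-"
```

Because you own the tokenizer, extending it to retain comments and exact source positions (the information a faithful unparser needs) is straightforward in a way that it isn't with an engine's internal parser.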
> Though it might not actually cost much to support the additional
> info in SpiderMonkey: most of it could be in the token stream,
> which is usually thrown away, but could be kept via a flag, and
> the AST's source locations can be used to extract segments of
> the token stream (such as any comments preceding a location).
This is a fantasy, I'm afraid. The parser is big, complex, and heavily optimized.
More information about the es-discuss mailing list