Weak references and destructors
Mark S. Miller
erights at google.com
Sat Dec 12 14:14:50 PST 2009
On Fri, Dec 11, 2009 at 12:45 AM, Erik Corry <erik.corry at gmail.com> wrote:
> 2009/12/11 Mark S. Miller <erights at google.com>:
> > [...] However, I
> > agree that these proposals should be decoupled if possible. Accordingly,
> > have kludged [...]
> I really dislike this definition. This would imply that anyone could
> overwrite setTimeout and get a completely different behaviour. If
> overwriting is impossible then it introduces setTimeout into the
> standard by the backdoor.
> I'd prefer an underspecified [[QueueForProcessing]] operation with no
> connection to the global object and a note to say that in a browser it
> would be expected to use the same mechanism as a setTimeout with a
> timeout of zero.
I agree. Done. To be consistent with the spec style on the rest of that
page -- perhaps a bad idea -- I called your [[QueueForProcessing]] operation
POSTPONE. This is a minor issue and I'm not attached to the choice. In any
case, the most relevant new text is at <
Thanks for the suggestion.
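A rough sketch of the intent here, with `pendingTasks`, `postpone`, and `drainPendingTasks` all hypothetical names: the operation is a host-internal queue that scripts cannot reach through the global object, which the host drains on a later event-loop turn, much as a `setTimeout` with a timeout of zero would.

```javascript
// Hypothetical host-internal machinery; nothing here is exposed to
// user scripts, so nothing can be overwritten the way a global
// setTimeout could be.
const pendingTasks = [];

function postpone(cb) {
  // The spec-level POSTPONE / [[QueueForProcessing]] operation:
  // just remember the callback for a later turn.
  pendingTasks.push(cb);
}

function drainPendingTasks() {
  // Run by the host at some later point in the event loop,
  // comparable to firing a setTimeout(cb, 0) task.
  while (pendingTasks.length > 0) {
    const cb = pendingTasks.shift();
    cb();
  }
}
```

In a browser the drain step would simply be folded into the existing task queue; the point of leaving it underspecified is that no observable `setTimeout` binding is required.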
> There are lots of misunderstandings around GC, where people expect
> this sort of callback to happen at some predictable time. If there's
> no memory pressure then there's no reason to expect the GC to ever be
> run even if the program runs for ever. It would be nice to have some
> indication in the text of the standard that discouraged people from
> expecting a callback at some predictable time. For example if people
> want to close file descriptors or collect other resources that are not
> memory using this mechanism it would be nice to discourage them
> (because it won't work on a machine with lots of memory and not so
> many max open fds).
Does the current text clarify this to your satisfaction?
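For illustration, a sketch using WeakRef and FinalizationRegistry, the standardized descendants of this proposal (they postdate this 2009 thread; `openFile` and its fields are made up for the example). The caveat discussed above survives in the eventual spec: the cleanup callback may run arbitrarily late, or never, so it cannot replace an explicit close of a file descriptor.

```javascript
// Last-resort backstop only: with no memory pressure, a conforming
// implementation may never invoke this callback at all.
const registry = new FinalizationRegistry((fd) => {
  console.log(`late cleanup of leaked fd ${fd}`);
});

// Hypothetical resource wrapper for the example.
function openFile(name) {
  const handle = { name, fd: 3, close() { /* release the fd eagerly */ } };
  registry.register(handle, handle.fd);
  return handle;
}

const h = openFile("log.txt");
const ref = new WeakRef(h);
// While a strong reference exists, deref() reliably returns the object...
console.assert(ref.deref() === h);
// ...but once references are dropped, collection (and hence the
// callback) has no deadline. Correct programs close eagerly:
h.close();
```

This is exactly the failure mode raised above: a machine with plenty of memory but few file descriptors can exhaust its fds long before the collector ever runs.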