jonas at sicking.cc
Tue May 14 19:08:34 PDT 2013
Actually, mutation observers have some special behavior that lasts
only until the microtask queue is empty. If you start
observing the mutations that happen in a particular Node subtree
rooted in a node A, you will be told about all mutations that happen
in the nodes that were descendants of A until all end-of-microtask
notifications have fired. So even if a node is removed from A and then
modified, the observer is notified about those mutations as long as
they happen before all end-of-microtask observers have fired.
At least that's how I think it works. You'd have to check the
spec for the details.
Possibly this is something that can be changed though.
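To make the batching concrete, here's a rough stand-in for that delivery timing that needs no DOM: records accumulate synchronously, and the callback runs as a microtask once the current script (the current task) has finished. The `record` helper and the record names are illustrative only, not the real MutationObserver API.

```javascript
// Stand-in for MutationObserver delivery timing (no DOM needed):
// records pile up synchronously; delivery happens as a microtask,
// i.e. at the end of the current task, like an observer's
// microtask checkpoint.
const records = [];
let delivered = null;

function record(type) {
  if (records.length === 0) {
    // First record in this task: schedule one delivery at the
    // end-of-task microtask checkpoint.
    queueMicrotask(() => {
      delivered = records.slice();
      records.length = 0;
    });
  }
  records.push(type);
}

record("childList");   // e.g. a node is removed from observed subtree A
record("attributes");  // e.g. the removed node is then modified

// Both records arrive together in a single delivery, because both
// mutations happened before any microtask had a chance to run:
queueMicrotask(() => console.log(delivered)); // → ["childList", "attributes"]
```

This is why mutations on an already-removed node can still be observed: the removal and the later modification both land in the same batch, ahead of the checkpoint.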
On Tue, May 14, 2013 at 5:59 PM, Mark Miller <erights at gmail.com> wrote:
> AFAICT, the microtask queue is just another output queue, and the strict
> priority of the microtask queue over other queues is just a policy choice of
> which outgoing queue to service next. The input queue model could not
> guarantee strict priority without creating a two level queue. The outgoing
> queue model keeps this separate with no loss of generality. Cool.
> On Tue, May 14, 2013 at 5:54 PM, Mark S. Miller <erights at google.com> wrote:
>> ---------- Forwarded message ----------
>> From: Mark S. Miller <erights at google.com>
>> Date: Tue, May 14, 2013 at 4:54 PM
>> Subject: Re: Future feedback
>> To: Boris Zbarsky <bzbarsky at mit.edu>
>> Cc: David Bruant <bruant.d at gmail.com>, Sean Hogan
>> <shogun70 at westnet.com.au>, Jonas Sicking <jonas at sicking.cc>,
>> "public-script-coord at w3.org" <public-script-coord at w3.org>
>> I see. I was thinking primarily about incoming queues whereas this
>> formulates the issue primarily in terms of outgoing queues. Rather than have
>> a non-deterministic interleaving of events into the incoming queue, which
>> then services them later, this just moves the non-deterministic choice as
>> late as possible, at the point when the next turn is ready to start. This
>> effectively removes the notion of an incoming queue from the model.
>> Curiously, this is how Ken
>> and NodeKen <http://research.google.com/pubs/pub40673.html> treat the
>> persistent storage of distributed messages. The incoming queues are
>> ephemeral, outgoing messages are not dropped until receipt has been
>> acknowledged, and messages are not acknowledged until processed by a turn
>> that has been checkpointed. On restart a different interleaving may be
>> chosen, which the "incoming queue" model would have a harder time accounting
>> for. I like it. AFAICT, this is a better way to specify communicating event
>> loops in all ways. Thanks!
>> On Tue, May 14, 2013 at 7:03 AM, Boris Zbarsky <bzbarsky at mit.edu> wrote:
>>> On 5/14/13 9:04 AM, David Bruant wrote:
>>> I should note that the description of the browser event loop in that
>>> message is wrong. It does not have only two FIFO queues in the specs, or in
>>> implementations. In particular, see task sources.
>>> I would be strongly opposed to specifying something that requires only
>>> two FIFO queues.
>> es-discuss mailing list
>> es-discuss at mozilla.org
> Text by me above is hereby placed in the public domain
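For what it's worth, the strict priority Mark describes is easy to observe in any modern engine: every pending microtask, including ones enqueued while the queue is draining, runs before the event loop services the next task. A minimal sketch, using `queueMicrotask` and `setTimeout` as stand-ins for the microtask queue and an ordinary task queue:

```javascript
// Strict microtask priority: the microtask queue drains completely
// (even entries added mid-drain) before the next task is taken.
const order = [];

setTimeout(() => order.push("task"), 0);          // ordinary task queue
queueMicrotask(() => {
  order.push("micro-1");
  queueMicrotask(() => order.push("micro-2"));    // enqueued while draining
});

setTimeout(() => console.log(order), 0); // → ["micro-1", "micro-2", "task"]
```

Note that "micro-2" still beats "task" even though it was scheduled after the timeout: priority is a property of which queue is serviced, not of scheduling order.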