bruant.d at gmail.com
Thu Aug 8 06:55:56 PDT 2013
On 08/08/2013 15:38, Domenic Denicola wrote:
>> From what I understand, setTimeout 0 serves that use case and there is no reason for setImmediate to be better at this job.
> This is not true, as can be seen from http://domenic.me/setImmediate-shim-demo/. The clamping inside nested callbacks prevents setImmediate from being as good at this job as postMessage or MessageChannel, so as long as there is still clamping on those (which from what I understand is back-compat-constrained) setImmediate is necessary.
The minimum delay is a mitigation mechanism implemented by browsers to
avoid burning the CPU and leaving the page nearly unresponsive when a
page has something equivalent to:
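The code snippet appears to have been lost in the archive; the pathological pattern being described is presumably something like the following (an illustrative reconstruction, not the original code):

```javascript
// Illustrative reconstruction: a page that reschedules itself with a
// zero-delay timeout on every tick. Without the minimum-delay clamping,
// this would monopolize the event loop and burn CPU. The counter bound
// is only here so the sketch terminates; the pathological page has no
// such bound.
let ticks = 0;
function spin() {
  ticks += 1;
  if (ticks < 100) {
    setTimeout(spin, 0);
  }
}
spin();
```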
I would love to know why this sort of bug cannot happen with
setImmediate and why browsers won't be eventually forced to implement
the exact same mitigation, making setImmediate(f) effectively an
equivalent of setTimeout(f, 0).
Adam Barth made an equivalent argument in the Chromium thread.
> Microtasks are not sufficient for this purpose because they do not yield to the UI. If you ran the above demo with microtasks, the screen would update all at once, instead of smoothly showing the sorting progress.
>> I'm having a hard time understanding "before the browser renders again". I'm afraid this is asking for laggy UIs if there is the least bug.
> This attitude forces us to maintain user-space trampolines inside microtasks. "Before the browser renders again" microtasks are no more dangerous than `while` loops, which are indeed what user-space trampolines are forced to defer to. There will always be people who abuse `while` loops; those same people will abuse microtasks. Trying to protect us from ourselves by not giving (easy! non-MutationObserver!) microtasks is a poor strategy.
This is not a "Trying to protect us from ourselves" situation. This is a
"browser trying to protect users from any sort of abuse" situation. For
while loops, they implemented the "script takes too long" dialog. For
mistakenly infinitely nested too-short setTimeouts, they implemented the
4ms minimum delay.
If browsers can't have mitigation strategies when features are abused,
we will run into the same situations as before.
As a JS dev, I want the same features as you. Now, how do browsers
make sure this doesn't drain users' batteries in case of misuse? (I
don't have an answer yet)
>> Kyle Simpson wrote:
>>> Promises implementations necessarily have to insert a defer/delay
>>> between each step of a sequence, even if all the steps of that
>>> sequence are already fulfilled and would otherwise, if not wrapped in
>>> promises, execute synchronously. The async "delay" between each step
>>> is necessary to create a predictable execution order between sync and
>>> async usage.
>> An implementation can keep track of the order in which messages arrived in
>> the queue and process them in order. No need to impose a delay, no?
> Yes, this is what I mean by maintaining a user-space trampoline. It is a *not*-insignificant amount of code to do correctly, especially with correct error semantics (i.e., disallowing throwing tasks from interfering with future tasks, and re-throwing their errors in order in such a way that they reach window.onerror). It would be much easier if the browser maintained this queue for us
That's what I suggested ("the implementation keeps track..."), isn't it?
Do we disagree?
> and we could simply do `window.asap(myTask); window.asap(anotherTask);`. Here "`window.asap`" is a hypothetical pure microtask queue-er, distinct from `setImmediate`'s macro-task queueing (and presumably without all the `eval`-if-string and arguments-passing stuff).
I agree and I want "window.asap" asap. But I have the same question
about misuse and battery. We need to tell implementors how to mitigate
misuse. Otherwise, they'll just fall back to clamping, as they did with
setTimeout.
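For what it's worth, the user-space trampoline under discussion can be sketched in a few lines. The name `asap` and this implementation are hypothetical, not a real browser API; a real shim would schedule the drain asynchronously (e.g. via MutationObserver or postMessage), whereas this sketch drains synchronously so it is self-contained:

```javascript
// Hypothetical user-space "asap" trampoline: tasks run in FIFO order from
// a single queue, and a throwing task does not prevent later tasks from
// running.
const queue = [];
let draining = false;

function asap(task) {
  queue.push(task);
  if (!draining) {
    draining = true;
    try {
      drain();
    } finally {
      draining = false;
    }
  }
}

function drain() {
  while (queue.length > 0) {
    const task = queue.shift();
    try {
      task();
    } catch (e) {
      // Re-throw asynchronously so the error surfaces (reaching
      // window.onerror in a browser) without killing the queue.
      setTimeout(() => { throw e; }, 0);
    }
  }
}
```

Tasks queued from within a running task are appended to the same queue and run in the same drain, after the current task completes, which preserves overall FIFO ordering.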