graydon at mozilla.com
Mon Jun 26 10:49:40 PDT 2006
John Cowan wrote:
> I believe that the facilities of PEP 342, while necessary, is
> insufficient, as it does not allow subroutines invoked by a coroutine
> to yield for it, where some of the subroutines on the dynamic chain are
> coroutine-blind (or if it does, it's too subtle for me to see how).
I've stared at PEP 342 for an hour now and cannot exactly tell.
It clearly points out this problem in the second paragraph of its
"Motivation" section:

    Also, generators cannot yield control while other functions are
    executing, unless those functions are themselves expressed as
    generators, and the outer generator is written to yield in response
    to values yielded by the inner generator.
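To make that limitation concrete, here's a toy illustration (my own
sketch, not from the PEP): the inner call is an ordinary function, so
nothing inside it can suspend the generator that called it:

```python
def fetch():
    # An ordinary function on the dynamic chain: it has no way to
    # yield, so it cannot suspend the generator that called it. If
    # this blocked on i/o, the whole thread would block with it.
    return "data"

def handler():
    buf = fetch()   # a plain call; no suspension can happen in here
    yield buf       # only the generator's own frame may yield

print(list(handler()))   # -> ['data']
```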
I *think* the proposed solution is in the third paragraph:

    a simple co-routine scheduler or "trampoline function" would
    let coroutines "call" each other without blocking
But I'm having a hard time picturing the meaning of that, and how it
addresses the problem. I think it means that the problem is not going to
be addressed directly, but indirectly. Let's work through an example,
say a network server:
    s = socket.accept()
    req = http_read_requests(s)
    f = filesystem.load_file(req.filename)
    buf = s.readline()
Suppose we want this to yield any time it does something that might
block on i/o, so inside the OS-level accept, read, load, and write
methods. How does PEP 342 recommend we rewrite this?
I *think* it says that you must still structure all the functions
containing generators *as* generators, but that the yields you sprinkle
all over the intermediate calls can have yield-expression results fed
back into them by an outer "trampoline" function. So I think it says we
rewrite as such:
    s = yield socket.accept()
    req = yield http_read_requests(s)
    f = yield filesystem.load_file(req.filename)
    buf = yield s.readline()
Or something; I surely am getting the notation they have in mind wrong.
But I think the idea is that there's to be an outer function that does
something like this:
    x = http_service_loop()
    y = x.send(None)
    y = x.send(y)
stepping the coroutine through its work by acting as a sort of auxiliary
return slot. And this would let you -- with some more code -- similarly
multiplex N service loops together, keeping track of the next value to
feed back into each as it's re-scheduled (putting aside the issue of a
call to sleep-until-one-of-these-io-channels-has-an-event).
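For what it's worth, here's a minimal sketch of what I imagine such a
trampoline looking like (the names and conventions are mine, not the
PEP's): each task is a stack of generator frames; yielding a generator
object means "call it", yielding anything else means "suspend", and the
scheduler round-robins between tasks, feeding the pending value back in
on each re-schedule:

```python
import types
from collections import deque

def run(*coros):
    # Round-robin trampoline. Each task is a stack of generator frames
    # paired with the value to send into the topmost frame next turn.
    tasks = deque(([c], None) for c in coros)
    while tasks:
        stack, value = tasks.popleft()
        try:
            y = stack[-1].send(value)
        except StopIteration:
            stack.pop()                 # frame finished: return to caller
            if stack:
                tasks.append((stack, None))
            continue
        if isinstance(y, types.GeneratorType):
            stack.append(y)             # treat a yielded generator as a call
            tasks.append((stack, None))
        else:
            tasks.append((stack, y))    # stand-in for i/o: echo the value back

def inner(name):
    got = yield name + ": waiting"   # suspends the whole task, not one frame
    # 'got' is whatever the scheduler fed back in on re-schedule

def outer(name, log):
    yield inner(name)                # "call" inner without blocking other tasks
    log.append(name + " done")

log = []
run(outer("a", log), outer("b", log))
print(log)   # -> ['a done', 'b done'], the two tasks having interleaved
```

Note that inner still has to be written as a generator; that's exactly
the property I'm complaining about below.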
If this is what PEP 342 is proposing, then I must admit the lua strategy
seems much more appealing: make any "yield" expression return control to
the nearest dynamic "resume". That would let the low-level i/o functions
know about yield points, and all the logic in between the scheduler and
the i/o functions ignore them.
So, follow-on question: what's *wrong* with the lua strategy? Moreover,
why did the python strategy turn out this way? Did the python group just
not understand the better strategy? Were they concerned about the
restriction of being unable to yield through C stack frames? That seems
unlikely since the same restriction probably applies to PEP 342 yields.
Maybe they were bound by semi-compatibility with the existing (and even
weaker) iterator/generator scheme in earlier python versions?
More information about the Es4-discuss mailing list