New I/O monitoring framework

Robert O'Callahan robert at
Tue Jan 13 21:04:32 PST 2015

Previously, we echoed output to stdout/stderr during replay with dedicated
hooks in the replay-syscall code for 'write'/'writev'. We implemented -M
annotations by hooking 'write'/'writev' in a similar way. This had a couple
of problems:
-- During recording with syscallbuf enabled, we didn't annotate buffered
writes.
-- During replay, we'd echo output to fds 1 and 2 from every process, even
when those fds had been redirected to /dev/null or wherever. Incredibly
spammy when, for example, you replay "./mach mochitest-plain".
We also have RR_MAGIC_SAVE_FD, which uses similar hooks.

I solved those problems by creating a framework to monitor writes to
selected files. Each Task gets a (possibly shared) FdTable (mirroring the
kernel's shareable fd tables). Each FdTable contains FileMonitor entries
for a subset of the task's open fds. Multiple fds can share the same
FileMonitor. FileMonitor receives will_write and did_write callbacks during
both recording and replay. Syscallbuf is disabled for fds with a
FileMonitor. (There is an array in the preload library, maintained by rr,
tracking which fds have FileMonitors.) I created StdioMonitor to track
output to stdout/stderr, and MagicSaveDataMonitor to implement
RR_MAGIC_SAVE_FD. This got rid of a lot of ad-hoc code as well as fixing
the above bugs.
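To make the shape of this concrete, here's a minimal sketch of how such a
framework could fit together. The class names (FileMonitor, FdTable,
StdioMonitor) mirror the ones described above, but the signatures and
dispatch logic are illustrative assumptions, not rr's actual API:

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <memory>
#include <string>

// Base class for monitoring writes to selected fds. Subclasses get
// will_write/did_write callbacks during both recording and replay.
// (Hypothetical sketch; rr's real interface may differ.)
class FileMonitor {
public:
  virtual ~FileMonitor() = default;
  virtual void will_write(int fd, const char* buf, size_t len) {}
  virtual void did_write(int fd, const char* buf, size_t len) {}
};

// Example monitor for stdout/stderr; here it just captures the output.
class StdioMonitor : public FileMonitor {
public:
  std::string echoed;
  void did_write(int fd, const char* buf, size_t len) override {
    echoed.append(buf, len);
  }
};

// Per-Task (possibly shared) table mapping a subset of open fds to
// monitors, mirroring the kernel's shareable fd tables. Multiple fds
// may share one FileMonitor.
class FdTable {
  std::map<int, std::shared_ptr<FileMonitor>> monitors_;
public:
  void add_monitor(int fd, std::shared_ptr<FileMonitor> m) {
    monitors_[fd] = std::move(m);
  }
  // Used e.g. to decide whether syscallbuf must be disabled for an fd.
  bool is_monitored(int fd) const { return monitors_.count(fd) != 0; }
  // Dispatch hooks around an intercepted write syscall.
  void on_write(int fd, const char* buf, size_t len) {
    auto it = monitors_.find(fd);
    if (it == monitors_.end()) return;
    it->second->will_write(fd, buf, len);
    it->second->did_write(fd, buf, len);
  }
};
```

Sharing one StdioMonitor instance between fds 1 and 2 is what lets replay
echo only output that actually went to the original stdout/stderr, rather
than blindly echoing every process's fds 1 and 2.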

At some point we're going to need to monitor I/O on other file descriptors,
e.g. to track X rendering. Hopefully that can build on FileMonitor.

I tell you that anyone who is angry with a brother or sister will be
subject to judgment. Again, anyone who says to a brother or sister,
‘Raca,’ is answerable to the court. And anyone who says, ‘You fool!’
will be in danger of the fire of hell.

More information about the rr-dev mailing list