<div dir="ltr"><div>I know we've had code that, instead of reading a pref directly, checks the pref once in an init() and uses pref observers to watch for any changes to it. (i.e., basically mirrors the pref into some module-local variable, at which point you can roll your own locking or whatever to make it threadsafe). Is that a pattern that would work here, if people really want OMT access but we're not ready to bake support for that into the pref service? [Perhaps with some simple helper glue / boilerplate to make it easier.]</div><div><br></div><div>Justin<br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Jul 19, 2018 at 2:19 PM, Kris Maglione <span dir="ltr"><<a href="mailto:kmaglione@mozilla.com" target="_blank">kmaglione@mozilla.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Tue, Jul 17, 2018 at 03:49:41PM -0700, Jeff Gilbert wrote:<br>
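A rough sketch of that mirror-and-observe pattern, for concreteness. The pref-service calls here (GetBoolPref, AddPrefObserver, SetBoolPref) are simplified stand-ins for the real Preferences/nsIPrefBranch API, not its actual signatures; the part that matters is the std::atomic mirror, which makes reads safe from any thread while all writes stay on the main thread:

```cpp
#include <atomic>
#include <functional>
#include <map>
#include <string>

// --- hypothetical stand-ins for the pref service (main thread only) ---
static std::map<std::string, bool> gPrefs = {{"dom.example.enabled", true}};
static std::map<std::string, std::function<void()>> gObservers;

bool GetBoolPref(const std::string& aName) { return gPrefs[aName]; }
void AddPrefObserver(const std::string& aName, std::function<void()> aCb) {
  gObservers[aName] = std::move(aCb);
}
void SetBoolPref(const std::string& aName, bool aValue) {
  gPrefs[aName] = aValue;
  if (auto it = gObservers.find(aName); it != gObservers.end()) {
    it->second();  // notify, still on the main thread
  }
}

// --- the mirror: read once in init(), kept fresh by the observer ---
static std::atomic<bool> sExampleEnabled{false};

void InitExampleModule() {
  sExampleEnabled.store(GetBoolPref("dom.example.enabled"),
                        std::memory_order_relaxed);
  AddPrefObserver("dom.example.enabled", [] {
    // Runs on the main thread when the pref changes; the atomic store
    // publishes the new value to readers on any thread.
    sExampleEnabled.store(GetBoolPref("dom.example.enabled"),
                          std::memory_order_relaxed);
  });
}

bool ExampleEnabled() {  // safe to call from any thread
  return sExampleEnabled.load(std::memory_order_relaxed);
}
```

The helper glue mentioned above would mostly be boilerplate around the observer registration and the atomic (or locked) mirror variable.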
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
We should totally be able to afford the very low cost of a<br>
rarely-contended lock. What's going on that causes uncached pref reads<br>
to show up so hot in profiles? Do we have a list of problematic pref<br>
keys?<br>
</blockquote>
<br></span>
So, at the moment, we read about 10,000 preferences at startup in debug builds. That number is probably slightly lower in non-debug builds, but we don't collect stats there. We're working on reducing that number (which is why we collect statistics in the first place), but for now, it's still quite high.<br>
<br>
<br>
As for the cost of locks... On my machine, in a tight loop, the cost of entering and exiting a MutexAutoLock is about 37ns. This is pretty close to ideal circumstances: a single core of a very fast CPU, with very fast RAM, everything cached, and no contention. If we could extrapolate that to normal usage, it would be about a third of a millisecond of additional overhead for startup. I've fought hard enough for 1ms startup time improvements, but *shrug*, if it were that simple, it might be acceptable.<br>
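For anyone who wants to reproduce that measurement, a sketch of the microbenchmark, with std::lock_guard standing in for MutexAutoLock (absolute numbers will of course vary by machine):

```cpp
#include <chrono>
#include <mutex>

// Time a tight loop that enters and exits an uncontended lock once per
// iteration, and return the average cost in nanoseconds per cycle.
double NsPerLockCycle(int aIters) {
  std::mutex m;
  volatile int sink = 0;  // keep the loop body from being optimized away
  auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < aIters; ++i) {
    std::lock_guard<std::mutex> lock(m);  // one enter/exit per iteration
    sink = sink + 1;
  }
  auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                std::chrono::steady_clock::now() - start)
                .count();
  return static_cast<double>(ns) / aIters;
}
// At ~37ns per cycle, 10,000 startup pref reads would add roughly
// 37ns * 10,000 = 370,000ns, i.e. about a third of a millisecond.
```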
<br>
But I have no reason to think the lock would be rarely contended. We read preferences *a lot*, and if we allowed access from background threads, I have no doubt that we would start reading them a lot from background threads in addition to reading them a lot from the main thread.<br>
<br>
And that would mean, in addition to lock contention, cache contention and potentially even NUMA issues. Those last two apply to atomic var caches too, but at least they generally apply only to the specific var caches being accessed off-thread, rather than pref look-ups in general.<br>
<br>
<br>
Maybe we could get away with it at first, as long as off-thread usage remains low. But long term, I think it would be a performance foot-gun. And, paradoxically, the less foot-gunny it is, the less useful it probably is, too. If we're only using it off-thread in a few places, and don't have to worry about contention, why are we bothering with locking and off-thread access in the first place?<div class="HOEnZb"><div class="h5"><br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On Tue, Jul 17, 2018 at 8:57 AM, Kris Maglione <<a href="mailto:kmaglione@mozilla.com" target="_blank">kmaglione@mozilla.com</a>> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On Tue, Jul 17, 2018 at 02:06:48PM +0100, Jonathan Kew wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
On 13/07/2018 21:37, Kris Maglione wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
tl;dr: A major change to the architecture of the preference service has just<br>
landed, so please be on the lookout for regressions.<br>
<br>
We've been working for the last few weeks on rearchitecting the<br>
preference service to work better in our current and future multi-process<br>
configurations, and those changes have just landed in bug 1471025.<br>
</blockquote>
<br>
<br>
Looks like a great step forward!<br>
<br>
While we're thinking about the prefs service, is there any possibility we<br>
could enable off-main-thread access to preferences?<br>
</blockquote>
<br>
<br>
I think the chances of that are pretty close to 0, but I'll defer to Nick.<br>
<br>
We definitely can't afford the locking overhead—preference look-ups already<br>
show up in profiles without it. And even the current limited exception that<br>
we grant Stylo while it has the main thread blocked causes problems (bug<br>
1474789), since it makes it impossible to update statistics for those reads,<br>
or switch to Robin Hood hashing (which would make our hash tables much<br>
smaller and more efficient, but requires read operations to be able to move<br>
entries).<br>
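To illustrate why Robin Hood hashing conflicts with unsynchronized readers, here is a bare-bones sketch (open addressing, linear probing, no resizing; not Gecko's actual hashtable code): an insert evicts any resident entry that sits closer to its ideal bucket and pushes it further down the probe chain, so entries move around underneath any concurrent reader.

```cpp
#include <functional>
#include <optional>
#include <string>
#include <utility>
#include <vector>

struct RHTable {
  struct Slot {
    std::string key;
    int value = 0;
    size_t dist = 0;     // distance from the key's ideal bucket
    bool used = false;
  };
  std::vector<Slot> slots{16};

  size_t Bucket(const std::string& aKey) const {
    return std::hash<std::string>{}(aKey) % slots.size();
  }

  void Insert(std::string aKey, int aValue) {
    size_t i = Bucket(aKey);
    size_t dist = 0;
    while (true) {
      Slot& s = slots[i];
      if (!s.used) {
        s = {std::move(aKey), aValue, dist, true};
        return;
      }
      if (s.key == aKey) {
        s.value = aValue;
        return;
      }
      if (s.dist < dist) {  // "rich" resident: evict it and keep probing
        std::swap(s.key, aKey);
        std::swap(s.value, aValue);
        std::swap(s.dist, dist);
      }
      i = (i + 1) % slots.size();
      ++dist;
    }
  }

  std::optional<int> Get(const std::string& aKey) const {
    size_t i = Bucket(aKey);
    size_t dist = 0;
    // Probing can stop early: no entry ever sits further from its ideal
    // bucket than we have probed, which is what keeps lookups cheap.
    while (slots[i].used && slots[i].dist >= dist) {
      if (slots[i].key == aKey) return slots[i].value;
      i = (i + 1) % slots.size();
      ++dist;
    }
    return std::nullopt;
  }
};
```

The short probe chains are what make the tables smaller and more efficient, but the price is exactly the entry-shuffling that an off-thread reader holding no lock cannot tolerate.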
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I am aware that in simple cases, this can be achieved via the<br>
StaticPrefsList; by defining a VARCACHE_PREF there, I can read its value<br>
from other threads. But this doesn't help in my use case, where I need<br>
another thread to be able to query an extensible set of pref names that are<br>
not fully known at compile time.<br>
<br>
Currently, it looks like to do this, I'll have to iterate over the<br>
relevant prefs branch(es) ahead of time (on the main thread) and copy all<br>
the entries to some other place that is then available to my worker threads.<br>
For my use case, at least, the other threads only need read access;<br>
modifying prefs could still be limited to the main thread.<br>
</blockquote>
<br>
<br>
That's probably your best option, yeah. Although I will say that those kinds<br>
of extensible preference sets aren't great for performance or memory usage,<br>
so switching to some other model might be better.<br>
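A minimal sketch of that copy-ahead-of-time approach: enumerate the branch on the main thread into an immutable snapshot, then hand the snapshot to workers via shared_ptr, after which reads need no locking at all. The plain std::map here stands in for walking a real nsIPrefBranch:

```cpp
#include <map>
#include <memory>
#include <string>

using PrefSnapshot = std::map<std::string, std::string>;

// Main thread only: copy every pref under aBranch into a new snapshot.
// The returned pointer is const, so worker threads can only read it.
std::shared_ptr<const PrefSnapshot> SnapshotBranch(
    const std::map<std::string, std::string>& aAllPrefs,
    const std::string& aBranch) {
  auto snap = std::make_shared<PrefSnapshot>();
  for (const auto& [name, value] : aAllPrefs) {
    if (name.rfind(aBranch, 0) == 0) {  // name starts with the branch prefix
      snap->emplace(name, value);
    }
  }
  return snap;  // immutable from here on: safe to share with any thread
}
```

If prefs change, the main thread builds a fresh snapshot and swaps the shared_ptr; workers holding the old one keep a consistent (if stale) view until they pick up the new pointer.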
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Possible? Or would the overhead of locking be too crippling?<br>
</blockquote>
<br>
<br>
The latter, I'm afraid.<br>
<br>
_______________________________________________<br>
dev-platform mailing list<br>
<a href="mailto:dev-platform@lists.mozilla.org" target="_blank">dev-platform@lists.mozilla.org</a><br>
<a href="https://lists.mozilla.org/listinfo/dev-platform" rel="noreferrer" target="_blank">https://lists.mozilla.org/listinfo/dev-platform</a><br>
</blockquote></blockquote>
<br></div></div><span class="HOEnZb"><font color="#888888">
-- <br>
Kris Maglione<br>
Senior Firefox Add-ons Engineer<br>
Mozilla Corporation<br>
<br>
On two occasions I have been asked, "Pray, Mr. Babbage, if you put<br>
into the machine wrong figures, will the right answers come out?" I am<br>
not able rightly to apprehend the kind of confusion of ideas that<br>
could provoke such a question.<br>
--Charles Babbage</font></span><div class="HOEnZb"><div class="h5"><br>
<br>
_______________________________________________<br>
firefox-dev mailing list<br>
<a href="mailto:firefox-dev@mozilla.org" target="_blank">firefox-dev@mozilla.org</a><br>
<a href="https://mail.mozilla.org/listinfo/firefox-dev" rel="noreferrer" target="_blank">https://mail.mozilla.org/listinfo/firefox-dev</a><br>
</div></div></blockquote></div><br></div>