So, I wrote some code to put places.sqlite data into Mentat

Nicholas Alexander nalexander at
Tue Apr 17 16:50:37 UTC 2018


On Tue, Apr 17, 2018 at 1:18 AM, Thom Chiovoloni <tchiovoloni at> wrote:

> In my 1:1 with Mark I talked about how I was a little concerned that
> (AFAIK) we hadn't tried to run Mentat at a scale equivalent to a user's
> Places database (I had also mentioned this a few times to various others).
> Eventually, if Mentat is to actually be the way forward for synced storage
> in Firefox, this seems necessary, and it would be good to find problems
> sooner rather than later.
> I wrote some hacky code this evening to stuff a representative set of the
> `places.sqlite` data (the relevant-looking fields from your visit and place
> info) into the Mentat DB. You can find the code here
>, and if you have a Rust dev
> environment set up it should be usable. The start of the README explains
> how to use it.
> I wrote a lot more detail in the README, but the TL;DR is that my findings
> were mixed. The performance is not very good, but it also wasn't so bad as
> to be unusable on a good machine. That said, I accidentally destroyed my
> places.sqlite while writing this code, so I only had a subset of my
> history to test on.
> Caveat 1: The code is hacky and doesn't handle errors robustly. It may or
> may not work for you. Ask me on IRC, file a bug, or reply if it doesn't.
> That said, its connection to Places is read-only, so your history and
> bookmarks shouldn't be at any risk.
> Caveat 2: I did this mostly out of curiosity, and I likely don't really
> know how to use Mentat properly, so my results should be taken with a
> grain of salt. Someone who knew what they were doing would probably get
> better numbers, but I don't know how much better. I'm skeptical it would
> compare favorably to the current system, but it also might not need to.

Thanks for investigating this!  I went through a very similar process to
understand write throughput in Datomish (to a first approximation).
(Datomish was the prototype Clojure/ClojureScript implementation of the
ideas we're pushing forward in Mentat.)  For the curious, the Datomish
repository is archived.

I used those tests to motivate the approach we're taking in the transactor
in Mentat, where we upsert in a series of rounds, each round being a single
bulk SQLite operation.  When we stopped working on the Datomish
implementation, ~90% of our execution time was spent inside SQLite, which I
took as sufficiently little Datomish overhead to justify moving on.  It may
be that Mentat's own overhead is more significant.
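To make "upsert in rounds, each round a bulk SQLite operation" concrete,
here is a minimal Python/sqlite3 sketch.  The single `datoms` table and the
`transact` helper are made up for illustration; Mentat's real storage
layout and transactor are considerably more involved:

```python
import sqlite3

# Hypothetical single-table schema, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE datoms (e INTEGER, a TEXT, v TEXT, PRIMARY KEY (e, a, v));
    CREATE TEMP TABLE staged (e INTEGER, a TEXT, v TEXT);
""")

def transact(conn, datoms):
    # Round 1: stage the incoming batch with one bulk insert.
    conn.execute("DELETE FROM temp.staged")
    conn.executemany("INSERT INTO temp.staged VALUES (?, ?, ?)", datoms)
    # Round 2: one bulk statement adds only the datoms not already present.
    cur = conn.execute("""
        INSERT INTO datoms
        SELECT s.e, s.a, s.v FROM temp.staged s
        WHERE NOT EXISTS (SELECT 1 FROM datoms d
                          WHERE d.e = s.e AND d.a = s.a AND d.v = s.v)
    """)
    conn.commit()
    return cur.rowcount  # how many datoms were genuinely new

batch = [(1, ":page/url", "https://example.com"),
         (1, ":page/title", "Example")]
print(transact(conn, batch))  # 2 -- first import, both new
print(transact(conn, batch))  # 0 -- re-import, everything already present
```

The point is that per-datom work happens inside SQLite's bulk statements
rather than in per-row round trips from the host language.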

Of course, as rnewman points out, there are much faster ways to bulk import
into Mentat for production uses, but I will take a look at your experiment
and see if we can use it as a first perf test for Mentat.  In particular,
repeated import is a real stress test for the transactor: every single
datom we try to transact will already be present in the store, so the
question is: how fast can we recognize that and move on?
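For illustration, one way to recognize the "everything is already present"
case in a single bulk operation is a set difference against the store.
Again a Python/sqlite3 sketch over a hypothetical `datoms` table, not
Mentat's actual schema or approach:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE datoms (e INTEGER, a TEXT, v TEXT)")
conn.execute("CREATE TEMP TABLE incoming (e INTEGER, a TEXT, v TEXT)")
conn.executemany("INSERT INTO datoms VALUES (?, ?, ?)",
                 [(1, ":page/url", "https://example.com")])

def novel_datoms(conn, batch):
    conn.execute("DELETE FROM temp.incoming")
    conn.executemany("INSERT INTO temp.incoming VALUES (?, ?, ?)", batch)
    # Which incoming datoms are not yet stored?  An empty result means the
    # transactor could skip its write rounds entirely for this batch.
    return conn.execute("SELECT e, a, v FROM temp.incoming"
                        " EXCEPT SELECT e, a, v FROM datoms").fetchall()

print(novel_datoms(conn, [(1, ":page/url", "https://example.com")]))  # []
```

On a re-import every batch hits this fast path, so the cost of the check
itself is what the stress test would measure.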

Thanks again for starting this!

More information about the Sync-dev mailing list