A point of reference
Joshua Cranmer 🐧
pidgeot18 at gmail.com
Wed Mar 7 20:11:44 UTC 2018
On 3/7/2018 1:16 PM, Ben Bucksch wrote:
> R Kent James wrote on 07.03.18 17:13:
>> most serious users of the Electron framework are finding it
>> inadequate from a performance point of view
> So, what's the bottleneck?
> Raw processing power?
> Data communication between backend and frontend slow?
> Inefficient data structures?
> Many tests show that the JS engine itself is very fast and on par with
> C code, so blanket statements are not helpful.
As someone who works on compilers, I will say this: it's very easy to
make benchmarks that will say whatever you want them to. The actual
metric of which language will offer better performance in terms of
idiomatic coding styles is a rather more difficult question to answer
and is, to some degree, even ill-specified as a question.
If you look at the recent trend towards asm.js and WebAssembly (which
has participation from every major JS vendor at this point), that is a
clear admission that JS is not sufficient for performance in several
situations. One of the issues that crops up with JS is that getting good
performance requires tuning your code to the VM, so to speak (e.g.,
keeping all of your callsites monomorphic to avoid JIT bailouts). There
was a recent series of blog posts going back and forth on implementing
source maps in WebAssembly via Rust versus native JS; the Rust
implementation was still faster the last I heard, and its source code
was arguably far more readable.
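To illustrate the monomorphism point: a sketch (illustrative shapes, not a benchmark) of how mixing object shapes at a single callsite defeats a JIT's inline caches, which is exactly the kind of VM-specific tuning JS demands:

```javascript
// A property access like `p.x` inside `magnitude` compiles to a fast
// inline-cache hit only while every object passed in shares one hidden
// class ("shape").
function magnitude(p) {
  return Math.sqrt(p.x * p.x + p.y * p.y);
}

// Monomorphic: every caller passes the same shape {x, y}, so the JIT
// can specialize the property loads.
const mono = [{ x: 3, y: 4 }, { x: 6, y: 8 }].map(magnitude);

// Polymorphic: mixing shapes at the same callsite forces the inline
// cache to check multiple hidden classes, or bail out of optimized code.
const poly = [{ x: 3, y: 4 }, { y: 8, x: 6, z: 0 }].map(magnitude);
```

The results are identical either way; only the VM's ability to keep the callsite on its fast path differs, which is why this kind of tuning is invisible in the source but visible in profiles.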
For Atom, one thing they did call out was that DOM and CSS performance
was way too slow for their needs; as I understand Xray, the plan is to
render the text directly via WebGL and bypass the DOM data structures
entirely. While this is admittedly an edge case, it's also telling that
performance falls short on what should be the fastest part of JS:
interacting with a tuned native library via specifically-tuned FFI
hooks. What's identified as slow is not a case of "I need to maximize
ALU and/or memory usage," nor is it a heavy reliance on FFI to other,
possibly-unoptimized systems (such as xpconnect).
There is some reason to believe that Thunderbird would occupy the "sour
spot" of performance: one of the key throughput limiters involves having
to work through text of mixed charsets with a mixture of string and
binary processing. I've held out hope in the past that maybe we could
see better performance if we made the database convert to Unicode much
earlier and maybe we didn't ping-pong the FFI so much, but the
demonstrated performance so far has not been encouraging.
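A minimal sketch of the "convert to Unicode earlier" idea (the bytes and charset here are made up, not Thunderbird's actual pipeline): decode message bytes to a JS string once at the boundary, so all later passes are pure string work instead of ping-ponging between byte-level and string-level processing:

```javascript
// Hypothetical message body stored as Latin-1 bytes ("café").
const latin1Bytes = new Uint8Array([0x63, 0x61, 0x66, 0xe9]);

// Decode to Unicode once, up front...
const text = new TextDecoder("latin1").decode(latin1Bytes);

// ...then every subsequent pass operates on the string, with no
// further trips back through the byte representation.
const hasCafe = text.includes("café");
```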
> Today, most webapps are incredibly slow. But they are slow, because
> the developers are loading MB large JS libraries to do some
> animations. Of course that has to be slow.
> Without identifying the exact cause of a performance problem, you
> can't fix it.
So you're saying you've specifically measured the performance of these
apps and found that loading the large JS libraries was the primary
cause of the slowdown, and that you were able to eliminate those
libraries, keep functionality intact, and achieve significantly better
performance? Or are you just looking at the size of those libraries,
saying "gee, that must be why you're slow," and moving on? I do agree
that you need to measure performance to understand what the issues are
and how to improve them, but it is also possible to make general
comments about performance and have some idea of how things are likely
to turn out.
There are some reasons to doubt the performance of JS implementations,
not least of which is demonstrated lack-of-performance in some recent
rewrites. I think there is enough reason that we should make an honest
and earnest attempt to actually assess what the performance would be
using JS versus using another implementation (such as Rust), and whether
a JS implementation would be sufficient.
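A sketch of the kind of honest measurement being proposed: time the same task repeatedly and compare medians rather than trusting a one-off number (the task here is a stand-in, not a real Thunderbird workload; a real assessment would run representative workloads under each candidate implementation):

```javascript
// Run `fn` several times and return the median wall-clock time in ms,
// which is more robust to JIT warmup and scheduler noise than a single run.
function timeMedian(fn, runs = 5) {
  const samples = [];
  for (let i = 0; i < runs; i++) {
    const start = process.hrtime.bigint();
    fn();
    samples.push(Number(process.hrtime.bigint() - start) / 1e6);
  }
  return samples.sort((a, b) => a - b)[Math.floor(runs / 2)];
}

// Stand-in workload.
const task = () => {
  let sum = 0;
  for (let i = 0; i < 1e5; i++) sum += i;
  return sum;
};

const ms = timeMedian(task);
console.log(`median: ${ms.toFixed(3)} ms`);
```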
Thunderbird and DXR developer
Source code archæologist
More information about the tb-planning mailing list