Invitation for technical discussion on next-generation Thunderbird
Joshua Cranmer 🐧
pidgeot18 at gmail.com
Sat Apr 22 05:51:25 UTC 2017
On 4/21/17 10:45 PM, Ben Bucksch wrote:
>
> * Gecko (mostly for historical reasons)
> JS in Gecko is obviously fairly advanced
>
I distinguish between two different subsets of Gecko: xpcshell and web
app (i.e., do you get window as your global or do you get Components on
the global?). For backend code, the distinction matters.
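For illustration, backend code that has to care about that distinction
ends up doing something like this rough sketch (the function name is
just illustrative; only the property checks matter):

    // Rough sketch: which kind of global did this backend module get
    // loaded into?
    function detectGlobal() {
      if (typeof Components !== "undefined" && Components.classes) {
        return "xpcshell";   // privileged/XPCOM global
      }
      if (typeof window !== "undefined") {
        return "webapp";     // ordinary content window global
      }
      return "unknown";
    }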
>
> * type checks. Not there yet, I think. Could use TypeScript, but
> that has other drawbacks. Can emulate type checks with "assert(foo
> instanceof Foo);" at the start of public functions, as poor man's
> type checks. Hopefully we'll get them with later JS engines. =>
> EMULATE
>
I don't think there's any proposal to add any sort of optional types to
JS itself. That means we're going to need to rely on linting tools or an
external compiler; I don't think dynamic type assertions are sufficient,
or a compelling substitute for strong typing: you'll want some sort of
static tool. The advantage of TypeScript is that it's reasonably
standardized and well-supported; the disadvantage is that it will lag
behind in JS feature support and it requires a transpilation step.
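To make the "poor man's type checks" concrete, the dynamic version looks
roughly like the sketch below (Folder and moveMessages are just
illustrative names; assert is node's assert module). It only fires at
runtime, on the code paths you actually exercise, which is exactly why a
static tool is still wanted:

    // Sketch of runtime "type checks": they only catch misuse that
    // actually executes, unlike a static checker such as TypeScript.
    const assert = require("assert");

    class Folder { /* ... */ }

    function moveMessages(folder, messageIds) {
      assert(folder instanceof Folder, "folder must be a Folder");
      assert(Array.isArray(messageIds), "messageIds must be an array");
      // ... actual implementation ...
    }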
>
> * modules. Missing in language. But we have require() from npm. Easy
> to use, works well, and gives us easy access to tons of code from
> npm. I don't see that a native module system would gain us a lot
> on top of that. If native modules come later, and it's better, and
> it's integrated with npm, we can still switch to it => EMULATE
>
There are a few subtly different kinds of module support (particularly
when async module loading might come into play). Something like
<https://github.com/umdjs/umd/tree/master/templates> ends up being
needed, and I think it would be very important to standardize on the
kind of module wrapper that gets used.
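For reference, the "returnExports" flavor of that wrapper looks roughly
like this sketch (the module and dependency names are placeholders);
picking one such flavor and sticking to it is the kind of
standardization I mean:

    // Sketch of a UMD ("returnExports") wrapper; "mymodule" and "dep"
    // are placeholder names.
    (function (root, factory) {
      if (typeof define === "function" && define.amd) {
        define(["dep"], factory);                    // AMD loader
      } else if (typeof module === "object" && module.exports) {
        module.exports = factory(require("dep"));    // CommonJS / node
      } else {
        root.mymodule = factory(root.dep);           // browser global
      }
    }(typeof self !== "undefined" ? self : this, function (dep) {
      return {
        // ... exported API ...
      };
    }));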
>
>
> I agree we need that. But with measure. You can easily spend 40-80% of
> the coding time on tests alone. I know I have, in some projects.
> Writing tests should not need more than 20% of coding time, or
> actually 0% more (see below). To achieve that, the test suite needs to
> be very comfortable to work with.
>
> My measure would be: If you want to test your code, instead of opening
> the UI in every coding iteration, write a little test function that
> calls your function, and run that. Essentially, you save yourself the
> time of clicking through the UI every time, and instead use that same
> time to write the test. But no more. You do not write tests just for
> the sake of tests.
>
> I can't emphasize enough how important it is to get the right measure
> here. Wrong emphasis, and the project takes 5 times as long, i.e. 15
> years instead of 3 years. I've seen that happen. If we do that, we
> won't finish.
The best guideline I've heard for writing tests is that the point of a
test is to fail: if you don't expect a test could plausibly be broken by
a future changeset, then you are wasting time writing it. You also want
to avoid tests that break when you do something reasonable but minor (I
am guilty of this); an example of such a fragile test is
test_attachment_intl.js, which I have personally spent far more time
fixing than it has spent finding bugs. One remedy for the latter is
spending time to actually curate your test suite and give developers
better test assertions than "test that thing A is equal to thing B."
While I agree that achieving 100% test coverage is mostly a fool's
errand, I think there is high value in making sure that test coverage is
reasonably complete. In particular, if you're fixing a bug, there should
be an a priori assumption that a test needs to be written for that fix
(one that fails before the patch and succeeds after it).
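As a sketch of what I mean (the module and function here are
hypothetical; the harness doesn't matter), such a regression test is
just a direct restatement of the bug:

    // Hypothetical regression test for a header-parsing bug: it fails
    // on the unfixed code and passes once the patch lands.
    const assert = require("assert");
    const { parseAddressHeader } = require("./headers"); // hypothetical

    function test_bug_quoted_comma_in_display_name() {
      let parsed = parseAddressHeader('"Doe, Jane" <jane@example.com>');
      assert.strictEqual(parsed.length, 1);
      assert.strictEqual(parsed[0].email, "jane@example.com");
    }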
> Yes. Many projects are inherently a complete system, e.g.
> client/server. Ours is not - we're an email *client*. We interact with
> servers not under our control, but our users totally depend on the
> functioning of our client with these servers. We cannot allow even one
> day of gmail.com IMAP not working. Right now, we rely on anecdotal bug
> reports, and it takes time until we realize that the server changed.
> That needs to change.
>
> I'd like an "integration" test suite that tests our components (not
> UI), but with real-world servers. We'll set up test accounts at all
> ISPs that have more than 1% market share among our users. Then we
> regularly run tests (at least every 6 hours or so, and after every
> commit) on all of these accounts, whether log in works, whether I get
> notified of new mail (IMAP IDLE/push), I can fetch emails, send
> emails, copy mails, etc. If that fails without a code change, we know
> immediately something changed at e.g. Gmail. If that fails after a
> code change, we know we broke e.g. Yahoo.
>
> I think this high-level integration test is far more important for
> Thunderbird users than classic unit tests on artificial local test
> servers. We'd test what the users actually use. We'd implicitly test
> all the code that's actually run when users use TB. And it's much
> faster to write, because one high-level function triggers a lot of
> low-level code.
The problem with integration tests is that, as a developer, I never want
to run them, because they're slow. Their value for the quick sanity
checks you do during most kinds of bug fixing is therefore minimal,
particularly in comparison with low-level protocol unit tests. A further
problem with the kind of tests you're proposing, against live accounts
at real ISPs, is that they are completely unusable for developers. Now,
having automated tests against real servers can be useful, but they
can't be the primary tests that we rely on for functionality.
Now, that said, I think the fakeservers were a mistake, in retrospect.
The original goal was to be able to easily and quickly mimic particular
server setups to test specific scenarios, but the amount of
implementation work needed to get them to that point is simply
staggering. We have better tools now that let people test against actual
server implementations in a completely sandboxed environment. I think
the best way to test these components is to establish tests against a
few open source server implementations, with common data setups, running
in Docker containers (I even have a repository somewhere that stuffs
OpenLDAP and Dovecot into a Docker container). Those should be combined
with a smattering of mock tests that explicitly feed expected data, for
the things that need very specific network conditions (we have had a few
bugs in the past that depended on "this literal ends on a specific
packet boundary," which isn't really testable without this kind of
test), and probably with a very thin layer that encapsulates the
necessary server-login boilerplate so those tests don't become fragile.
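As a rough sketch of the mock side (the IMAP response here is made up),
the point is simply that the test, not the network, controls where the
data gets split:

    // Sketch: a throwaway server that splits an IMAP-style literal
    // across two writes, so the parser under test sees the literal end
    // right at a read boundary.
    const net = require("net");

    const response = "* 1 FETCH (BODY[] {5}\r\nhello)\r\n";
    const splitAt = response.indexOf(")");  // literal ends at the split

    const server = net.createServer(socket => {
      socket.write(response.slice(0, splitAt));
      // Send the rest a moment later so the client almost certainly
      // gets it in a separate read.
      setTimeout(() => socket.end(response.slice(splitAt)), 50);
    });

    server.listen(0, () => {
      const port = server.address().port;
      // ... point the protocol code under test at localhost:port and
      // assert that it reassembles the literal correctly ...
    });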
> TB:NG should be pure JS, with the exception of the OS integration you
> list below. If some protocol implementation does not exist yet, but
> it's possible, e.g. LDAP, we'll write it.
>
> For those things that absolutely do need native calls, because of e.g.
> Kerberos credentials, we might still not come to the IPC question at
> all, because there's already a JS wrapper on npm, e.g.
> https://www.npmjs.com/package/kerberos
The thought I have for things like LDAP and GSSAPI is that there is very
little value in supporting them on mobile, so if we can't get them from
an assumed-OS-installed library (e.g. OpenLDAP or WinLDAP), there is
little point in maintaining an implementation ourselves.
>
> Let's cross that bridge when we get there.
>
>> * Filesystem
>> * Sockets
>> * DNS
>>
>
> This is all an integral part of the node.js/npm platform, e.g. require("fs").
>
> Remember that if we use existing npm libraries, they will already
> presume certain npm dependencies. E.g. if we use emailjs.js IMAP
> implementation, it already uses a certain socket module, likely the
> native node.js one. We should look out that they are somewhat
> consistent (not 13 different socket implementations by various
> libraries), but I don't see a point in rewriting library code to use
> our artificial wrapper.
>
> Essentially, there are already so many JS libraries on npm, and they
> give so much value, that at this point, I'd make npm support a
> requirement for any platform that we'll use. Including the platform
> APIs they use, e.g. node.js require("fs") and sockets etc. That's
> basically a requirement for all the already existing code.
>
> So, it's hard to make such decisions in isolation. We should first get
> our feet wet and make a prototype, and gain some experience with the
> new situation, and then we can derive coding standards and platform
> APIs from there.
>
> Essentially, I don't want to re-do NSPR in JS. That was probably
> Mozilla's biggest mistake, to write their own custom platform
> abstraction. It was a necessity back then, and maybe there was nothing
> better, but it led to the arcane platform it is today. Let's try to
> stay in sync with the existing JS community, and use the same platform
> APIs. That gives basis for collaboration, in both directions.
There is striking divergence between platforms on important APIs (like
sockets). Sometimes it's not possible to emulate an API everywhere
(e.g., node's fs library has synchronous operations, and synchronous
loads can't be emulated on some platforms). Some APIs also bake in
older, stupider paradigms (once again, node's function (err, result)
callback convention instead of modern Promises, which is just begging to
swallow errors). I don't think there is a single "existing JS
community"; the platforms are just too different.
>
>> * Database (I think we need ACID-compliant database, and I don't
>> think we want to roll our own here. Note too that "load it all
>> into memory" is not going to satisfy performance requirements
>> when dealing with large folders).
>>
>
> Again, let's cross that bridge when we get there. For starters, I just
> need preferences, mbox, and JSON. That gets me most of my way.
Already in Thunderbird, we're well aware that a database API can be
difficult or impossible to change if you get it wrong. And there's
already one respect in which we know ours is wrong: it takes too long to
open a database. One of the things I keep harping on is the need to set
a goal for what a reasonable large-scale program should look like,
because you need to look at what that scale implies before you can
design the APIs appropriately.
What highly concerns me about your statement is that you're blithely
assuming something simple will scale up to the end goals you want, that
you'll bake that assumption into your API, and that you'll then face a
years-long refactoring effort to fix the bad design (JS is naturally
harder to refactor than C++). My simple calculations suggest that a
large folder (I'm using 1M messages as my design criterion) comes to a
few hundred MB of data (assuming no string interning, and not counting
indexes), and disk read speeds can't load that into memory fast enough.
That means you can't assume a folder will be in memory (we already know
that from today's Mork database), which means every database API has to
be assumed to be asynchronous, which has very important ramifications
for the entire rest of the codebase. And any time you need something
more complex than read/writeFileAtomic, well, you're probably going to
want someone else to write that database for you.
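Concretely, "every database API has to be assumed to be asynchronous"
means even the most basic operations end up shaped like this sketch
(the class and method names are hypothetical, not a proposal):

    // Hypothetical shape of a folder database where nothing is assumed
    // to be resident in memory; every call returns a Promise.
    class FolderDB {
      async open(path) { /* open or create the on-disk store */ }
      async getMessage(id) { /* single-message lookup */ }
      async listMessages({ offset, limit, sortBy }) {
        /* windowed view for the UI */
      }
      async addMessage(headers) { /* insert, updating any indexes */ }
      async close() { /* flush and release the store */ }
    }

    // Callers therefore can't do a synchronous "load the folder, then
    // paint"; the UI has to await a window of rows, e.g.:
    //   let db = new FolderDB();
    //   await db.open(profileDir + "/INBOX.db");
    //   let rows = await db.listMessages({ offset: 0, limit: 100,
    //                                      sortBy: "date" });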
--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist