Invitation for technical discussion on next-generation Thunderbird

Ben Bucksch ben.bucksch at beonex.com
Sun Apr 23 01:59:34 UTC 2017


Hey Joshua,

thanks for this nice discussion. Some responses below.

Joshua Cranmer 🐧 wrote on 22.04.2017 07:51:
>>
>>   * type checks. Not there yet, I think. Could use TypeScript, but
>>     that has other drawbacks. Can emulate type checks with
>>     "assert(foo instanceof Foo);" at the start of public functions,
>>     as poor man's type checks. Hopefully we'll get them with later JS
>>     engines. => EMULATE
>>
> I don't think there's any proposal to add any sort of optional types 
> to JS itself. That means that we're going to need to rely on linting 
> tools or external compilers--I don't think relying on dynamic type 
> assertions is sufficient or decently motivating for adding strong 
> typing: you'll want some sort of static tool. The advantage of 
> TypeScript is that it's reasonably standardized and well-supported; 
> the disadvantage is that it is going to lag behind in JS support and 
> it requires a transpilation step. 


I don't like mandatory compilation/transpilation of JS. I do like 
linting and optional compilers.

closure-compiler might be an option, because it's optional: it could be 
applied only on release, or as a lint step.
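
For example, closure-compiler checks JSDoc type annotations without 
changing the shipped code, so it fits the "optional" requirement. A 
minimal sketch (Folder is a made-up type for illustration):

    /**
     * @param {!Folder} folder
     * @return {!Promise<number>}
     */
    function countMessages(folder) {
      return folder.getMessages().then(messages => messages.length);
    }

Run it in a checks-only mode as part of lint, and the annotations cost 
nothing at runtime.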

Other suggestions?

>>   * modules. Missing in language. But we have require() from npm.
>>     Easy to use, works well, and gives us easy access to tons of code
>>     from npm. I don't see that a native module system would gain us a
>>     lot on top of that. If native modules come later, and it's
>>     better, and it's integrated with npm, we can still switch to it
>>     => EMULATE
>>
>
> There are a few subtly different kinds of module support (particularly 
> when async module loading might come into play). Something like 
> <https://github.com/umdjs/umd/tree/master/templates> ends up coming 
> up, and it would be very important, I think, to standardize on the kind 
> of module wrapper that ends up getting used.

If we really need to change this later on, it's a little work, about 
10-30 minutes per source file, but the changes are limited to the start 
and the end of each JS file and don't touch the code itself. So it's 
inconvenient, but definitely possible, to switch to an async require() 
later. I had to do exactly that when we ported a complex XUL extension 
to Google Chrome: we wrote our own async require() implementation and 
ported all JSMs to it. So, it's not dramatic.
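
To illustrate what "only the start and the end change" means, here is a 
hedged sketch, with an AMD-style define() standing in for whatever async 
wrapper we might pick (MailAccount and verifyLogin are made-up names):

    // Before: plain CommonJS, as used with npm require()
    const MailAccount = require("./MailAccount");
    function verifyLogin(account) { /* ... body unchanged ... */ }
    module.exports = { verifyLogin };

    // After: the same file under an async module wrapper.
    // Only the first and last lines differ; the body stays identical.
    define(["./MailAccount"], (MailAccount) => {
      function verifyLogin(account) { /* ... body unchanged ... */ }
      return { verifyLogin };
    });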

npm require() is good enough for now, and very widely used. (See also below.)

> The best guideline I've heard for writing tests is that the point of 
> tests is to fail--if you expect a test is not going to fail by a 
> plausible changeset, then you are wasting time with that test.

My guideline is: if you fix a bug and a test fails, the test was wrong, 
because it encoded the bug.
OTOH, if you introduce a new bug (one that is not just an edge case) and 
the tests still pass, the test suite was insufficient and should be 
extended.

> You also want to avoid writing tests that will break if you do 
> something reasonable but minor (I am guilty of this); an example of 
> such a fragile test is test_attachment_intl.js, which I have 
> personally spent far more time fixing than it has identified bugs.

+1

> In particular, if you're fixing a bug, there should be an a priori 
> assumption that a test needs to be written for that fix (one that 
> fails before the patch and succeeds after it).

That's a different question. We won't be fixing bugs, but writing a huge 
amount of new code, so the tradeoffs are completely different. It's far 
more important to get this finished and have real people test it in 
real-world situations than to have 100% test coverage from the get-go. 
If we were running in the wrong direction, because we implemented 
features that people don't need, or the whole project is a failure, and 
we had spent 80% of our time on writing tests, the damage would be 5 
times higher.

We can always add more tests later. For a start, I'd just write tests 
that do the same things I would do if I were testing by clicking through 
the UI.
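
For illustration, such a test could look roughly like this (assuming 
mocha as the runner; Account and testCredentials are hypothetical 
placeholders for the prototype API, not existing code):

    const assert = require("assert");

    it("shows new mail after Get Mail", async () => {
      // log in and open the inbox, as the UI would on startup
      const account = await Account.login(testCredentials);
      const inbox = account.folders.get("INBOX");
      await inbox.fetch(); // what clicking "Get Mail" would trigger
      assert(inbox.messages.length > 0);
    });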

> The problem with integration tests is that, as a developer, I never 
> want to run them, since they're slow.

Yes. These integration tests that I mentioned would even require test 
accounts, so they would be hard to set up.

What I'd do:

  * Use GitHub pull requests or Phabricator to do reviews.
  * Once approved, there's a button to "land". This triggers CI to:
      o merge current master into the branch
      o run the test suite, both unit and integration tests.
      o If they pass, the changeset gets merged to master, and the dev
        gets emailed.
      o If anything fails, the dev gets emailed with the error.


This way, only commits that pass all tests land on master; the 
developer neither has to run the whole test suite nor wait for it, and 
the tree is never red.
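
As a rough sketch, the "land" step could be a small Node script along 
these lines (the npm script names and the sendMail() helper are 
assumptions, not existing infrastructure):

    const { execSync } = require("child_process");

    function land(branch, devEmail) {
      try {
        execSync(`git checkout ${branch} && git merge origin/master`);
        execSync("npm test");                  // unit tests
        execSync("npm run test:integration");  // integration tests
        execSync(`git checkout master && git merge ${branch} && git push`);
        sendMail(devEmail, "Landed", `${branch} is on master`);
      } catch (e) {
        sendMail(devEmail, "Land failed", e.message);
      }
    }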

> Now, that said, I think the fakeservers are a mistake, in retrospect.

Glad you agree. IIRC, I implemented some of them (I don't remember which 
exactly), and the Socket.js, Auth.js, and MIME.js that I wrote there 
later became the basis for my IMAP and POP3 client implementations.

Doing that was fun. Yay, implementing an IMAP server in JS! But it was 
fairly obvious that re-implementing all servers, with all their quirks, 
is not a viable long-term approach, even if you only fake them to some 
degree.

> I think that establishing tests against a few open source server 
> implementations with common data setups in Docker containers is the 
> best way to test these components.

Yup, that's a nice idea. But it's nothing I'd want to run on my own 
machine. I'd want to trigger a "try" build and test server for that. Or 
just land the change and see whether it passes (see above).
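
For illustration, such a Docker-backed integration test could be driven 
from JS roughly like this (the image name is an assumption; in reality 
the container would also need to be seeded with the common data setup):

    const { execSync } = require("child_process");

    // Start a throwaway IMAP server, run tests against it, tear it down.
    function withImapServer(testFn) {
      const id = execSync("docker run -d -p 10143:143 dovecot/dovecot")
        .toString().trim();
      try {
        testFn({ host: "localhost", port: 10143 });
      } finally {
        execSync(`docker rm -f ${id}`);
      }
    }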

>> TB:NG should be pure JS, with the exception of the OS integration you 
>> list below. If some protocol implementation does not exist yet, but 
>> it's possible, e.g. LDAP, we'll write it.
>>
>> For those things that absolutely do need native calls, because of 
>> e.g. Kerberos credentials, we might still not come to the IPC 
>> question at all, because there's already a JS wrapper on npm, e.g. 
>> https://www.npmjs.com/package/kerberos
>
> The thought I have for things like LDAP and GSSAPI is that there is 
> very little value supporting those on mobile, so if I can't get them 
> from an assumed-OS-installed library (e.g. OpenLDAP/WinLDAP), there is 
> little point in maintaining an implementation ourselves.

That's a nice perspective. Most users won't care about LDAP or Kerberos 
or S/MIME. OTOH, binary components are a major burden for build and 
release, because they have to be shipped for at least 3 platforms, and 
they should be C++ compiled from source (no executables in the source 
repo, please!), which seriously ups the build dependencies and hinders 
new devs.

This gave me the idea that these should be extensions, and not shipped 
with core. They would still be officially developed and maintained by 
the TB:NG project, just not as part of core. If we have extension 
dependencies, we could even make an "enterprise" meta package.
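
Sketched as a package.json, such a meta package could look like this 
(all package names are made up for illustration):

    {
      "name": "tbng-enterprise",
      "description": "Hypothetical meta package pulling in the enterprise extensions",
      "dependencies": {
        "tbng-ldap": "*",
        "tbng-kerberos": "*",
        "tbng-smime": "*"
      }
    }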

The core should be pure JS, modulo platform APIs.

>> Essentially, there are already so many JS libraries on npm, and they 
>> give so much value, that at this point, I'd make npm support a 
>> requirement for any platform that we'll use. Including the platform 
>> APIs they use, e.g. node.js require("fs") and sockets etc. That's 
>> basically a requirement for all the already existing code.

>> Essentially, I don't want to re-do NSPR in JS. ...  Let's try to stay 
>> in sync with the existing JS community, and use the same platform 
>> APIs. That gives basis for collaboration, in both directions.
>

> I don't think there is a single "existing JS community"; the platforms 
> are just too different.

When I started prototyping for this project, I took another look at 
Electron and npm. I am amazed: it's very simple to use, and nearly every 
useful non-GUI JS component is on npm. There is a lot of useful code 
there.

npm as JS component repository and node.js as runtime are really 
emerging as *the* JS platform. Sure, there are others, but they are 
marginal in comparison. (I'm of course excluding the whole web sphere. 
They have another target and other problems.)

Give it a try.

I was uncertain before, puzzled about which module system or runtime to 
use. After I tried npm and node.js, I think that's where things are 
going to go.

(If you didn't know: Netflix and PayPal rewrote their servers in 
node.js, and found that it's faster than Java, both in development time 
(PayPal tested this) and execution time (Netflix tested this). Both 
companies are now running node.js in production.)

> node's function (err, result) callback API instead of modern Promises, 
> which is just begging to swallow errors).

function(err, result) is just a bad API, agreed. Even without Promises, 
it would still be a bad API. I'd hope we could write either a wrapper 
for it, or an adapter that converts it to a Promise.
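
Such a wrapper is small; a minimal sketch:

    // Convert node's (err, result) callback style into a Promise.
    function promisify(func) {
      return (...args) =>
        new Promise((resolve, reject) => {
          func(...args, (err, result) =>
            err ? reject(err) : resolve(result));
        });
    }

    // Usage:
    const fs = require("fs");
    const readFile = promisify(fs.readFile);
    readFile("prefs.json", "utf8").then(json => JSON.parse(json));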


>>
>>>   * Database (I think we need ACID-compliant database, and I don't
>>>     think we want to roll our own here. Note too that "load it all
>>>     into memory" is not going to satisfy performance requirements
>>>     when dealing with large folders).
>>>
>>
>> Again, let's cross that bridge when we get there. For starters, I 
>> just need preferences, mbox, and JSON. That gets me most of my way.
>
> Already in Thunderbird, we're well aware that a database API can be 
> difficult or impossible to change if you get it wrong. And there's 
> already one respect in which we know it's wrong--it takes too long to 
> open a database.

OK. You are right that we need to face this question at some point. And 
I don't know what the right answer is here. You probably know better 
than I do.

My point is just:

  * There might not be one single solution for all DB needs in TB:NG. It
    may differ from case to case, so we should evaluate specific cases:
    where we need DBs, why, and what the requirements are. You probably
    have some cases in mind, so why don't you just dump your brain
    somewhere?
  * Now is probably not the right time to make a project-wide decision
    on which DB to use.

> What I am highly concerned about in your statement is that you're 
> blithely assuming that something simple is going to scale up to the 
> end goals you want to scale it up to, and you're going to bake that 
> design into your API, and you're going to be faced with a years-long 
> refactoring attempt to try to fix that bad design.

I see. That's a very valid concern. I can put your mind at ease there, 
because I think the API in my prototype already takes care of that. 
First off, my prototype has no local storage at all and needs to fetch 
everything from the IMAP server on every load, so all APIs are already 
async.

More importantly, "JS Collections" 
<https://github.com/benbucksch/jscollection/> are going to be one 
cornerstone of the API. The lists of accounts, of folders, and of 
messages are all such collections. These collections are observable, so 
there is not even a need for an explicit async API; it comes for free. 
The IMAP implementation returns, e.g., a collection for the message list 
of a given folder. Initially, the list may be empty; as the messages are 
read, the observers are called and the messages are shown. The UI 
(specifically, <fastlist>) uses the same collection as input and hooks 
up the observers automatically. So, the entire application code for 
async (!) loading of messages is:

    messageList.showCollection(folder.messages);

That's an almost literal quote from the actual prototype code.
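
Other consumers of the collection can hook up observers directly. A 
sketch (the observer method names here are my assumption, not verified 
against the jscollection API):

    folder.messages.registerObserver({
      added: (messages) => messages.forEach(msg => display(msg)),
      removed: (messages) => messages.forEach(msg => undisplay(msg)),
    });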

A local mail storage is obviously needed even for IMAP, and it would 
not change the folder API; it would essentially just insert a cache at 
the implementation level. I'm thinking of some sort of DB or JSON for 
the headers, and mbox or maildir for the content.
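
A hedged sketch of what that could look like, without changing the API 
(all names here are made up, not prototype code):

    class CachedFolder {
      constructor(imapFolder, headerCache) {
        this._imapFolder = imapFolder;   // live IMAP implementation
        this._headerCache = headerCache; // e.g. JSON headers on disk
      }
      get messages() {
        // serve cached headers instantly, then refresh from the server;
        // observers on the collection pick up the new messages for free
        const collection = this._headerCache.loadMessageList();
        this._imapFolder.fetchNewMessages()
          .then(newMsgs => newMsgs.forEach(msg => collection.add(msg)));
        return collection;
      }
    }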

This is more detail about APIs and implementation than we should 
discuss in this thread, or even at this time. But your question was 
valid and important, so I wanted to answer it. I hope this removes your 
concern.

Ben