<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">Hey Joshua,<br>
<br>
thanks for this nice discussion. Some responses below.<br>
<br>
Joshua Cranmer 🐧 wrote on 22.04.2017 07:51:<br>
</div>
<blockquote
cite="mid:242e6c07-4cff-f9b3-8664-ceebbae9becc@gmail.com"
type="cite">
<blockquote type="cite"
cite="mid:63e1bef2-43c7-eff2-7b74-368d8f34a661@beonex.com">
<ul>
<li>type checks. Not there yet, I think. Could use TypeScript,
but that has other drawbacks. Can emulate type checks with
"assert(foo instanceof Foo);" at the start of public
functions, as poor man's type checks. Hopefully we'll get
them with later JS engines. => EMULATE</li>
</ul>
</blockquote>
I don't think there's any proposal to add any sort of optional
types to JS itself. That means that we're going to need to rely on
linting tools or extraneous compilers--I don't think relying on
dynamic type assertions is sufficient or decently motivating for
adding strong typing: you'll want some sort of static tool. The
advantage of TypeScript is that it's reasonably standardized and
well-supported; the disadvantage is that it is going to lag behind
in JS support and it requires a transpilation step. </blockquote>
<br>
<br>
I don't like compiling/transpiling JS. I like linting and optional
compilers.<br>
<br>
closure-compiler might be an option, because it's optional and could
be applied only on release or as a lint step.<br>
<br>
Other suggestions?<br>
<br>
<blockquote
cite="mid:242e6c07-4cff-f9b3-8664-ceebbae9becc@gmail.com"
type="cite">
<blockquote type="cite"
cite="mid:63e1bef2-43c7-eff2-7b74-368d8f34a661@beonex.com">
<ul>
<li>modules. Missing in language. But we have require() from
npm. Easy to use, works well, and gives us easy access to
tons of code from npm. I don't see that a native module
system would gain us a lot on top of that. If native modules
come later, and they're better and integrated with npm,
we can still switch to them => EMULATE</li>
</ul>
</blockquote>
<br>
There are a few subtly different kinds of module support
(particularly when async module loading might come into play).
Something like <a moz-do-not-send="true"
class="moz-txt-link-rfc2396E"
href="https://github.com/umdjs/umd/tree/master/templates"><https://github.com/umdjs/umd/tree/master/templates></a>
ends up coming up, and it would be very important, I think, to
standardize on the kind of module wrapper that ends up getting
used.<br>
</blockquote>
<br>
If we really need to change this later on, it's a little work, about
10-30 minutes per source file, but the changes are limited to the
start and the end of the JS file and don't touch the code itself, so
it's inconvenient, but definitely possible to change to an async
require() later. I had to do that exact thing when we ported a
complex XUL extension to GChrome: we wrote our own async require()
implementation and ported all JSMs to it. So, it's not dramatic.<br>
<br>
npm require() is good enough for now, and very widely used. (See
also below)<br>
<br>
<blockquote
cite="mid:242e6c07-4cff-f9b3-8664-ceebbae9becc@gmail.com"
type="cite"> The best guideline I've heard for writing tests is
that the point of tests is to fail--if you expect that a test will
not be failed by any plausible changeset, then you are wasting time
with that test.</blockquote>
<br>
My guideline is: If you fix a bug, and the test fails, the test was
wrong, because it encoded the bug.<br>
OTOH, if you introduce a new bug (which is not an edge case), and
the test still passes, the test was insufficient and should be
adapted.<br>
<br>
<blockquote type="cite">You also want to avoid writing tests that
will break if you do something reasonable but minor (I am guilty
of this); an example of such a fragile test is
test_attachment_intl.js, which I have personally spent far more
time fixing than it has identified bugs.</blockquote>
<br>
+1<br>
<br>
<blockquote
cite="mid:242e6c07-4cff-f9b3-8664-ceebbae9becc@gmail.com"
type="cite">In particular, if you're fixing a bug, there should be
an a priori assumption that a test needs to be written for that
fix (one that fails before the patch and succeeds after it).<br>
</blockquote>
<br>
That's a different question. We won't be fixing bugs, but writing a
huge amount of new code, so the tradeoffs are completely different.
It's far more important to get this finished and have real people
test it in real-world situations than to write 100% test coverage
from the get-go. If it turns out we ran in the wrong direction,
because we implemented features that people don't need, or the whole
project fails, and we had spent 80% of our time writing tests, the
damage is five times higher.<br>
<br>
We can always add more tests later. For starters, I'd just write
tests that do the same things I would do if I tested by clicking in
the UI.<br>
<br>
<blockquote
cite="mid:242e6c07-4cff-f9b3-8664-ceebbae9becc@gmail.com"
type="cite"> The problem with integration tests is that, as a
developer, I never want to run them, since they're slow.</blockquote>
<br>
Yes. These integration tests that I mentioned would even require
test accounts, so they would be hard to set up.<br>
<br>
What I'd do:<br>
<ul>
<li>Use GitHub pull requests or Phabricator to do reviews.</li>
<li>Once approved, there's a button to "land". This triggers CI
to:</li>
<ul>
<li>merge current master into the branch</li>
<li>run the test suite, both unit and integration tests.</li>
<li>If they pass, the changeset gets merged to master, and the
dev gets emailed.</li>
<li>If anything fails, the dev gets emailed with the error.</li>
</ul>
</ul>
<br>
This way, only commits that pass all tests land on master, the
developer neither has to run the whole test suite nor wait for it,
and the tree is never red.<br>
<br>
<blockquote
cite="mid:242e6c07-4cff-f9b3-8664-ceebbae9becc@gmail.com"
type="cite">Now, that said, I think the fakeservers are a mistake,
in retrospect.</blockquote>
<br>
Glad you agree. IIRC, I implemented some of them (I don't remember
which exactly), and the Socket.js, Auth.js and MIME.js that I
implemented there later became the basis for my IMAP and POP3
client implementations.<br>
<br>
Doing that was fun. Yay, implementing an IMAP server in JS! But it
was fairly obvious that re-implementing all servers with all their
quirks, even faked to some degree, is not a viable long-term
approach.<br>
<br>
<blockquote
cite="mid:242e6c07-4cff-f9b3-8664-ceebbae9becc@gmail.com"
type="cite">I think that establishing tests against a few open
source server implementations with common data setups in Docker
containers is the best way to test these components<br>
</blockquote>
<br>
Yup, that's a nice idea. But nothing I'd want to run on my machines.
I'd want to trigger a "try" build and test server for that. Or just
land and see whether it passes (see above).<br>
<br>
<blockquote
cite="mid:242e6c07-4cff-f9b3-8664-ceebbae9becc@gmail.com"
type="cite">
<blockquote type="cite"
cite="mid:63e1bef2-43c7-eff2-7b74-368d8f34a661@beonex.com">
TB:NG should be pure JS, with the exception of the OS
integration you list below. If some protocol implementation does
not exist yet, but it's possible, e.g. LDAP, we'll write it.<br>
<br>
For those things that absolutely do need native calls, because
of e.g. Kerberos credentials, we might still not come to the IPC
question at all, because there's already a JS wrapper on npm,
e.g. <a class="moz-txt-link-freetext"
href="https://www.npmjs.com/package/kerberos"
moz-do-not-send="true">https://www.npmjs.com/package/kerberos</a><br>
</blockquote>
<br>
The thought I have for things like LDAP and GSSAPI is that there
is very little value supporting those on mobile, so if I can't get
them from an assumed-OS-installed library (e.g. OpenLDAP/WinLDAP),
there is little point in maintaining an implementation ourselves.<br>
</blockquote>
<br>
That's a nice perspective. Most users won't care about LDAP or
Kerberos or S/MIME. OTOH, binary components are a major burden for
build and release, because they have to be shipped for at least 3
platforms, and they should be C++ compiled from source (no
executables in the source repo, please!), which seriously ups the
build dependencies and hinders new devs.<br>
<br>
This gave me the idea that these should be extensions, not shipped
with core. They would be officially developed and maintained by the
TB:NG project, just not as part of core. If we have extension
dependencies, we could even make an "enterprise" meta-package.<br>
<br>
The core should be pure JS, modulo platform APIs.<br>
<br>
<blockquote
cite="mid:242e6c07-4cff-f9b3-8664-ceebbae9becc@gmail.com"
type="cite">
<blockquote type="cite"
cite="mid:63e1bef2-43c7-eff2-7b74-368d8f34a661@beonex.com">
Essentially, there are already so many JS libraries on npm, and
they give so much value, that at this point, I'd make npm
support a requirement for any platform that we'll use. Including
the platform APIs they use, e.g. node.js require("fs") and
sockets etc. That's basically a requirement for all the already
existing code.<br>
</blockquote>
</blockquote>
<br>
<blockquote
cite="mid:242e6c07-4cff-f9b3-8664-ceebbae9becc@gmail.com"
type="cite">
<blockquote type="cite"
cite="mid:63e1bef2-43c7-eff2-7b74-368d8f34a661@beonex.com">
Essentially, I don't want to re-do NSPR in JS. ... Let's try to
stay in sync with the existing JS community, and use the same
platform APIs. That gives basis for collaboration, in both
directions.<br>
</blockquote>
<br>
</blockquote>
<br>
<blockquote
cite="mid:242e6c07-4cff-f9b3-8664-ceebbae9becc@gmail.com"
type="cite">I don't think there is a single "existing JS
community"; the platforms are just too different.<br>
</blockquote>
<br>
When I started prototyping for this project, I took another look at
Electron and npm. I am amazed: it's very simple to use, and nearly
every useful non-GUI JS component is already on npm.<br>
<br>
npm as JS component repository and node.js as runtime are really
emerging as *the* JS platform. Sure, there are others, but they are
marginal in comparison. (I'm of course excluding the whole web
sphere. They have another target and other problems.)<br>
<br>
Give it a try.<br>
<br>
I was uncertain before, puzzled about which module system or runtime
to use. After I tried npm and node.js, I think that's where things
are going to go.<br>
<br>
(If you didn't know: Netflix and PayPal rewrote their servers in
node.js and found that it's faster than Java, both in development
time (PayPal tested this) and execution time (Netflix tested this).
Both companies now run node.js in production.)<br>
<br>
<blockquote
cite="mid:242e6c07-4cff-f9b3-8664-ceebbae9becc@gmail.com"
type="cite"> node's function (err, result) callback API instead of
modern Promises, which is just begging to swallow errors).</blockquote>
<br>
Agreed, function(err, result) is just a bad API, even without
Promises in the picture. I'd hope we could make either a wrapper for
it, or an adapter that converts it to a Promise.<br>
<br>
<br>
<blockquote
cite="mid:242e6c07-4cff-f9b3-8664-ceebbae9becc@gmail.com"
type="cite">
<blockquote type="cite"
cite="mid:63e1bef2-43c7-eff2-7b74-368d8f34a661@beonex.com"> <br>
<blockquote
cite="mid:5a31e3ae-e621-94df-5253-30d986ac674c@gmail.com"
type="cite">
<ul>
<li>Database (I think we need ACID-compliant database, and I
don't think we want to roll our own here. Note too that
"load it all into memory" is not going to satisfy
performance requirements when dealing with large folders).<br>
</li>
</ul>
</blockquote>
<br>
Again, let's cross that bridge when we get there. For starters,
I just need preferences, mbox, and JSON. That gets me most of my
way.<br>
</blockquote>
<br>
Already in Thunderbird, we're well aware that a database API can
be difficult or impossible to change if you get it wrong. And
there's already one respect in which we know it's wrong--it takes
too long to open a database.<br>
</blockquote>
<br>
OK. You are right that we need to face this question at some point.
And I don't know what the right answer is here. You probably know
better than I do.<br>
<br>
My point is just:<br>
* There might not be one single solution for all DB needs in TB:NG.
It may differ from case to case, so we should evaluate specific
cases: where we need DBs, why, and what the requirements are. You
probably have some cases in mind, so why don't you just dump your
brain somewhere?<br>
* Now is probably not the right time to make a project-wide decision
on which DB to use.<br>
<br>
<blockquote
cite="mid:242e6c07-4cff-f9b3-8664-ceebbae9becc@gmail.com"
type="cite"> What I am highly concerned about in your statement is
that you're blithely assuming that something simple is going to
scale up to the end goals you want to scale it up to, and you're
going to bake that design into your API, and you're going to be
faced with a years-long refactoring attempt to try to fix that bad
design.</blockquote>
<br>
I see. That's a very valid concern. I can reassure you there,
because I think the API that I have in my prototype already takes
care of that. First off, my prototype has no local storage at all
and needs to fetch everything from the IMAP server on every load, so
all APIs are already async.<br>
<br>
More importantly, "JS Collections"
<a class="moz-txt-link-freetext" href="https://github.com/benbucksch/jscollection/">https://github.com/benbucksch/jscollection/</a> are going to be one
cornerstone of the API. The list of accounts, of folders, and of
messages are all such collections. These collections are
observable, so there is not even a need for an explicit async API;
it comes for free. The IMAP implementation returns e.g. a
collection for the message list of a given folder. Initially, the
list may be empty, and as the messages are read, the observers are
called and the messages are shown. The UI (specifically,
&lt;fastlist&gt;) uses the same Collection as input and hooks up the
observers automatically. So, the entire application code for async
(!) loading of messages is:
messageList.showCollection(folder.messages);<br>
That's an almost literal quote from the actual prototype code.<br>
<br>
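To make the observable-collection pattern concrete, a minimal sketch (this is an illustration of the idea, not the actual jscollection API): observers are notified as items arrive, so async loading needs no separate code path in the caller.<br>

```javascript
// Minimal observable list: whoever registers an observer sees all
// items, whether they were added before or after registration.
class ObservableList {
  constructor() {
    this._items = [];
    this._observers = [];
  }
  registerObserver(observer) {
    this._observers.push(observer);
    // Let a late observer catch up on items already present.
    for (let item of this._items) {
      observer.added(item);
    }
  }
  add(item) {
    this._items.push(item);
    for (let observer of this._observers) {
      observer.added(item);
    }
  }
}

// A UI list widget would register an observer and render each message
// as the IMAP implementation add()s it, e.g.:
// messages.registerObserver({ added: msg => renderRow(msg) });
```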
A local mail storage is obviously needed even for IMAP, and it would
not change the folder API, just essentially insert a cache at the
implementation level. I'm thinking of some sort of DB or JSON
for the headers, and mbox or maildir for the content.<br>
<br>
This is more detail about APIs and implementation than we should
discuss on this thread, or even at this time. But your question was
valid and important, so I wanted to answer it. I hope this removes
your concern.<br>
<br>
Ben<br>
</body>
</html>