[Go Faster] follow up from our l10n requirement discussion...

Toby Elliott telliott at mozilla.com
Thu Sep 3 18:17:02 UTC 2015

On Sep 3, 2015, at 9:48 AM, Axel Hecht <axel at mozilla.com> wrote:
> The question is not if, but where we want to get to.

Agreed. Part of my goal is to help coalesce some of these questions and figure out what's really important to us. Thanks for these thoughts, they help.

> The databases in pontoon and pootle today carry golden nuggets. And I can't discover them, because I don't have access, and I don't know the schemas.

This is not a problem with the database. If you can't discover or extract data, that's an API problem (and one that merits attention).

This is what I'm getting at: ideally, localizers should not have to know the underlying technical specs. It shouldn't matter, just as it doesn't matter that I don't know the technical details that underlie, say, GitHub. If GitHub decided tomorrow to run everything off a MongoDB cluster, it shouldn't impact me as a user (well, that particular decision might have some visible effects!). Changing all of that out doesn't affect the openness of the data.

It may be that this is a case where the API level is, literally, hg. If the consensus is that it is, so be it. But that's a very strong constraint to put on the system this early in the discussion.

> It's great to make tools great at what they're great at. Restricting us to just one thing is in the way of progress.

Yes and no. This is what I was getting at with my Python/Perl analogy. There are lots of benefits to that approach: programmers get to work in what's comfortable, and they get to pick the perfect tool for each particular job. But there are also downsides. Getting new people on board is harder. Making changes to the underlying system involves a whole lot more work (just as we're seeing here). And if the one person who wrote your Haskell code leaves, you may be in serious trouble.

It may be that letting each localizer set up their own workflow and build their own tools is the ur-goal, and that we're willing to make the sacrifices and expend the additional effort to make that happen. Note that I am not advocating for a one-true-way approach. I believe that (some) other ways should be possible, but they may require extra effort on the part of that user.

>> "Storage database" is the agnostic representation of hg. We can call out hg explicitly if you prefer.
> I find "agnostic representation" to be a bold claim. I can go into any particular detail of hg, and make that a requirement.
> Our version control and the text files within that are the one true data source. It's what we're using to build products.
> They're not a tangential artifact.

One of the points of Go Faster is to question this statement.

Is the technical implementation of our build process an essential piece of information for our localizers? 

Is "we use hg" a fundamental requirement for defining our build process?

Do localizations (along with various dictionaries and other pieces under discussion) need to exist as part of the central codebase? Can they be pulled in during the build process? Do they even need to be part of the build process, or can they be delivered separately?

I don't have the answers here (and, honestly, don't expect to until dtownsend has had the opportunity to dive into the guts of Firefox and figure out our options). 

> We don't have the resources to do a bunch of things, let alone doing things that may or may not work out. 

We should be taking risks and trying things that might fail. Sometimes we'll succeed and sometimes we'll learn something. 

Also, if we don't have the resources to do a bunch of things, why are we supporting so many different localization workflows? Is that really the most important constraint in the system?

