[Go Faster] follow up from our l10n requirement discussion...

Chris Hofmann chofmann at mozilla.com
Fri Sep 4 22:00:00 UTC 2015


On Thu, Sep 3, 2015 at 11:17 AM, Toby Elliott <telliott at mozilla.com> wrote:

>
> On Sep 3, 2015, at 9:48 AM, Axel Hecht <axel at mozilla.com> wrote:
> > The question is not if, but where we want to get to.
>
> Agreed. Part of my goal is to help coalesce some of these questions and
> figure out what's really important to us. Thanks for these thoughts, they
> help.
>
>
> > The databases in pontoon and pootle today carry golden nuggets. And I
> can't discover them, because I don't have access, and I don't know the
> schemas.
>
> This is not a problem with a database. If you can't discover or extract
> data, that's an API problem (and merits attention).
>

We should just agree to disagree on this one then. That's not our
experience. In both systems, the APIs can't keep up with the access to the
underlying data that we want to dig out, and when the schema actually does
need to change, the original developers of the schema disappear, or the
system breaks down with performance and other problems when we try to
access the data in ways it wasn't designed for.


>
> This is what I'm getting at: ideally, localization should not have to know
> the underlying technical specs. It shouldn't matter, just as it doesn't
> matter that I don't know the technical details that underly, say, github.
> If github decides tomorrow that they want to run everything off a MongoDB
> cluster, it shouldn't impact me the user (well, that particular decision
> might have some visible effects!) Changing all that out doesn't affect the
> openness of the data.
>
> It may be that this is a case where the API level is, literally, hg. If
> the consensus is that that's the case, so be it. But that's a very strong
> constraint to put on a system this early on in discussions.
>
>
I think you are mixing requirements for the different sets of people this
system will serve.

Yes, localizers might not want to know or care about where the localization
strings come from or go, but in some cases the most active sets of
localizers do. They work like developers, with hg and text editors, so it
does matter to them if you change version control systems. All the
abstraction just gets in the way of their work.
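
To make that concrete, here is a rough sketch (in Python, for illustration)
of the pull/edit/commit/push cycle an hg-based localizer runs by hand. The
locale repo ("de") and the commit message are examples, not prescriptions:

    # Sketch of the cycle an hg-based localizer runs; "de" is an example.
    import subprocess
    from pathlib import Path

    REPO = "https://hg.mozilla.org/l10n-central/de"
    WORKDIR = Path.home() / "l10n" / "de"

    def run(*args, cwd=None):
        # Echo each command and fail loudly if hg reports an error.
        print("$", " ".join(args))
        subprocess.run(args, cwd=cwd, check=True)

    if not WORKDIR.exists():
        run("hg", "clone", REPO, str(WORKDIR))

    # Pull the latest en-US string landings before translating.
    run("hg", "pull", "-u", cwd=WORKDIR)

    # ... edit the .dtd/.properties files in a plain text editor ...

    # Push the finished translations back upstream.
    run("hg", "commit", "-m", "de: translate new strings", cwd=WORKDIR)
    run("hg", "push", cwd=WORKDIR)

Swap out any of the version control pieces and every one of those steps,
plus whatever local tooling these contributors have built around them, has
to change.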

The same goes for release and localization managers, who are also
responsible for the end-to-end system. It's not only about getting the
strings translated; it's about getting them into builds and out for
testing...


>
> > It's great to make tools great at what they're great at. Restricting us
> to just one thing is in the way of progress.
>
> Yes and no. This is what I was getting at with my python/perl analogy.
> There are lots of benefits to that - programmers get to work in what's
> comfortable, and get to decide what's the perfect tool for a particular
> job. But, there are also downsides. Getting new people on board is harder.
> Making changes to the underlying system involves a whole lot more work
> (just as we're seeing here). If your one guy who wrote Haskell leaves, you
> may be in serious trouble.
>

Most of the new people coming into the project are coming from the Linux
world. They are used to, and work just fine with, Pootle and the
capabilities we have there, and we have contracts with the Pootle
developers to keep things operationally sound. That group makes up about a
third or more of our contributors. Another third are on hg and text
editors, which doesn't require much direct work from us to help them be
more efficient.


>
> It may be that letting each localizer set up their own workflow and build
> their own tools is the ur-goal and that we're willing to make the
> sacrifices and expend the additional effort to make that happen. Note that
> I am not advocating for one-true-way. I believe that (some) other ways
> should be possible, but may require extra effort on the part of that user.
>
>
It's really not each localizer setting up their own workflow. They have a
small set of basic choices to choose from, and we experiment with new
choices from time to time. We also shut down obsolete choices, as we have
done with Narro, Transifex, and others. So we are managing this...


>
> >> "Storage database" is the agnostic representation of hg. We can call
> out hg explicitly if you prefer.
> > I find "agnostic representation" to be a bold claim. I can go into any
> particular detail of hg, and make that a requirement.
> >
> > Our version control and the text files within that are the one true data
> source. It's what we're using to build products.
> >
> > They're not a tangential artifact.
>
> One of the points of go-faster is to question this statement.
>
> Is the technical implementation of our build process an essential piece of
> information for our localizers?
>

Getting builds out of the system so localizers can view the context of
newly landed features and en-US strings is essential, and so is getting
more builds out of the system to show the results of their translation
work. Minimizing the turnaround time for putting all these things together
is essential too. If we are planning a system add-on and update system
where localizers and testers are basically left to roll their own,
assembling a bunch of pieces from here and there, we won't be meeting the
needs of trying to go faster. Localizers and testers do just fine if we
deliver full builds on a regular schedule with all the things they need to
look at. When those builds break, or when lots of assembly is required,
things start to come to a standstill. What is it that you are suggesting we
do to the build system? Will it stand up to the needs I just mentioned?


>
> Is "we use hg" a fundamental requirement for defining our build process?
>
>
Not sure where this is coming from or where it's going.

Yes is the answer in the short term. If thousands of hours are invested,
then other options might become possible.



> Do localizations (along with various dictionaries and other pieces under
> discussion) need to exist as part of the central codebase? Can they be
> pulled in during the build process? Do they even need to be part of the
> build process, or can they be delivered separately?
>
>
Maybe they could. But the question gets back to how efficient that would be
toward getting a globally released and distributed product out. That's the
main goal: reaching the bulk of our user base as fast and as effectively as
possible.

You also introduce lots of complexity by shifting to some kind of
asynchronous build and distribution system for a single product (the
browser). With complexity come errors and extra decision making, and all of
that slows us down.
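
For what it's worth, here is a minimal sketch of what "pulled in during the
build" could look like, just to make the added moving parts visible. The
bundle service URL and JSON layout are hypothetical; no such service exists
today:

    # Hypothetical sketch: fetch per-locale string bundles at build time
    # instead of shipping them in the central tree. The service URL and
    # JSON layout are invented for illustration only.
    import json
    import urllib.request
    from pathlib import Path

    L10N_SERVICE = "https://l10n-bundles.example.org"  # hypothetical
    LOCALES = ["de", "fr", "ja"]  # in reality, read from shipped-locales
    OUTDIR = Path("dist/l10n")

    for locale in LOCALES:
        url = f"{L10N_SERVICE}/{locale}/latest.json"
        # Every network fetch here is a new way for a release build to
        # break, which is exactly the turnaround-time risk noted above.
        with urllib.request.urlopen(url) as resp:
            bundle = json.load(resp)
        dest = OUTDIR / locale / "strings.json"
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(json.dumps(bundle, ensure_ascii=False, indent=2))
        print(f"fetched {locale}: {len(bundle)} strings")

Every fetch, retry, and cache in a pipeline like that is a new failure mode
and a new decision point.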



> I don't have the answers here (and, honestly, don't expect to until
> dtownsend has had the opportunity to dive into the guts of Firefox and
> figure out our options).
>
>
So what happens next? Just waiting for dtownsend?


> > We don't have the resources to do a bunch of things, let alone doing
> things that may or may not work out.
>
> We should be taking risks and trying things that might fail. Sometimes
> we'll succeed and sometimes we'll learn something.
>
> Also, if we don't have the resources to do a bunch of things, why are we
> supporting a lot of different localization workflows? Is that the most
> important constraint in the system?
>

As mentioned above, we don't support "a lot" of localization workflows. We
support a few that work for different localizers with different levels of
technical skill and background. For those with advanced skills, there are
hg and text editors (and not much work involved for us there). For those
who want or need a web interface, and who might be coming from the Linux
localization world, Pootle gets the job done. Beyond that, we keep data
formats open and interchangeable for the variety of other pet tools that
translators are used to and can work with, but we offer no support there.

Once in a while we invest in some experimentation on new systems that have
promise. If it doesn't work out, and the developers lose interest, go away,
or can't support us, as happened with Narro and Transifex, then we shut
those experiments down. That's how we see a way to experiment and innovate
with tools and try things out. That's the camp we see your work on Pontoon
in. It has some promise, but it has a lot of work ahead to become a tool
good enough to cover the host of requirements from wildly different kinds
of localization contributors, or maybe even just a minority of localizers.
I think you are starting to get this picture, and I think you are starting
to understand the importance of making sure that whatever you are working
on doesn't end up in the critical path for getting localization work done
on system add-ons.

It was great to have this requirement added to the PRD
[https://docs.google.com/document/d/1H0be9rVfK-molaEa3KPvPLO4hS6iSKs-eHy2HGQwMNY/edit]:


   - System add-ons will be localized as per normal processes, as much as
     possible. Localization dashboards and tools will need to be updated to
     understand the location and cadence of system add-on strings.
     String-changing updates will need to be added to a schedule, and
     localizers will need to be notified.


-chofmann


>
> Toby
>