Profile Images Deployment

Sean McArthur smcarthur at mozilla.com
Tue Aug 5 08:38:36 PDT 2014


I can't say how many images. I imagine it will be related to how many sign-ups
we get per second. Perhaps this is something we can roll out in stages, or
perhaps we don't have THAT many sign-ups.

Time to process will depend on the size of the image.

Perhaps we could process locally. Does that mean separate processes on the
same server? It might work at first, if the load starts small. However, image
crunching shouldn't prevent the profile server from serving email/uid etc.
as well. And as it grows, is it easy to scale that up?
On Aug 5, 2014 7:52 AM, "Benson Wong" <bwong at mozilla.com> wrote:

> How many images per second are you processing or anticipating?
>
> How long does it take to process each image?
>
> Can we get away with upload, queue locally, process, upload to S3?
>
> Sent from phone
>
> On Aug 4, 2014, at 3:45 PM, Sean McArthur <smcarthur at mozilla.com> wrote:
>
> Ok, so, the Profile service is going from a dumb proxy of emails to an
> image crunching and serving monster. Let's ready the monster cage!
>
> # Overview
>
> 1. FxA UI will POST some binary image data to an endpoint.
> 2. The web server will generate a unique id, and pipe that payload straight
> into a private S3 bucket, using the id as the object key. Once the upload to
> S3 is complete, the web server will add the id to an SQS queue.
> 3. We will have an SQS queue.
> 4. A separate worker server will keep pestering SQS for new images to crunch.
> 5. When it finds new messages in the queue, it will download the referenced
> images from the private S3 bucket, resize them if necessary, and re-encode
> them to ensure they're plain images.
> 6. The worker will upload the crunched image to a public S3 bucket.
> 7. That image should now be reachable from
> https://firefoxusercontent.domain/a/id.
>
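> Here's a minimal sketch of the web-server half of that flow (steps 1-3),
> assuming the Node aws-sdk; the bucket name and queue URL are placeholders
> rather than real config. A matching sketch of the worker half follows the
> Details section below.
>
> ```js
> // Store the raw payload in the private bucket under a fresh id, then
> // enqueue that id for the workers. Names here are illustrative only.
> var crypto = require('crypto');
> var AWS = require('aws-sdk');
>
> var s3 = new AWS.S3();
> var sqs = new AWS.SQS();
>
> var RAW_BUCKET = 'fxa-profile-raw';              // private "unsafe" bucket
> var QUEUE_URL = process.env.PROFILE_IMAGE_QUEUE; // SQS queue URL
>
> function acceptImage(imageBuffer, callback) {
>   var id = crypto.randomBytes(16).toString('hex');
>
>   s3.putObject({ Bucket: RAW_BUCKET, Key: id, Body: imageBuffer },
>     function (err) {
>       if (err) { return callback(err); }
>       sqs.sendMessage({
>         QueueUrl: QUEUE_URL,
>         MessageBody: JSON.stringify({ id: id })
>       }, function (err) {
>         callback(err, id);
>       });
>     });
> }
> ```
>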
> # Details
>
> - S3 - I'm assuming we'll need two buckets: one private for dumping "unsafe"
> images, and a second public one that can serve image files through a URL.
>
> - SQS - We'll need one queue so that webheads can alert workers when there
> are new images to crunch.
>
> - Servers - two kinds of servers: the web server runs in `bin/server.js`,
> and the worker runs in `bin/images.js`. They don't need direct access to
> each other; they will communicate through SQS (a rough worker sketch
> follows this list).
>
> - Image Proxy - Something in front of the public S3 bucket so the image
> URLs stay sane-ish (see the sketch after this list).
>
> - Domains - The web servers should be accessible from
> profile.account.firefox.com. The images should be served from a different
> second-level domain. It may be that we already have a domain for this,
> since AMO and Marketplace have been serving user images for years.
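>
> The worker half (steps 4-6 in the overview, the `bin/images.js` process)
> could look roughly like this. It assumes the Node aws-sdk plus an image
> library such as `gm` (just an assumption, not a decision), and the
> bucket/queue names are again placeholders:
>
> ```js
> // Worker loop: long-poll SQS, fetch the raw image, resize/re-encode it,
> // publish it to the public bucket, then delete the queue message.
> var AWS = require('aws-sdk');
> var gm = require('gm');
>
> var s3 = new AWS.S3();
> var sqs = new AWS.SQS();
>
> var RAW_BUCKET = 'fxa-profile-raw';
> var PUBLIC_BUCKET = 'fxa-profile-public';
> var QUEUE_URL = process.env.PROFILE_IMAGE_QUEUE;
>
> function poll() {
>   sqs.receiveMessage({
>     QueueUrl: QUEUE_URL,
>     MaxNumberOfMessages: 1,
>     WaitTimeSeconds: 20    // long polling instead of hammering SQS
>   }, function (err, data) {
>     if (err || !data.Messages || !data.Messages.length) { return poll(); }
>     var msg = data.Messages[0];
>     var id = JSON.parse(msg.Body).id;
>
>     s3.getObject({ Bucket: RAW_BUCKET, Key: id }, function (err, obj) {
>       if (err) { return poll(); }
>       // Resize and force a re-encode so only plain image bytes get served.
>       gm(obj.Body).resize(200, 200).toBuffer('PNG', function (err, out) {
>         if (err) { return poll(); }
>         s3.putObject({
>           Bucket: PUBLIC_BUCKET,
>           Key: id,
>           Body: out,
>           ContentType: 'image/png'
>         }, function (err) {
>           if (err) { return poll(); }
>           sqs.deleteMessage({
>             QueueUrl: QUEUE_URL,
>             ReceiptHandle: msg.ReceiptHandle
>           }, poll);
>         });
>       });
>     });
>   });
> }
>
> poll();
> ```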
>
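> As for the image proxy, purely as an illustration of the URL mapping (the
> real thing could just as well be nginx or a CDN in front of the bucket), a
> trivial Node pass-through for /a/:id might look like this; the bucket
> hostname is made up:
>
> ```js
> // Illustration only: map GET /a/:id onto the public bucket's objects.
> var http = require('http');
> var https = require('https');
>
> var PUBLIC_BUCKET_HOST = 'fxa-profile-public.s3.amazonaws.com'; // placeholder
>
> http.createServer(function (req, res) {
>   var match = /^\/a\/([0-9a-f]+)$/.exec(req.url);
>   if (!match) { res.writeHead(404); return res.end(); }
>   https.get({ host: PUBLIC_BUCKET_HOST, path: '/' + match[1] },
>     function (s3res) {
>       res.writeHead(s3res.statusCode, { 'Content-Type': 'image/png' });
>       s3res.pipe(res);
>     });
> }).listen(8080);
> ```
>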
> # ¯\_(ツ)_/¯
>
> - Building this in stage/dev?
> I imagine stage would be built like prod, but dev tends to use
> dannyboxes. I'm not sure we'll have access to S3 and SQS and whatnot, so I
> have been writing a `local` aws-driver, which just uses a child process and
> IPC to crunch images, and an additional route to serve them from (rough
> sketch below).
>
> - Testing this in stage/dev?
> With a dannybox using a `local` driver, testing on dev won't exercise the
> whole stack. I guess that's what testing in stage is for?
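>
> For what it's worth, the parent side of such a `local` driver might look
> something like this; all of the names here are made up for illustration,
> not the actual driver:
>
> ```js
> // Parent (web server) side of a hypothetical `local` driver: fork the
> // image worker and talk to it over Node IPC instead of S3/SQS.
> var fork = require('child_process').fork;
>
> var worker = fork('./bin/images.js');
> var pending = {};
>
> function crunchLocally(id, imageBuffer, callback) {
>   pending[id] = callback;
>   // Buffers don't survive IPC well, so ship the bytes as base64.
>   worker.send({ id: id, image: imageBuffer.toString('base64') });
> }
>
> worker.on('message', function (msg) {
>   // The worker answers { id: ..., ok: true } once the image is crunched
>   // and available on the extra local-serving route.
>   var cb = pending[msg.id];
>   delete pending[msg.id];
>   if (cb) { cb(msg.ok ? null : new Error('image crunch failed')); }
> });
> ```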
>
>