Profile Images Deployment

Chris Kolosiwsky ckolos at mozilla.com
Fri Aug 8 07:16:24 PDT 2014


Ed:

That makes two of us...

C


_-=^=-_

Chris Kolosiwsky


----- Original Message -----
From: "Edwin Wong" <edwong at mozilla.com>
To: "Chris Karlof" <ckarlof at mozilla.com>
Cc: "Sean McArthur" <smcarthur at mozilla.com>, "Chris Kolosiwsky" <ckolos at mozilla.com>, "dcoates at mozilla.com Coates" <dcoates at mozilla.com>, dev-fxacct at mozilla.org, "Benson Wong" <bwong at mozilla.com>
Sent: Thursday, August 7, 2014 10:29:45 PM
Subject: Re: Profile Images Deployment

I’d love to know when we’re planning on going ‘Live in Prod’ with this feature. Karl is taking the profile server testing and Peter the content-server UI side.

-edwin



On Aug 7, 2014, at 7:10 PM, Chris Karlof <ckarlof at mozilla.com> wrote:

> Thanks Sean! 
> 
> Has this plan been broken down into tracking issues on the dev and ops side? I currently don't have a lot of visibility into how this is progressing, timetables, potential blockers, concerns, etc.
> 
> The front-end is just about ready at this point.
> 
> If there is anything I can do to help out, please let me know.
> 
> One question: how does a client/user learn when her profile image is ready to use?
> 
> -chris
> 
> 
> On Aug 4, 2014, at 3:45 PM, Sean McArthur wrote:
> 
>> Ok, so, the Profile service is going from a dumb proxy of emails to an image crunching and serving monster. Let's ready the monster cage!
>> 
>> # Overview
>> 
>> 1. FxA UI will POST some binary image data to an endpoint.
>> 2. The webserver will generate a unique id and pipe that payload straight into a private S3 bucket under that id. Once the upload to S3 is complete, the webserver will add the id to an SQS instance (there's a rough sketch of this step right after this list).
>> 3. We will have an SQS instance.
>> 4. A different worker server will keep pestering SQS for new images to crunch.
>> 5. When it finds new messages in the queue, it will download the referenced images from the private S3 bucket, resize them if necessary, and re-encode them to ensure they're plain images.
>> 6. The worker will upload the crunched image to a public S3 bucket.
>> 7. That image should now be reachable from https://firefoxusercontent.domain/a/id.
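>>
>> To make the webserver half concrete, here's a minimal sketch of steps 1-2, assuming an Express app and the `aws-sdk` module; the bucket name, queue URL, route, and port are placeholders, and the real code in `bin/server.js` pipes the payload rather than buffering it:
>>
>>     var express = require('express');
>>     var crypto = require('crypto');
>>     var AWS = require('aws-sdk');
>>
>>     // region/credentials are assumed to come from the environment
>>     var s3 = new AWS.S3();
>>     var sqs = new AWS.SQS();
>>
>>     var INCOMING_BUCKET = 'fxa-profile-images-incoming'; // private "unsafe" bucket (name made up)
>>     var QUEUE_URL = process.env.PROFILE_IMAGE_QUEUE_URL; // SQS queue the worker listens on
>>
>>     var app = express();
>>
>>     // Step 1: the FxA UI POSTs raw image bytes to this endpoint.
>>     app.post('/v1/avatar/upload', function (req, res) {
>>       var chunks = [];
>>       req.on('data', function (c) { chunks.push(c); });
>>       req.on('end', function () {
>>         // Step 2: generate a unique id and store the raw payload under it...
>>         var id = crypto.randomBytes(16).toString('hex');
>>         s3.putObject({
>>           Bucket: INCOMING_BUCKET,
>>           Key: id,
>>           Body: Buffer.concat(chunks)
>>         }, function (err) {
>>           if (err) { return res.status(503).json({ error: 'upload failed' }); }
>>           // ...then tell the worker there's a new image to crunch.
>>           sqs.sendMessage({
>>             QueueUrl: QUEUE_URL,
>>             MessageBody: JSON.stringify({ id: id })
>>           }, function (err) {
>>             if (err) { return res.status(503).json({ error: 'queue failed' }); }
>>             res.json({ id: id });
>>           });
>>         });
>>       });
>>     });
>>
>>     app.listen(1111);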
>> 
>> # Details
>> 
>> - S3 - I'm assuming we'll need 2 buckets: 1 private for dumping "unsafe" images, and a 2nd public one that can serve image files through a URL.
>> 
>> - SQS - Will need 1 so that webheads can alert workers when there are new images to crunch.
>> 
>> - Servers - 2 kinds of servers: the web server runs in `bin/server.js`, and the worker runs in `bin/images.js`. They don't need direct access to each other; they will communicate through SQS. (A rough worker loop is sketched just after this list.)
>> 
>> - Image Proxy - The thing to make the public S3 bucket have saneish URLs.
>> 
>> - Domains - The web servers should be accessible from profile.account.firefox.com. The images should be served from a different second-level domain. It may be that we already have a domain for this, since AMO and Marketplace have been serving user images for years.
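>>
>> And the worker side (steps 4-6), again as a rough sketch with placeholder names, assuming GraphicsMagick via the `gm` module for the resize/re-encode pass (the real logic lives in `bin/images.js`):
>>
>>     var AWS = require('aws-sdk');
>>     var gm = require('gm'); // library choice here is an assumption, not gospel
>>
>>     var s3 = new AWS.S3();
>>     var sqs = new AWS.SQS();
>>
>>     var INCOMING_BUCKET = 'fxa-profile-images-incoming'; // private, "unsafe" uploads
>>     var PUBLIC_BUCKET = 'fxa-profile-images-public';     // what the image proxy fronts
>>     var QUEUE_URL = process.env.PROFILE_IMAGE_QUEUE_URL;
>>
>>     function poll() {
>>       // long-poll SQS instead of hammering it
>>       sqs.receiveMessage({
>>         QueueUrl: QUEUE_URL,
>>         MaxNumberOfMessages: 1,
>>         WaitTimeSeconds: 20
>>       }, function (err, data) {
>>         if (err || !data.Messages || !data.Messages.length) { return poll(); }
>>         var msg = data.Messages[0];
>>         var id = JSON.parse(msg.Body).id;
>>
>>         // pull the raw upload out of the private bucket
>>         s3.getObject({ Bucket: INCOMING_BUCKET, Key: id }, function (err, obj) {
>>           if (err) { return poll(); }
>>
>>           // resize if necessary and re-encode so the output is a plain PNG
>>           gm(obj.Body).resize(200, 200).toBuffer('PNG', function (err, crunched) {
>>             if (err) { return poll(); }
>>
>>             // publish the crunched image where the proxy can serve it
>>             s3.putObject({
>>               Bucket: PUBLIC_BUCKET,
>>               Key: 'a/' + id,
>>               Body: crunched,
>>               ContentType: 'image/png'
>>             }, function (err) {
>>               if (err) { return poll(); }
>>               sqs.deleteMessage({
>>                 QueueUrl: QUEUE_URL,
>>                 ReceiptHandle: msg.ReceiptHandle
>>               }, function () { poll(); });
>>             });
>>           });
>>         });
>>       });
>>     }
>>
>>     poll();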
>> 
>> # ¯\_(ツ)_/¯
>> 
>> - Building this in stage/dev?
>> I imagine stage would be built like prod, but dev tends to use dannyboxes. I'm not sure we'll have access to S3 and SQS and whatnot, so I have been writing a `local` aws-driver, which just uses a child process and IPC to crunch images, plus an additional route to serve them from (there's a rough sketch of that driver at the end of this message).
>> 
>> - Testing this in stage/dev?
>> With a dannybox using a `local` driver, testing on dev won't exercise the whole stack. I guess that's what testing in stage is for?
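>>
>> For what it's worth, the `local` driver mentioned above is shaped roughly like this (a sketch, not the actual driver): the web process forks a long-lived crunching child over Node's IPC channel instead of touching S3/SQS, and `bin/server.js` gets an extra route that serves the crunched files off disk.
>>
>>     // local-driver.js -- stands in for S3 + SQS on a dannybox
>>     var fork = require('child_process').fork;
>>     var path = require('path');
>>
>>     var worker = fork(path.join(__dirname, 'local-worker.js'));
>>     var pending = {};
>>
>>     // "upload": no S3, just hand the id and the temp file path to the child
>>     exports.upload = function (id, tmpFile, callback) {
>>       pending[id] = callback;
>>       worker.send({ id: id, file: tmpFile });
>>     };
>>
>>     // the child reports back over IPC when the image has been crunched
>>     worker.on('message', function (msg) {
>>       var cb = pending[msg.id];
>>       delete pending[msg.id];
>>       if (cb) { cb(msg.err, msg.out); }
>>     });
>>
>>     // local-worker.js -- crunches images in-process instead of via the queue
>>     var gm = require('gm');
>>
>>     process.on('message', function (msg) {
>>       var out = msg.file + '.crunched.png';
>>       gm(msg.file).resize(200, 200).write(out, function (err) {
>>         process.send({ id: msg.id, err: err && String(err), out: out });
>>       });
>>     });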
> 



