Google and OAuth 2.0

Andrew Sutherland asutherland at
Fri Apr 25 19:31:29 UTC 2014

On 04/25/2014 02:47 PM, Joshua Cranmer 🐧 wrote:
> I don't disagree that external authorization mechanisms are 
> necessarily a bad thing. However, I think that OAuth fails to be an 
> effective mechanism:
> 1. The mechanism is trivial to internalize: you need to be able to 
> control an HTML form and http[s] calls manually, which isn't a 
> terribly hard task for many applications [e.g., Thunderbird's current 
> OAuth 2 accesses do this]. Once you internalize the authorization, you 
> still get the username and password and effectively complete access.

You receive a scoped-to-email access credential that is different from 
the user's normal site-wide Google login credentials. Compromise of the 
token is a pretty big deal given the importance of email, but it's less 
bad than compromise of the entire account.  But if it is compromised:

1) Google has a better chance of detecting the compromise and only 
cutting off that specific token, rather than trying to lock out the 
entire account.  (Noting that they can already do this with 
application-specific passwords.  Those are just more dangerous, and 
require even more manual intervention on the part of the user.)

2) Compromise of this token does not require the user to change their 
password; it just requires them to revoke the access credential and have 
Thunderbird request a new token.
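The revoke-and-reissue flow described here is just the standard OAuth 2.0 refresh grant (RFC 6749, section 6): the client POSTs a refresh request to the provider's token endpoint and gets a fresh access token back. A minimal sketch of building that request body — the client id/secret and token values are placeholders, and the actual token endpoint would come from the provider's documentation or discovery document:

```python
from urllib.parse import urlencode

def build_refresh_request(client_id, client_secret, refresh_token):
    # Standard OAuth 2.0 refresh-token grant parameters (RFC 6749 sec. 6).
    # This only builds the form-encoded POST body; the caller would send it
    # to the provider's token endpoint over https.
    return urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "refresh_token": refresh_token,
        "grant_type": "refresh_token",
    })

# Placeholder credentials, for illustration only.
body = build_refresh_request("example-client", "example-secret",
                             "example-refresh-token")
print(body)
```

Note that only the scoped access/refresh tokens ever live on the client; the user's real password is never stored, which is the whole point of the argument above.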

> 2. Critical authorization tokens are passed in plaintext on URIs. So 
> if you can sniff this URI somehow, even without full control over the 
> authorization, you can still do large worlds of hurt. And URIs are 
> generally handled far more carelessly than, say, POST data.

The URIs travel over https.  If you can sniff the URI over the network 
then you can sniff everything else.

If you're talking about Thunderbird being careless with URIs, then that 
sounds like a Thunderbird problem, not an OAuth problem.

> 3. There's no standardized way to access the authorization mechanism. 
> In GSSAPI, to request an IMAP access token, you load a platform 
> library [typically], and you call an RFC-standardized function with 
> RFC-standardized parameters that indicate that you're looking for an 
> IMAP access token. In contrast, the SASL OAUTH step just has the 
> client access with a Bearer token it got from... who knows? The 
> server, pages, request parameters, etc., are never hinted at in the 
> authentication step, so the only way you can know these values are to 
> manually hard code them for every site.

Google exposes their OpenID Connect discovery data at a well-known 
openid-configuration endpoint, as documented on their developer site.

The openid-configuration path is registered in the IANA Well-Known URIs 
registry.
Now, there is the big question of how to generically end up at the right 
domain; that documentation says you should hardcode it.  I agree that 
that hard-coding is not great.  It would be better if the file were also 
accessible at a well-known path on the mail domain itself (currently a 
404).  There is precedent from the webfinger experiments from a few 
years back, so it seems possible to request that such a file 
be added.
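To make the discovery step concrete, here is a sketch of building the IANA-registered well-known URL for a domain and reading the kind of JSON document a provider serves there. The domain and the document contents below are illustrative placeholders; the field names come from the OpenID Connect Discovery spec:

```python
import json

def discovery_url(domain):
    # The openid-configuration path is registered in the IANA
    # Well-Known URIs registry, so only the domain varies.
    return "https://%s/.well-known/openid-configuration" % domain

# A trimmed, illustrative example of the JSON such an endpoint returns
# (real documents carry many more fields, e.g. supported scopes and
# signing algorithms).
sample = json.loads("""{
  "issuer": "https://accounts.example.com",
  "authorization_endpoint": "https://accounts.example.com/o/oauth2/auth",
  "token_endpoint": "https://accounts.example.com/o/oauth2/token"
}""")

print(discovery_url("accounts.example.com"))
print(sample["token_endpoint"])
```

If mail domains served such a file themselves, a client could go from an email address to the right authorization and token endpoints with no hardcoding at all.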

There is also the question of how we know what scope to use.  Google's 
documentation tells us which scope to request for mail access.  This is 
not an inherently obvious transformation for an automated mechanism.  
Ideally this could also be published somewhere under the well-known URI 
space.
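For concreteness on the SASL step Joshua mentions: once the client has obtained a bearer token for the right scope, the XOAUTH2 initial client response Google documents for IMAP is just a fixed encoding of the user address and token. A sketch, with placeholder address and token:

```python
import base64

def xoauth2_initial_response(user, access_token):
    # SASL XOAUTH2 initial client response as Google documents it:
    # "user=<addr>\x01auth=Bearer <token>\x01\x01", base64-encoded,
    # sent as the argument to "AUTHENTICATE XOAUTH2".
    raw = "user=%s\x01auth=Bearer %s\x01\x01" % (user, access_token)
    return base64.b64encode(raw.encode("ascii")).decode("ascii")

# Placeholder values for illustration only.
resp = xoauth2_initial_response("someone@example.com", "ya29.example-token")
print(resp)
```

The encoding itself is trivial; Joshua's point stands that nothing in this step tells the client where the token came from, which is exactly what discovery would fix.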

So I would say:

1) It makes sense to work with Google to avoid anyone having to hardcode 
Google-specific paths in their app.  They already publish SRV records 
for some of their services, so they are clearly trying to do this.

2) Gmail is already a special case inside Thunderbird in many places, 
OAuth 2 is not a Google-specific invention, and Thunderbird already has 
an ISP database for this exact purpose (recently extended to provide 
login URLs!).


More information about the tb-planning mailing list