ES4 Security
Steven Mascaro
subs at voracity.org
Sun May 18 07:50:47 PDT 2008
On Sun, May 18, 2008 at 7:54 PM, Brendan Eich <brendan at mozilla.org> wrote:
> I think you kept it too short. :-/
I've been accused of being verbose, so I tried to keep my *opening*
statement concise. It was meant as an invitation to discussion, not a
final statement.
> "Cross-site" exploits happen on the server side too. They're possible
> in proxies and other kinds of gateways. They arise when data
> originating from different trust domains mix in ways that lose track
> of each datum's trust label (ignore policy questions, including the
> folly of putting the user in the loop). The mixing involves control
> flow, so the problem is not solvable by data-labeling alone.
I'm confused. Aren't these man-in-the-middle attacks? Yes, there are
three parties, but the structure and the usual solutions (encryption,
signatures, hash checks) are different.
>> The solution for browsers is simple: do not *automatically*
>> transmit private information (usually cookies) to 3rd parties in a
>> transaction.
>
> This is so vague it's hard to respond to usefully, but notice that
> (in Parkerian Hexad terms) you're talking about Confidentiality here.
>
> The parenthetical non-vague bit about cookies does not help, because
> cookies are only one asset threatened by XSS. Browsers have cookie
> controls already, we're working on improvements to them and their
> defaults, but XSS goes way beyond cookies.
Perhaps, but they make for a particularly clear example of the XSS problem.
For example, suppose that it were possible to retrieve the text of any
<script src="..."></script> element using '.textContent' from
javascript, regardless of origin. You'll agree that this is
unthinkable today. But I assume you'll also agree that there is no
security problem in doing this if no cookies (or other private data)
are sent in the initial request to retrieve the script page?
The same issues affect XMLHttpRequest. The solution adopted by 'AJAX'
developers is to ask their own server for the page, which is
equivalent to asking for the page without cookies. The recently
suggested cross-site XMLHttpRequest extensions still do not solve the
problem completely (the original page sends cookies to the 3rd party
server, which may not be what either the original page or the user
wants).
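To make the "ask your own server" workaround concrete, here's a sketch of the credential-stripping step such a proxy must do (the function and header set are my own illustration, not any particular framework's API):

```javascript
// Sketch: a same-origin proxy forwards a request to a third-party URL,
// but deliberately drops the credentials the browser attached, so the
// third party sees the equivalent of a cookie-less request.
function stripCredentials(clientHeaders) {
  const forwarded = {};
  for (const [name, value] of Object.entries(clientHeaders)) {
    const lower = name.toLowerCase();
    // Never forward the user's cookies or auth to the third party.
    if (lower === 'cookie' || lower === 'authorization') continue;
    forwarded[name] = value;
  }
  return forwarded;
}

const incoming = {
  'Cookie': 'session=abc123',
  'Authorization': 'Basic dXNlcjpwdw==',
  'Accept': 'application/json',
};
console.log(stripCredentials(incoming)); // only Accept survives
```

The real proxy would then fetch the target URL with these cleaned headers and relay the body back to its own origin.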
I'd imagine a solution for cookies would look something like the following:
1) By default, don't send cookies across domains.
2) If a page requests that cookies be sent for an embedded request
(XMLHttpRequest, <{script,img,iframe} src="" />, <link href="">, etc.),
the browser should send them to the 3rd party via an alternate header
(e.g. instead of 'Cookie:', use 'Untrusted-Cookie:') and specify the
source (which is normally in the referer anyway).
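As a sketch of rule 2 (header names and the source field are my invention for illustration, not an existing standard), the browser-side rewrite would be something like:

```javascript
// Sketch of the proposed policy: for a cross-domain subrequest, rename
// 'Cookie' to 'Untrusted-Cookie' and record the requesting origin, so
// the 3rd-party server knows these credentials arrived cross-site.
function rewriteCrossDomainHeaders(headers, requestingOrigin) {
  const out = { ...headers };
  if ('Cookie' in out) {
    out['Untrusted-Cookie'] = out['Cookie'];
    delete out['Cookie'];
    // The source is normally visible in the Referer anyway.
    out['Untrusted-Cookie-Source'] = requestingOrigin;
  }
  return out;
}

console.log(rewriteCrossDomainHeaders(
  { 'Cookie': 'id=42', 'Accept': '*/*' },
  'https://example.org'
));
```

A server that doesn't understand 'Untrusted-Cookie:' would simply see an unauthenticated request, which is the safe default.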
If there are non-cookie examples of XSS, please point me to them
(setting aside equivalents like DOM storage, and also the :visited
example below).
>> Once this problem is solved, ES4 *does* *not* need RO/DD/IH for
>> security. (IH=information hiding.)
>
> Now you've changed the subject to Integrity.
No, I am talking about Integrity, Confidentiality and more (like
Authenticity). Which is why I used the "fluffy" term security. If you
can snoop data, you can breach Confidentiality. If you can rewrite
data, you can breach Integrity. If you can auto submit data with
cookie credentials, you can breach Authenticity. From what I can tell,
you can prevent all three by having either lax cookie policies and a
restricted language, or restricted cookie policies and a lax language.
I obviously prefer the latter.
>> Note, this post is *only* about security (and privacy).
>
> We are not out to solve "security" or any such fluffy problem-name in
> ES4. Anyone claiming to deliver "security" solely by means of a
> "secure programming language" is selling you a bridge. See http://
> lambda-the-ultimate.org/node/2773 for a recent LtU thread (and cue
> David Teller and Mark Miller ;-).
I fully agree that 'secure programming languages' are not enough. I'd
go further, though, and suggest that 'security' and 'programming
languages' should be treated separately. And I apologise for using
fluffy terms, but if there were better terms available, I would use
them.
>> It is not about whether RO/DD/IH can make development/maintenance
>> easier.
>
> The main issue in ES4 is not "development/maintenance", it's being
> able to make sane type judgments (static or dynamic, doesn't matter),
> at all.
So making application development and maintenance as simple as
possible is *not* your Number 1 Priority? Who, in that case, are you
developing ES4 for? Theoreticians?
> A secondary issue is Integrity as an information security property.
> Integrity alone doesn't really "solve" whole problems on the real-
> world level of "prevent entire class of security exploit (XSS)" or
> "make a usable and 'safe' yet powerful browser-based programming
> language". But Integrity is an end-to-end property that you can build
> proofs and helpful automations on top of, and without it you're too
> often pretty much screwed.
>
> I implemented a dynamic data tainting security model (opt-in) for
> Netscape 3. Helpful testers actually beat on it, hard. Its goal of
> Confidentiality was frustrated by (a) lack of static analysis to help
> "untaint the pc" (tainting the pc is required to deal with Denning's
> "implicit flows" -- see above about data labeling being
> insufficient); (b) lack of Integrity to make such analysis practical,
> if not feasible in the first place.
After reading up on implicit flows, I still have no idea what
'tainting the pc' means. In any event, it would seem to me that
data-tainting is impossible without integrity. And I'm arguing that
tight cookie policies would ensure integrity.
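Having read up on it, my understanding of the implicit-flow problem is this (a sketch of my own, not Brendan's Netscape 3 implementation):

```javascript
// Denning's "implicit flow": no tainted value is ever copied, yet the
// secret leaks through control flow. A tracker that labels only data --
// and not the program counter inside a branch on secret data -- misses
// this, which is presumably what "tainting the pc" addresses.
function leakViaControlFlow(secretBit) {
  let publicOut = 0;          // untainted value
  if (secretBit === 1) {      // branch condition is secret...
    publicOut = 1;            // ...so this assignment depends on it,
  }                           // even though it assigns a constant
  return publicOut;           // equals the secret, but carries no taint
}

console.log(leakViaControlFlow(1)); // 1
console.log(leakViaControlFlow(0)); // 0
```

Pure data-labeling sees only the constants 0 and 1 being assigned; only by labeling the control path itself can the dependence on `secretBit` be tracked.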
>
> Ignoring ES4, browsers have struggled with mutable (and shadowable,
> for initialisers per ES3!) Object and Array bindings, and mutability
> in general. Check out the open source bug databases at
> bugzilla.mozilla.org and webkit.org (both, one can cite bugs in
> either; and closed-source browsers' bug databases, if they're worth
> anything, should have similar bugs). Just one example:
>
> https://bugzilla.mozilla.org/show_bug.cgi?id=376957
>
> Jesse Ruderman's comment 0 is worth reading in full. See how browsers
> have to engineer defense-in-depth (so should everyone; I'm not
> whining here).
That actually provides an excellent example of what I'm talking about.
Jesse talks about a script that serves up data using JSON at some URL
(e.g. https://mail.victimsite.com/address-book.json). A 3rd party site
can then use <script
src="https://mail.victimsite.com/address-book.json"></script> to snoop
on that data using prior redefinitions of Object/Array, prototype
getters/setters, etc. (In most cases, this can be avoided anyway by
victimsite.com checking the referer.)
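The snooping mechanism itself looks something like the following sketch (the data and property name are made up; note that old engines also fired such hooks when *evaluating array and object literals* from a cross-site <script src>, which is what made raw JSON responses readable -- modern engines have closed that particular hole, so this sketch shows the hook firing on plain assignment instead):

```javascript
// An attacker-defined setter on Object.prototype observes property
// writes it doesn't own -- the core of the JSON-snooping trick.
const captured = [];
Object.defineProperty(Object.prototype, 'email', {
  set(value) { captured.push(value); },  // attacker's hook
  configurable: true,
});

const record = {};
record.email = 'alice@victimsite.com';   // write goes through the hook
console.log(captured); // [ 'alice@victimsite.com' ]

delete Object.prototype.email;           // clean up the prototype
```

The point stands either way: the hook only yields something valuable if the fetched response was produced *with the victim's credentials*.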
But if no cookies are sent, there is no problem. For example, suppose
evilsite.com did the following instead:
<script
src="https://evilsite.com/graburl.cgi?loc=https://mail.victimsite.com/address-book.json"></script>
evilsite.com can do that today, and will forever be able to do that.
There is no problem, though, because the 'graburl.cgi' script can't
send the user's cookies to victimsite.com. I don't understand why
there is any confusion about this.
>
> In the real world this means considering the likelihood of web app
> developers failing to authenticate carefully, or configure and check
> MIME types. And of course, even if web app devs were perfect, there'd
> still be browser bugs, mashup novelties, wireless network IP
> addressing and DNS threats, etc.
You can't solve wireless, DNS or other network-level attacks in the
browser. Browser bugs are obviously a problem, but won't be any more
or less harmful with what I'm proposing.
In fact, if you really do like the defense-in-depth idea (and I'm
loath to suggest this), you could implement the cookie restrictions
I've argued for PLUS the language restrictions.
> Here's another "privacy" issue, not solved in any browser I know of,
> dictated by web standards and user expectations:
>
> https://bugzilla.mozilla.org/show_bug.cgi?id=147777
>
> In the bug, David Baron even explores "cross-copying" (padding
> execution times along alternate paths to close timing channels). This
> is still a research problem.
>
> And what's made this bug less severe, besides its ubiquity in all
> popular browsers, is the fact that remote tracking of browser
> "visited" history is unfortunately "easy" using a number of
> techniques, ignoring CSS (Cascading Style Sheets, not XSS).
>
> Still, I would like to fix bug 147777. I have some ideas, which
> involve hybrid information flow propagating through CSS layout, but
> they're vague and challenging -- researchy, in a word.
Yes, that truly is a difficult privacy bug to fix. I use a similar
trick to discover what fonts a user has installed on their system. The
new 'offline' flag could probably be used in questionable ways. There
are also sites like clicktale.com. None of them can be fixed without
complicated changes to javascript and probably other parts of the
browser. (Though the :visited case is the only one of real concern
from this particular list.)
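For what it's worth, the font trick boils down to a width comparison. Here's a sketch of the logic (in a browser you'd measure real DOM elements; here `measure` is a stand-in callback, and the widths are fake, so the idea can be shown without a DOM):

```javascript
// Render a test string in 'CandidateFont, monospace' and compare its
// width against plain monospace: if the widths differ, the candidate
// font must exist, because the fallback was not used.
function isFontInstalled(fontName, measure) {
  const fallbackWidth = measure('monospace');
  const candidateWidth = measure(`${fontName}, monospace`);
  return candidateWidth !== fallbackWidth;
}

// Fake measurements for illustration: pretend 'Comic Sans MS' exists
// and renders narrower than the monospace fallback.
const fakeWidths = { 'monospace': 600, 'Comic Sans MS, monospace': 540 };
const measure = (fontList) => fakeWidths[fontList] ?? fakeWidths['monospace'];

console.log(isFontInstalled('Comic Sans MS', measure)); // true
console.log(isFontInstalled('NoSuchFont', measure));    // false
```

Like the :visited case, the leak is through rendering, not through any scriptable data channel, which is why cookie policies alone can't touch it.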
To be honest, I've never found cross-domain :visited indicators very
useful, so I'd be happy to turn them off, but perhaps others might
want them.
> "Security" is a big topic, not something to simplify with too few
> words. You cannot reduce a difficult argument to a short-hand formula
> that "proves" ES4 should let Object or Array be overridden (or not).
I believe you can't 'prove' much that's interesting, anyway... (again,
that's why I don't believe in the utility of code verification).
> Yet it seems to me you've made such a reduction: "Confidentiality,
> not Integrity, is the property to enforce against XSS threats;
> therefore Integrity does not matter for 'security'" (if I may
> paraphrase your brief message with too few -- even fewer -- words
> myself).
I hope I've made it clear that's not what I mean.
> Kris Zyp tried this kind of stunt-argument with the "AOP" would-be
> root password on this list, and I hope I put a stake through the
> heart of that three-letter-acronym vampire. (I recently asked Mitch
> Wand about AOP, which he'd spoken about at ICFP 2003 and researched a
> bit, in part to tease him for touching such an overhyped topic; he
> replied wittily: "sometimes someone coughs and you start sneezing." ;-)
>
In the message I found, you mention logging (and other post-hoc
instrumentation) as a potential use for AOP. Being from a simulation
background, I'd find statistics a compelling use. Being a programmer,
I'd find debugging a compelling use. Being a hacker (not cracker!),
I'd find modifying other people's code another compelling use.
(Not that I'm a fan of buzzwords -- or jargon, for that matter.)
> At the level of "es4-discuss", we can keep arguing and try to come to
> a deeper understanding; I'm game. But the idea that Confidentiality
> is enough, Integrity be damned, requires more than what you've
> written to make a convincing argument. In info-sec theory and
> practice, Confidentiality depends and builds on Integrity.
I am up for that. In a general sense, I would like to get to a point
where I can agree with the direction of ES4, even if it's with a heavy
heart. I'm not there yet.