Object.prototype.* writable?

Kyle Simpson getify at gmail.com
Sat May 7 13:17:01 PDT 2011

>> Again, a "smart library" can only do that if it's guaranteed to be the 
>> first code to run on the page. If not (which is usually the case), then 
>> all bets are off, unless the language offers some protections.
> All bets are probably still off. The malicious code that's first can load 
> the latter virtuous code as data using cross-origin XHR, or, if the script 
> isn't served with an "Access-Control-Allow-Origin: *", via server-side 
> proxying. Then the malicious code can rewrite the virtuous code as it 
> wishes before evaling it.

My first reaction to this assertion is to say: "so?" It's a rather moot 
argument to suggest that code which can be altered before it's run isn't 
trustworthy... of course it isn't. The malicious coder in that scenario 
wouldn't even need to go to the trouble of overwriting Object.prototype.* in 
the frame; he could just remove the offending if-statement altogether. In 
fact, he wouldn't even need to modify my code at all: he could just serve 
his own copy of the .js file.

So what are you suggesting? That regardless of the JS engine, no page's JS 
functionality is actually reliable if any of the page's JS resource authors 
are dumb and don't configure CORS headers correctly, because any malicious 
script (if it's first on the page) can completely hijack another part of the 
page? Yup, I agree.

This is a rabbit trail that I'm wary of going down, but I'll just indulge it 
for one quick moment... if you are enabling CORS on your server and not 
protecting your JavaScript code, you're asking for someone to exploit your 
code in some XSS-type attack. The whole original purpose of SOP (the same-
origin policy) was to prevent (or significantly cut down on) such things, 
especially as they relate to tricking the browser into sending along 
cookies/sessions to locations that allow a server to act as a 
man-in-the-middle. If CORS completely eliminates the protections that SOP 
gave us, then CORS is a failed system.

But CORS is only a failure if you do it wrong. I suspect that's part of the 
reason CORS has been slow to reach widespread adoption (despite plenty of 
browser support, except in Opera): it's harder to get right without throwing 
the barn door wide open. FWIW, I see most implementations of CORS confined 
to limited URL locations (sub-domains) which are purely web-service/REST 
APIs, not general web server roots. That's not to say that no one is doing 
it wrong, but it is to say that their doing it wrong is irrelevant to this 
discussion, because it moots the whole premise.
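
That restrictive pattern is easy to express server-side. A minimal sketch 
(the origin names here are hypothetical): echo back a specific allowlisted 
origin instead of sending a wide-open "Access-Control-Allow-Origin: *".

```javascript
// Hypothetical allowlist of origins permitted to make cross-origin
// requests to this API sub-domain; everything else gets no CORS
// header at all, so the browser falls back to the same-origin policy.
var ALLOWED_ORIGINS = {
  'https://app.example.com': true,
  'https://admin.example.com': true
};

function corsHeaderFor(requestOrigin) {
  // Echo the specific origin back only if it is allowlisted;
  // returning null means "send no Access-Control-Allow-Origin at all".
  return ALLOWED_ORIGINS.hasOwnProperty(requestOrigin) ? requestOrigin : null;
}

console.log(corsHeaderFor('https://app.example.com'));  // "https://app.example.com"
console.log(corsHeaderFor('https://evil.example.net')); // null
```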

All this is a moot discussion though, because malicious takeovers of a 
page are nothing but an exotic edge case, and only enabled if people "do it 
wrong". The original request stemmed NOT from the malicious-hacker scenario, 
nor from a page "doing it wrong" (per se), but from the "oops, some other 
piece of dumb code earlier on the page accidentally screwed up and collided 
with something I need to be inviolate" scenario.
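
That accidental-collision case is trivially easy to reproduce; a minimal 
sketch, using the classic for-in pollution problem:

```javascript
// Some earlier "dumb" code on the page innocently extends a native:
Object.prototype.each = function (fn) { /* ... */ };

// Later, unrelated code that assumed for-in only yields own keys:
var counts = { a: 1, b: 2 };
var keys = [];
for (var k in counts) {
  keys.push(k); // now also picks up the inherited "each"
}
console.log(keys); // ["a", "b", "each"]
```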

> I've been at this for a while, as has Crock. I doubt there's any realistic 
> scenario where code loaded later into an already corrupted frame can 
> usefully defend its integrity. If you know of a way to defend against this 
> rewriting attack, please explain it. Thanks.

Off the top of my head, creating a new iframe for yourself might be the 
only such way (that is, of course, if you even *are* yourself, and haven't 
been transparently modified or replaced -- see above).

I'm sure both of you are way more experienced at this than I am (even after 
my 12-year web-dev career so far). But I think you're trying to derail the 
narrow spirit of my original question by deflecting to much bigger 
questions. The appropriate forum for that type of discussion was when CORS 
was being conceived and brought about. As people love to say on this list: 
"that ship has sailed".

None of this exotic "what-if" scenario indulgence invalidates my original 
request: a clearly known bad practice (changing *some*, not all, particular 
behaviors of natives) leads to code that is less than reliable, so can we 
make it a little more reliable by having the engine protect certain key 
natives?
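
For what it's worth, ES5 already offers a blunt version of that protection: 
freezing a prototype makes later (accidental) overwrites fail, silently in 
sloppy mode or with a TypeError in strict mode. A sketch:

```javascript
// Lock down the key natives up front (this only helps if it runs first,
// which is exactly the limitation discussed above):
Object.freeze(Object.prototype);

// Later "dumb" code tries to collide; the write fails silently in
// sloppy mode, or throws a TypeError in strict mode:
try {
  Object.prototype.each = function () {};
} catch (e) {
  // strict mode lands here
}

console.log(Object.prototype.each); // undefined
```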

And btw, contrary to some people on this list who seem to operate almost 
exclusively on theoretical principle, "security through deterrence" (not the 
same as "obscurity") is a long-established and perfectly valid approach. No 
computer system (SSL included) is completely immune to attack... we live 
with something less than the theoretical ideal because we construct systems 
which are "pretty good" at deterrence, and with that we sleep peacefully at 
night.

What I'm suggesting should be viewed as another peg in the system of 
deterrence, and nothing more.
