Operator overloading revisited
Allen.Wirfs-Brock at microsoft.com
Tue Jun 30 11:29:20 PDT 2009
>From: Mark S. Miller
>What if w doesn't respond to a foo request? Currently our choices are
> 2c) Monkey patch w or w.[[Prototype]] from new module M2
>All the options above except for #2c maintain the locality of responsibility that makes objects so pleasant to reason about. 2c has come to be regarded as bad practice. I suggest that one of the main reasons why is that it destroys this locality. w shouldn't be held responsible if it can't be responsible. One of the great things about Object.freeze is that w's provider can prevent 2c on w.
>From: Brendan Eich
>Many JS users have already given voice (see ajaxian.com) to fear that freeze will be overused, making JS brittle and hard to extend, requiring tedious wrapping and delegating. This is a variation on Java's overused final keyword. Object.freeze, like any tool, can be misused. There are good use-cases for freeze, and I bet a few bad ones, but in any case with freeze in the language, monkey-patching is even harder to pull off over time.
I believe that the original experience base with this style of monkey patching was in Smalltalk, where it proved to be both highly useful and also gained a reputation as a bad practice. We need to look at both sides of that equation.
Smalltalk programming was largely about reuse, and monkey patching was highly useful because when combining (reusing) independently created implementation "modules" into a larger application you often need to pragmatically do some tweaking around the edges of the modules in order to make them fit together. Even if you had access to the source of each module (which you typically did in Smalltalk), it was still better (more maintainable) to package the necessary integration code as part of the consuming application module rather than to create modified versions of the pre-existing modules that incorporate the integration code.
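A minimal JavaScript sketch of that integration style, using hypothetical names (Widget, toJSON are illustrative, not from any real library): the consuming application patches a third-party prototype to make it fit an interface another library expects, and the patch lives with the application rather than in a forked copy of the module.

```javascript
// Hypothetical third-party module we can't (or don't want to) fork:
function Widget(name) { this.name = name; }

// Integration code packaged with the consuming application: give
// Widget the serialization shape another library expects by adding
// a method to its prototype (i.e., monkey patching it).
Widget.prototype.toJSON = function () {
  return { widget: this.name };
};

const w = new Widget("slider");
console.log(JSON.stringify(w)); // {"widget":"slider"}
```

The patch is small, local to the application, and easy to delete if the upstream module ever grows the method itself.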
The "bad practice" rap came about because in the original Smalltalk environments multiple conflicting occurrences of such monkey patching applied to the same classes would silently stomp on each other, and you would get some inconsistent combination of behaviors that was difficult to test for and hence caused downstream errors. Patches that conflict with each other are fundamentally incompatible and require human intervention to resolve. Digitalk's Team/V solved this problem by treating such monkey patching as being declarative in nature and detecting any conflicting monkey patches at integration time (load time, build time, whatever..). Such a conflict was treated as a "hard error" (comparable to a syntax error). With this mechanism developers could monkey patch to their hearts' content, "mashing up" all sorts of independently created code, but as soon as the patches started stepping on each other they found out and could fix the problem.
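A rough sketch of that Team/V-style idea in JavaScript terms (all names here are illustrative, not a real API): patches are declared rather than applied directly, and two patches targeting the same method of the same object become a hard error at integration time instead of silently overwriting each other.

```javascript
// Registry of declared patches, keyed by target+method name.
const patches = new Map();

function declarePatch(target, name, fn, targetLabel) {
  const key = targetLabel + "." + name;
  if (patches.has(key)) {
    // Conflicting patches are fundamentally incompatible:
    // fail hard at integration time, like a syntax error.
    throw new Error("Conflicting patch for " + key);
  }
  patches.set(key, { target, name, fn });
}

// Applied once, after all modules have declared their patches.
function applyPatches() {
  for (const { target, name, fn } of patches.values()) {
    target[name] = fn;
  }
}

// Module A declares a patch:
declarePatch(Array.prototype, "last", function () {
  return this[this.length - 1];
}, "Array.prototype");

// Module B declares a conflicting patch and is caught immediately:
try {
  declarePatch(Array.prototype, "last", function () {
    return this.slice(-1)[0];
  }, "Array.prototype");
} catch (e) {
  console.log(e.message); // "Conflicting patch for Array.prototype.last"
}

applyPatches();
console.log([1, 2, 3].last()); // 3
```

The point is not this particular registry design but the shift from imperative assignment (last writer silently wins) to a declarative model where conflicts are detectable before anything runs.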
I've often remarked that to me, the web feels like one big persistent Smalltalk virtual image. It's being continually extended and modified, but it can never be recreated from scratch or redesigned as a whole. In such an environment you have to evolve in place, and sometimes that means dynamically patching what is already there in order to make it work with something new or to use it for some new purpose. Web technologies need to be dynamic, flexible, and forgiving in order to accommodate this sort of evolution. We can provide mechanisms to help developers detect and deal with common failure scenarios, but if the web doesn't stay flexible enough to allow for the possibility of such failures then it won't be able to evolve and will eventually be replaced by something that can.