Operator overloading revisited
brendan at mozilla.com
Mon Jun 29 22:26:34 PDT 2009
On Jun 29, 2009, at 11:55 AM, Mark S. Miller wrote:
> On Mon, Jun 29, 2009 at 11:21 AM, Brendan Eich <brendan at mozilla.com> wrote:
> On Jun 28, 2009, at 1:24 PM, Mark S. Miller wrote:
>> I note that your symmetric suggestion avoids the problem of most
>> other symmetric overloading systems, like Cecil, of diffusion of
>> responsibility.
> "Diffusion" sounds like a problem, a bad thing, but consider (I've
> quoted this before) the use-case:
> The generalization of receiver-based dispatch to multiple dispatch
> provides a number of advantages. For example, multimethods support
> safe covariant overriding in the face of subtype polymorphism,
> providing a natural solution to the binary method problem [Bruce et
> al. 1995; Castagna 1995]. More generally, multimethods are useful
> whenever multiple class hierarchies must cooperate to implement a
> method’s functionality. For example, the code for handling an event
> in an event-based system depends on both which event occurs and
> which component is handling the event.
> Let's try a reductio ad absurdum.
This doesn't settle anything since there is no "for all" claim in
Chambers, et al.'s words cited above. You can't reduce to an absurdity
something that, according to its own proponents, does not apply in all
cases.
It seems to me you are the one making a universal claim -- for OOP
methods as the one true way -- which could be argued to lead to absurd
(or just awkward) conclusions such as double dispatch (hard for V8,
easy for TraceMonkey :-P) and "reverse+", "reverse-", etc. operator-
method workarounds.
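To make that pattern concrete, here is a sketch (all names made up) of
double dispatch with a "reverse+" method: the left operand tries to
handle "+", and when it can't, it hands the job to the right operand's
reversed method.

```javascript
// Sketch only: Num and Meters are invented types for illustration.
function Num(v) { this.v = v; }
Num.prototype.add = function (other) {
  if (other instanceof Num) return new Num(this.v + other.v);
  // Receiver can't handle it -- ask the right operand for "reverse+".
  return other.reverseAdd(this);
};
Num.prototype.reverseAdd = function (other) {
  return new Num(other.v + this.v);
};

function Meters(v) { this.v = v; }
// Meters knows how to be added to a Num via the reversed method.
Meters.prototype.reverseAdd = function (num) {
  return new Meters(num.v + this.v);
};

var sum = new Num(1).add(new Meters(2)); // dispatches twice
// sum is a Meters with v === 3
```

Note the cost: every type that wants to interoperate with Num must grow
a reverseAdd method, and blame for a bad result is split across two
dispatches.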
> It seems to me that the argument you're making applies just as well
> to ordinary method and function calls -- say w.foo(x) and bar(x).
> In conventional oo reasoning, the first expression is making a
> request to the object bound to w. The second is making a request to
> the function bound to bar. In both cases, the requested object is
> responsible for deciding what the next step is in responding to that
> request. The veracity of the result is according to the responsible
> object. If there's a bug in this result, the responsible object may
> not itself be the one that is buggy. However, the blame chain starts
> there.
Sure, but reduction to an absurdity doesn't tell us which of the two
expressions diffuses responsibility, why that's bad, or, if it is bad,
how that cost trumps other concerns.
Functions have their place, so does OOP. But OOP does not mean
functions are obsolete -- consider Math.atan2, a function (not a
method -- ignore the Math poor man's namespace!) taking two parameters
x and y. Either x or y could be blame-worthy for some call (signed
zero has a use-case with atan2, the numerics hackers tell me), but
there's no single receiver responsible for computing the result given
the other parameter as an argument. atan2 is a function, arguably
similar to a division operator.
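For concreteness, here is the signed-zero use-case the numerics hackers
mean -- these values follow the IEEE-754 semantics ECMAScript inherits,
and neither argument is "the receiver":

```javascript
// atan2 consults both arguments; the sign of a zero on either one
// selects which side of the branch cut you land on.
Math.atan2(0, -1);   // pi   : approaching the negative x-axis from above
Math.atan2(-0, -1);  // -pi  : approaching it from below
Math.atan2(1, 0);    // pi/2 : straight up the y-axis
Math.atan2(0, 0);    // 0
```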
"Blame chain" is a good phrase, and of course we've been tracking PLT
Scheme Contracts for a while and are still keeping them in mind for
some harmonious future edition. But if responsibility is about blame,
there are many ways to skin that cat. And there are other costs
competing with the cost of weak or wrongful blame.
Since JS has functions, not only methods, we have benefits
countervailing the cost of having to blame one of two or more actual
parameters when analyzing for responsibility. bar(y, z) is tenable in
JS interface design, depending on particulars, sometimes competitive
with w.foo(x), occasionally clearly better.
> What if w doesn't respond to a foo request? Currently our choices are
> 1) rewrite our first expression so that it no longer asks w to foo.
> 2) modify w or w.[[Prototype]] or something so that w knows how
> to foo.
> 2a) Get the provider/author of w to do this and release a new
> version.
> 2b) Fork or rewrite the source for w
> 2c) Monkey patch w or w.[[Prototype]] from new module M2
> 3) Wrap the original w object with a foo-responding decorator
> 3a) Conventional decoration, where other requests are
> manually forwarded
> 3b) In JS, one can decorate by prototype inheritance
> 3c) If we ever have catchalls, one might be able to use them
> for decoration
> All the options above except for #2c maintain the locality of
> responsibility that makes objects so pleasant to reason about.
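(As an aside, 3b is just prototype delegation -- a sketch, with w and
foo as stand-ins:

```javascript
// Option 3b: decorate w without touching w or its prototype.
// A new object delegates all other requests to w and adds foo.
var w = { bar: function () { return "bar from w"; } };

var decorated = Object.create(w);   // prototype chain forwards to w
decorated.foo = function () { return "foo from the decorator"; };

decorated.foo();  // handled by the decorator
decorated.bar();  // forwarded to w by prototype lookup
w.foo;            // undefined -- w itself is untouched
```

No manual forwarding of bar is needed, unlike 3a.)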
Sorry, but you are assuming your conclusion -- the locality of
responsibility and its pleasantness in all cases, over against other
costs, is precisely the issue.
Experience on the web includes enough cases where third parties have
to mate two abstractions in unforeseen ways. We seem to agree on that
much. But your 1-3 do not prove themselves superior to 4
(multimethods, cited below) without some argument that addresses all
the costs, not just blame-related costs.
> 2c has come to be regarded as bad practice. I suggest that one of
> the main reasons why is that it destroys this locality. w shouldn't
> be held responsible if it can't be responsible. One of the great
> things about Object.freeze is that w's provider can prevent 2c on w.
Assuming the conclusion ("One of the great things..."), I think --
although the argument seems incomplete ("one of", and no downside
analysis).
Many JS users have already given voice (see ajaxian.com) to the fear
that freeze will be overused, making JS brittle and hard to extend,
requiring tedious wrapping and delegating. This is a variation on
Java's overused final keyword. Object.freeze, like any tool, can be
misused. There are good use-cases for freeze, and I bet a few bad
ones, but in any case with freeze in the language, monkey-patching is
even harder to pull off over time.
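For instance, once a provider has frozen w, option 2c simply fails
(sketch; strict mode, placeholder names):

```javascript
"use strict";
// If w's provider froze it, monkey patching (option 2c) is off the table.
var w = Object.freeze({ bar: function () { return 42; } });

var patched = true;
try {
  w.foo = function () { return "patched"; };  // TypeError in strict mode
} catch (e) {
  patched = false;
}
// patched is now false and w.foo is still undefined;
// only wrapping (decoration) or forking remains.
```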
That leaves forking or wrapping in practice. Why forking or wrapping
should always win based on responsibility considerations, if one could
use a Dylan or Cecil multimethod facility, is not at all clear.
Forking has obvious maintenance and social costs. Wrapping has runtime
and space costs as well as some lesser (but not always trivial)
usability costs.
So why shouldn't we consider, just for the sake of argument, dispatch
on the types of arguments to one of several functions bound to a given
operator, in spite of the drawback (or I would argue, sometimes,
because of the benefit) of the responsibility being spread among
arguments -- just as it is for Math.atan2's x and y parameters?
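A toy sketch of such argument-type dispatch -- not a proposal, and all
names invented: "+" selects among several registered functions by the
types of both operands, so either module (or a third party) can add a
combination without modifying anyone's prototype.

```javascript
// Minimal multimethod table for a dyadic "+" (illustrative only).
var addMethods = [];

function defineAdd(LeftType, RightType, fn) {
  addMethods.push({ left: LeftType, right: RightType, fn: fn });
}

function add(a, b) {
  // Dispatch on the types of *both* arguments, not a single receiver.
  for (var i = 0; i < addMethods.length; i++) {
    var m = addMethods[i];
    if (a instanceof m.left && b instanceof m.right) return m.fn(a, b);
  }
  throw new TypeError("no applicable method for +");
}

function Num(v) { this.v = v; }
function Vec(x, y) { this.x = x; this.y = y; }

defineAdd(Num, Num, function (a, b) { return new Num(a.v + b.v); });
defineAdd(Num, Vec, function (a, b) {
  return new Vec(a.v + b.x, a.v + b.y);
});

add(new Num(1), new Vec(2, 3)); // Vec { x: 3, y: 4 }
```

Responsibility is spread across the registered combinations, exactly
the "diffusion" at issue -- and exactly what lets a third party mate
two abstractions neither of which knows about the other.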
> The Cecil-style operator overloading argument, extended to this
> example, would enable a fourth option
> 4) Allow module M2 to say how w should respond to foo.
> I grant that #4 is not as bad as #2c. But does anyone care to argue
> that #4 would be a good thing in general?
Sure, and I've given references to some of the papers. There are
others, about Dylan as well as Cecil. Here's a talk that considers
"design patterns" (double dispatch with "reverse-foo" operators are
pattern-y) to be workarounds for bugs in the programming language at
hand. Complete with multimethods!
> If not, why would #4 be ok for operators but not method or function
> calls?
Straw man, happy to knock it down. I never said your item 4 wasn't ok
for methods and function calls -- operators are the syntactic special
form here; the generalization is either to methods (double-dispatched)
or to functions (generic functions, operator multimethods).
The ES4 proposal (http://wiki.ecmascript.org/doku.php?id=proposals:generic_functions
) for generic functions/methods indeed subsumed the operators proposal
that preceded it. I'm not trying to revive it wholesale, please rest
assured. But I am objecting to circular (or at best incomplete)
arguments for dyadic operators via double dispatch of single-receiver
methods.
As I've written before, if double dispatch is the best we can agree on
for Harmony, then I'm in. But we should be able to agree on the
drawbacks as well as the benefits, and not dismiss the former or
oversell the latter.
More information about the es-discuss mailing list