Operator overloading revisited

Mark S. Miller erights at google.com
Tue Jun 30 13:04:12 PDT 2009


On Mon, Jun 29, 2009 at 10:26 PM, Brendan Eich <brendan at mozilla.com> wrote:

> On Jun 29, 2009, at 11:55 AM, Mark S. Miller wrote:
>
>
>  Let's try a reductio ad absurdum.
>
>
> This doesn't settle anything since there is no "for all" claim in Chambers,
> et al.'s words cited above. You can't reduce to an absurdity something that
> according to its own proponents does not apply in all cases.
>

A fine counterargument if you can distinguish which cases it does and
doesn't apply to. The syntactic category "operators" seems a silly place to
draw the line. I'm glad you seem to include Math.atan2(), as that gets us
beyond that syntactic distinction.


>
> It seems to me you are the one making a universal claim, for OOP methods as
> the one true way, which could be argued to lead to absurd (or just awkward)
> conclusions such as double dispatch (hard for V8, easy for TraceMonkey :-P)
> and "reverse+", "reverse-", etc. operator-method bloat.
>
>
Contradicted by the next text of mine you quote:

>
> It seems to me that the argument you're making applies just as well to
>
>     w.foo(x)
>
> or
>
>    bar(y,z)
>
I include the bar(y,z) as an example of a case against multimethods. Since
you include Math.atan2() in the case for multimethods, we can focus on it,
and set aside both operators and method calls.


>
> In conventional oo reasoning, the first expression is making a request to
> the object bound to w. The second is making a request to the function bound to
> bar. In both cases, the requested object is responsible for deciding what
> the next step is in responding to that request. The veracity of the result
> is according to the responsible object. If there's a bug in this result, the
> responsible object may not itself be the one that is buggy. However, the
> blame chain starts there.
>
>
> Sure, but reduction to an absurdity doesn't tell us which of the two
> diffuses responsibility or why that's bad, or if it's bad how the cost
> trumps other concerns.
>
> Functions have their place, so does OOP. But OOP does not mean functions
> are obsolete -- consider Math.atan2, a function (not a method -- ignore the
> Math poor man's namespace!) taking two parameters x and y.
>

Good. Hereafter, this example is simply atan2(x,y).


> Either x or y could be blame-worthy for some call (signed zero has a
> use-case with atan2, the numerics hackers tell me), but there's no single
> receiver responsible for computing the result given the other parameter as
> an argument. atan2 is a function, arguably similar to a division operator.
>

No. This is the crux. atan2 is a variable bound to a value in some lexical
scope. In different scopes it may be bound to different values. In the
expression atan2(x,y), the responsible party is the atan2 function. It
decides the next step in processing this request, and decides in what way it
will subcontract parts of the job to its x and y arguments.
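
To make the crux concrete, here is a minimal sketch (both modules are
hypothetical stand-ins for scopes that bind the same name to different
values):

    function numericModule() {
      var atan2 = function (y, x) { return Math.atan2(y, x); };
      return atan2;   // in this scope, this function is the atan2 value
    }
    function degreesModule() {
      var atan2 = function (y, x) {
        return Math.atan2(y, x) * 180 / Math.PI;   // same name, different value
      };
      return atan2;
    }

Whichever function a given scope binds to atan2 is the responsible party
for calls made through that binding.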


>
> "Blame chain" is a good phrase, and of course we've been tracking PLT
> Scheme Contracts for a while and are still keeping them in mind for some
> harmonious future edition. But if responsibility is about blame, there are
> many ways to skin that cat. And there are other costs competing with the
> cost of weak or wrongful blame.
>
> Since JS has functions, not only methods, we have benefits countervailing
> the cost of having to blame one of two or more actual parameters when
> analyzing for responsibility. bar(y, z) is tenable in JS interface design,
> depending on particulars, sometimes competitive with w.foo(x), occasionally
> clearly better.
>

I was not arguing that w.foo(x) is always better than bar(y,z). Rather, I
was saying that currently neither violates locality of responsibility. In
these two expressions, w and bar are the respective responsible parties.


>
>
> What if w doesn't respond to a foo request? Currently our choices are
>
>    1) rewrite our first expression so that it no longer asks w to foo.
>    2) modify w or w.[[Prototype]] or something so that w knows how to foo.
>        2a) Get the provider/author of w to do this and release a new
> version
>        2b) Fork or rewrite the source for w
>        2c) Monkey patch w or w.[[Prototype]] from new module M2
>    3) Wrap the original w object with a foo-responding decorator
>        3a) Conventional decoration, where other requests are manually
> forwarded
>        3b) In JS, one can decorate by prototype inheritance
>        3c) If we ever have catchalls, one might be able to use them for
> decoration
>
> All the options above except for #2c maintain the locality of responsibility
> that makes objects so pleasant to reason about.
>
>
> Sorry, but you are assuming your conclusion -- the locality of
> responsibility and its pleasantness in all cases, over against other costs,
> is precisely the issue.
>
> Experience on the web includes enough cases where third parties have to
> mate two abstractions in unforeseen ways. We seem to agree on that much. But
> your 1-3 do not prove themselves superior to 4 (multimethods, cited below)
> without some argument that addresses all the costs, not just blame-related
> costs.
>

Were we to adopt multimethods, where atan2 somehow gets enhanced by all
imported modules defining overloadings of atan2 in that scope, what is the
value of the atan2 variable in that scope? If it is a function composed of
all lexical contributors, what happens when this function is passed as a
value and then used in a different scope?
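
To sharpen the question, imagine the enhancement implemented with a
hypothetical makeGeneric helper (a sketch, not a proposal):

    function makeGeneric() {
      var methods = [];   // the overloadings contributed so far
      function generic() {
        // pick the first applicable method for these arguments
        for (var i = 0; i < methods.length; i++) {
          if (methods[i].guard.apply(null, arguments)) {
            return methods[i].body.apply(null, arguments);
          }
        }
        throw new TypeError('no applicable method');
      }
      generic.addMethod = function (guard, body) {
        methods.push({guard: guard, body: body});
      };
      return generic;
    }

If each importing scope contributed overloadings via atan2.addMethod(...),
those contributions would travel with the single atan2 value into whatever
scope it is later passed to -- which is exactly the cross-scope puzzle above.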

What about a global atan2 (as would seem to be suggested by the Math.atan2
example you started with)? Would modules dynamically loaded into the same
global environment be able to enhance the value already bound to the atan2
variable, changing the behavior of an atan2 value that may already have been
passed to other scopes? Presumably this would fail if the atan2 value is
frozen, as it should, since such mutation monkey patches the atan2 value.

Or would it bind a new value to the atan2 variable, where that new value has
the new behavior? Presumably this would fail if atan2 is const, as it
should.
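
In ES5-plus-const terms, the two failures would look like this (a sketch;
intervalAtan2 and enhancedAtan2 are hypothetical, and const is the Harmony
proposal):

    'use strict';
    function intervalAtan2(y, x) { return Math.atan2(y, x); }  // hypothetical
    var atan2 = Math.atan2;
    Object.freeze(atan2);
    atan2.intervalCase = intervalAtan2;  // throws TypeError: the value is frozen

    // And the rebinding alternative:
    //    const atan2 = Math.atan2;
    //    atan2 = enhancedAtan2;          // rejected: atan2 is a const binding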

>
>
> 2c has come to be regarded as bad practice. I suggest that one of the main
> reasons why is that it destroys this locality. w shouldn't be held
> responsible if it can't be responsible. One of the great things about
> Object.freeze is that w's provider can prevent 2c on w.
>
>
> Assuming the conclusion ("One of the great things..."), I think -- although
> the argument seems incomplete ("one of", and no downside analysis).
>

Please reread the quote above. I agree that the argument I present here for
Object.freeze is incomplete. Since the purpose of this thread is not to
decide whether Object.freeze is or is not a good idea, I did not feel it
necessary to enumerate all the arguments on either side of this question.
But neither am I assuming my conclusion. I am simply enumerating one of
these arguments -- that Object.freeze is good because it enables w's
provider to be responsible for the behavior for which we'd like to hold it
responsible. You may disagree with this argument, but I don't see how it
assumes its conclusion.


>
> Many JS users have already given voice (see ajaxian.com) to fear that
> freeze will be overused, making JS brittle and hard to extend, requiring
> tedious wrapping and delegating.
>

I couldn't quickly find these arguments. Could you provide some links?
Thanks.


> This is a variation on Java's overused final keyword.
>

Vastly underused IMO, due to syntactic overhead. In the early E language, it
was easier to declare a variable let-like than const-like. One of the best
changes we made was to reverse this. E variables are now const-like unless
stated otherwise. Everyone, whether concerned with security or not, has
found this to be a pleasant change to the language. When reading code and
wondering about a variable, once you see that it's declared const-like, you
can avoid many follow-on questions.
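
In Harmony-proposal terms, the analogous JS convenience would be (a sketch):

    const origin = {x: 0, y: 0};   // never rebound: no follow-on questions
    let cursor = origin;           // mutable: the reader must track assignments
    cursor = {x: 3, y: 4};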


> Object.freeze, like any tool, can be misused. There are good use-cases for
> freeze, and I bet a few bad ones,
>

Of course. Good things are good and bad things are bad ;).


> but in any case with freeze in the language, monkey-patching is even harder
> to pull off over time.
>

I sure hope so, just as goto and pointer arithmetic became harder in earlier
stages of language evolution. Especially for involuntary monkey patching,
which could be seen as an attack from the victim's perspective.

>
> That leaves forking or wrapping in practice. Why forking or wrapping should
> always win based on responsibility considerations, if one could use a Dylan
> or Cecil multimethod facility, is not at all clear. Forking has obvious
> maintenance and social costs. Wrapping has runtime and space costs as well
> as some lesser (but not always trivial) maintenance cost.
>

Forking is indeed quite costly and should be avoided when there's a less bad
alternative. Wrapping (or decorating) may or may not be a good pattern to
use depending on the particulars.
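
For instance, option 3b above is cheap in ES5 (a sketch; the w object and
M2's notion of foo are hypothetical):

    var w = {bar: function () { return 42; }};  // stand-in for the provider's w
    var w2 = Object.create(w);                  // w2 delegates other requests to w
    w2.foo = function (x) {
      return this.bar() + x;                    // M2's hypothetical notion of foo
    };
    w2.foo(1);  // 43, while w itself remains untouched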


> So why shouldn't we consider, just for the sake of argument, dispatch on the
> types of arguments to one of several functions bound to a given operator, in
> spite of the drawback (or I would argue, sometimes, because of the benefit)
> of the responsibility being spread among arguments -- just as it is for
> Math.atan2's x and y parameters?
>

We are considering it. That's what this discussion is about. But we need to
separate two kinds of considerations:

1) In a language in which all these abstraction mechanisms are present and
used, which one should we use for a given problem? This will often be the
kind of tradeoff you are explaining, where sometimes monkey patching or
multimethods win. Indeed, the Cajita runtime library as implemented on ES3
internally monkey patches some of the ES3 primordials. Were I programming in
Common Lisp and trying to make use of preexisting libraries, I'm sure I'd use
multimethods and packages. OTOH, Common Lisp's bloat and incoherence are among
the reasons I've always avoided it.

2) In a language with an adequate set of good abstraction mechanisms and
already burdened by a plethora of bad or broken abstraction mechanisms, when
is it worth adding a new abstraction mechanism?

The bar for #2 must be much higher than for #1. If, in the absence of the
proposed abstraction mechanism, the language already has some pleasant formal
properties that the new mechanism would destroy, then the loss of those
properties must be counted as a cost in assessing #2. But not in assessing #1,
since there that battle is already lost.

>
> The Cecil-style operator overloading argument, extended to this example,
> would enable a fourth option
>
>    4) Allow module M2 to say how w should respond to foo.
>
> I grant that #4 is not as bad as #2c. But does anyone care to argue that #4
> would be a good thing in general?
>
>
> Sure, and I've given references to some of the papers. There are others,
> about Dylan as well as Cecil. Here's a talk that considers "design patterns"
> (double dispatch with "reverse-foo" operators are pattern-y) to be
> workarounds for bugs in the programming language at hand:
>
> http://www.norvig.com/design-patterns/
>
> Complete with multimethods!
>

Well, Peter's my boss, so I concede the case. Let's add multimethods. Just
kidding!

Actually, thanks for the link. I hadn't seen it before, and it is a nice
presentation.

>
> If not, why would #4 be ok for operators but not method or function calls?
>
>
> Straw man, happy to knock it down.
>

Huh?


> I never said your item 4 wasn't ok for methods and function calls --
> operators are the syntactic special form here, the generalization is either
> to methods (double-dispatched) or function (generic functions, operator
> multimethods).
>
> The ES4 proposal (
> http://wiki.ecmascript.org/doku.php?id=proposals:generic_functions) for
> generic functions/methods indeed subsumed the operators proposal that
> preceded it. I'm not trying to revive it wholesale, please rest assured. But
> I am objecting to circular (or at best incomplete) arguments for dyadic
> operators via double dispatch of single-receiver methods.
>
> As I've written before, if double dispatch is the best we can agree on for
> Harmony, then I'm in. But we should be able to agree on the drawbacks as
> well as the benefits, and not dismiss the former or oversell the latter.
>

I certainly agree that it's good to have a more complete enumeration of the
pros and cons. I think we largely agree on many of these but disagree on the
weights. JavaScript today has adequate support for modularity and
abstraction, and is mostly understandable. ES5/strict even more so. Common
Lisp and PL/1 never were understandable. And Common Lisp is highly
immodular. I agree that ES6 should grow some compared to ES5, but I highly
value retaining ES5/strict's modularity, simplicity, and understandability.
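
For readers following along, the double-dispatch shape we keep referring to
would look something like this (a sketch with a hypothetical Point; this is
the "reverse-foo" pattern mentioned above):

    function Point(x, y) { this.x = x; this.y = y; }
    Point.prototype.add = function (other) {
      return other.reverseAdd(this);   // second dispatch, on the right operand
    };
    Point.prototype.reverseAdd = function (left) {
      return new Point(left.x + this.x, left.y + this.y);
    };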

-- 
   Cheers,
   --MarkM