block-lambda revival

Claus Reinke claus.reinke at talk21.com
Wed Jun 29 03:25:24 PDT 2011


>> I guess the summary would be: I don't think Tennent's principle
>> of abstraction can be meaningfully applied to control flow when
>> your abstraction can survive the control flow that created it.
>
> The success of such an abstraction for building control flow
> abstractions in Smalltalk is a good demonstration that, in
> practice, your conclusion is not universally true.
>
> For a view of the early origins of this see  "Building Control Structures
> in the Smalltalk-80 System" By L. Peter Deutsch in the PDF at
> http://www.em.net/portfolio/2010/08/smalltalk_in_byte_magazine.html
> starting at page 125 of the PDF. But be warned that some of this goes
> beyond the use of block-lambdas and is not totally relevant. Early
> Smalltalk blocks weren't true closures but over time they became so.

Nice, thanks!-)

However, we should be aware that ideas evolve over time. It is
great that this list has contributors with direct experience in so
many major languages. With the general open-mindedness here,
this bodes well for JS evolution, even in the face of design by
committee and a massive deployed code base.

But experience with a major language shapes mindsets - what
one language's programmers expect from a lambda differs from
what those of other languages expect. Please take the following
overview with a pinch of smileys - though I hope it isn't totally
misrepresenting the differences. Dave has already raised the
closely related question of macros, so I'll focus on macros,
blocks, and control structure abstractions:

    Lispers had it all first (just kidding, of course, you could do it
    before Lisp, by tweaking the rules of your universe so that
    Lispers would evolve in it;-), but they did have an AST instead
    of a syntax, wouldn't leave home without macros, and were
    expected to code around little dangers like side-effects, dynamic
    scope, and scope-breaking macros. In those optimistic times,
    there was hardware, not just IDEs, specifically for the language,
    and at least one Lisp environment had a Do-What-I-Mean tool.

    Schemers thought side-effects and macros necessary, but
    uncontrolled scope manipulation dangerous, so they insisted
    on lexical scope and hygienic macros and found that those
    constraints required some thought, but were not really limiting.

    Lispers and Schemers have adapted to more recent fads mostly
    by demonstrating that each could be done in their existing systems,
    usually with very little effort, if one ignores the syntax. This includes
    control structures, (meta-)object, aspect, and other systems. They also
    enjoy porting their systems to any new platform that comes along.

    Smalltalkers sometimes do not distinguish between language,
    program, and IDE, or between programming and reflection, but
    since their model of computation goes back to communicating
    computers (each with code and data, and its own interpretation
    of incoming messages), objects, messages and blocks of operations
    are natural building blocks to them.

    So everything is decomposed into objects, and can be redefined
    while the system is running, but blocks are not decomposed into
    statements - message cascading (aka method chaining) could be
    used to that effect, but it is separate from blocks.

    Smalltalkers do not adapt to fads, they just port their system
    to anything new that comes along, or use something else if
    -and only as long as- they must.

    Haskellers can go so far without macros that they sometimes
    forget these are useful, ban side-effects to the communication
    boundaries of their code, and treat statements as just another
    form of expression to be computed and manipulated. They
    broke down blocks into statements composed by re-definable
    semicolon long ago, and have been building finer-grained
    control structure libraries ever since.

    They define control structures, re-interpret code over different
    kinds of effect, combine and control effects, and argue about
    what kinds of effect what kinds of code should be given access
    to. They think that being able to reason about code is usually
    more important than its being able to modify itself at will.

    They do have several levels of macro processing, including
    syntax-based (Template Haskell) and string-based (QuasiQuoting)
    compile-time meta-programming, combined with Haskell-in-
    Haskell parsers as well as API access to compiler and runtime
    internals, not to mention rebindable syntax. None of which has
    reached the level of standards yet.

    Haskellers are only slowly adapting to the thought that
    there could be fads after Haskell.

As you can see, my own mindset is also influenced by my
experience, though I am aware that there have been gains
and losses in this evolution (and I've omitted important
languages, to focus on the lambda-related aspects). Also,
Javascript cannot adopt ideas from all these languages equally
easily - some are closer than others. But *all* of these
languages support "building control flow abstractions",
each in its own way. Not all use macros, not all use blocks.

It seems undisputed here that it is important to separate
reflection from normal operation, and I think that macros
(fairly free functions from syntax to syntax, only limited by
respecting scope) should be separate from more restrained
"normal-mode" functions (functions between semantically
meaningful syntactic categories, to use the phrase from
Tennent's principle of abstraction).

In languages with non-trivial syntax (like Javascript), a macro
system also has to offer more than just functions from ASTs
to ASTs. Macros should not be used where normal functions
are sufficient, as in control-structure abstraction libraries,
but macros as programmable syntactic sugar could make
such abstractions more readable (and reduce the pressure
to standardize convenient language constructs like classes).

Also, I think that the statement block is too coarse a building
block for control-structure abstractions, however nice it looks
syntactically in selected examples (and even that advantage can
be nullified by suitable generic control-structure sugar, as
shown by F#'s computation expressions or Haskell's do-notation).
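
To make "generic control-structure sugar" concrete, here is a toy
Haskell sketch (illustration only; safeDiv, calc and pairs are made-up
names): a do-block is just a sequence of statements whose ';' is
desugared into (>>=), so the same block syntax gets its control
behaviour from whichever Monad instance is in scope.

    -- Toy illustration: the same statement syntax, re-interpreted.
    -- Over Maybe, sequencing means "stop at the first failure".
    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    calc :: Int -> Int -> Maybe Int
    calc a b = do
      q <- safeDiv a b     -- desugars to: safeDiv a b >>= \q ->
      r <- safeDiv q 2     --              safeDiv q 2 >>= \r ->
      return (q + r)       --              return (q + r)

    -- Over lists, the very same syntax means "try all combinations".
    pairs :: [(Int, Int)]
    pairs = do
      x <- [1, 2, 3]
      y <- [10, 20]
      return (x, y)

    main :: IO ()
    main = do
      print (calc 10 2)   -- Just 7
      print (calc 10 0)   -- Nothing
      print pairs         -- [(1,10),(1,20),(2,10),(2,20),(3,10),(3,20)]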

Take just one code example that was mentioned here earlier -
search for 'eval' on this page:

http://www.cs.rit.edu/~ats/projects/jsm/paper.xml

How would block lambdas help to make monadic interpreter
code like that more readable, or - since monads are a central
control-structure abstraction - how would block lambdas achieve
the same flexibility by other means?
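
To be concrete about what I mean, here is a toy sketch in the same
style (my own illustration in Haskell, not the code from that page;
Expr and eval are made-up names): the evaluator's control flow -
failure on division by zero - lives entirely in the monad it is
written over, so changing the monad changes the control structure
without touching the traversal.

    -- Toy illustration only, not the interpreter from the linked page.
    data Expr = Lit Int
              | Add Expr Expr
              | Div Expr Expr

    eval :: Expr -> Maybe Int
    eval (Lit n)   = return n
    eval (Add a b) = do
      x <- eval a
      y <- eval b
      return (x + y)
    eval (Div a b) = do
      x <- eval a
      y <- eval b
      if y == 0 then Nothing else return (x `div` y)

    main :: IO ()
    main = do
      print (eval (Add (Lit 1) (Div (Lit 6) (Lit 2))))  -- Just 4
      print (eval (Div (Lit 1) (Lit 0)))                -- Nothing

The question is how block-lambdas would express code in this style,
where the statement glue itself is the abstraction.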

Block-lambdas have interesting uses, so if those with experience
think their unusual aspects can be handled, I won't argue against
that, but I don't see how block-lambdas could replace either
macros or better functions, both of which I consider important.
So I do have to argue against dropping work on better functions
(in favour of block-lambdas).

Claus