[E-Lang] three philosophies of syntax

zooko@zooko.com
Tue, 19 Jun 2001 10:45:01 -0700

I've been musing about syntax, since I retain a gnawing fear that MarkM, even
though he *wants* to design a syntax which is comfortable to new immigrants
from C and Java, will not be able to nail it because of his excessively deep
and broad familiarity with programming languages.

While the resulting little-used E would still be pleasurable (perhaps even
*more* pleasurable) for me to play with, it would never grow all of the
wonderful new tools and toys that a widely adopted language like Python grows.

(I have been deeply impressed with the recent proliferation of useful new
Python tools, like a static code analyzer, standard documentation maintenance,
improved integration with Java and with C++, an improved code coverage tool,
useful exception diagnostics that tell you the values of the relevant local
variables in every stack frame...  Note that two of these examples are due to
Ping Yee.)

I was thinking about this (in the shower, as it happens, where for some reason
I frequently find inspiration), and I considered the question of which language
features get their own first-class syntax (e.g. their own tokens, their own
special whitespace, etc.) and which get expressed in terms of syntax that they
share with other features.  I thought that there were three philosophies:

1.  The most commonly used language features get their own syntax (whether they
are abstract features or not).  This could be called "the Perl philosophy of
syntax".

2.  The most commonly used of the abstractions get their own syntax.  This
could be called "the Python philosophy of syntax".

3.  The most powerful and deepest of the abstractions get their own syntax.
This could be called "the Lisp philosophy of syntax".

[[[ I carefully avoided using the word "simple" in that enumeration, because to
 a newbie, the most common abstractions are the "simplest", and to a guru, the
 most unifying and orthogonal abstractions are the "simplest".  This semantic
 collision with the word "simple" (and the word "complex") may explain a lot of
 misunderstanding on this issue.  

 Also there is confusion over the word "meaning".  To a computer scientist, an
 expression like `my_bunny.eat(the_carrots, new Lettuce())' "means" or "is
 defined" as its reduction into some theoretical framework like the lambda
 calculus or an object calculus.  To a programmer, such an expression "means"
 that you invoke a constructor that creates a new object, then pass that new
 object and a locally referenced object to a method of a certain other locally
 referenced object.  This is the everyday meaning of "meaning", as opposed to
 the theoretical-semantics meaning of "meaning".  This will come up again later
 in this message. ]]]

Needless to say, I would strongly favour the 2nd philosophy (the "Python
philosophy") if I were designing a language for mass adoption.

(Note: it seems like many (all?) of the popular languages, excepting Perl, use
philosophy #2, and many of the unpopular languages use philosophy #3.  I would
be interested to know if Smalltalk uses #2 or #3.)

Okay now here are some concrete examples to anchor the ensuing exhortation:

1.  The difference between functions, classes, local variables, etc.  A
language which follows philosophy #2 makes different keywords like `function',
`class', `var', etc.  A language which follows philosophy #3 comes with
documentation which instructs the programmer that all three of these are really
the same under the hood, so just use `def' for all of them.
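To make the contrast concrete, here is a hedged sketch in Python terms (Python itself keeps the distinct keywords, so this only illustrates the philosophy-#3 observation that functions, classes, and variables are all just name bindings to first-class objects):

```python
# The philosophy-#2 surface: three distinct forms for three concepts.
def greet(name):              # `def` for a function
    return "hello, " + name

class Greeter:                # `class` for a class
    pass

greeting = "hello"            # plain assignment for a variable

# The philosophy-#3 observation: all three names denote ordinary
# first-class objects, and can be rebound, passed around, and stored
# in data structures uniformly.
things = [greet, Greeter, greeting]
print([type(t).__name__ for t in things])  # → ['function', 'type', 'str']
```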

2.  Class declaration.

2.a.  Let's say that the underlying semantics that your language designers
prefer is object-based OO (as contrasted with class-based OO).

A language which follows philosophy #2 makes a keyword `class', which is
defined informally to a programmer as "It makes a class, with these components,
etc. etc.".  This keyword `class' is defined in the underlying implementation
as constructing a maker object which has a `new' method that returns instances.

A language which follows philosophy #3 instructs the programmer to construct a
maker object and give it a `new' method that returns instances.
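A hedged Python sketch of that philosophy-#3 instruction (all names here are illustrative, not E syntax): the programmer hand-builds a maker object and gives it a `new` method that returns fresh instances.

```python
from types import SimpleNamespace

# Hand-built "maker" object with a `new` method, as philosophy #3
# would instruct.  No `class` keyword required.
def _make_point(x, y):
    # Each call returns a fresh instance carrying its own state.
    return SimpleNamespace(x=x, y=y)

point_maker = SimpleNamespace(new=_make_point)

p = point_maker.new(2, 3)
q = point_maker.new(0, 0)
print(p.x, p.y)  # → 2 3
```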

2.b.  Now let's say that the underlying semantics that your language designers
prefer is functional.  A language which follows philosophy #2 makes a `class'
keyword, which is defined informally the same way as above, and is defined in
the underlying implementation to create a function which, when invoked, returns
a list of functions which share a scope.

A language following philosophy #3 instructs the programmer to define a
function which returns a list of functions which share a scope.
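The classic demonstration of that instruction, sketched here in Python as a stand-in for a functional language (names illustrative): a "class" is just a function whose invocation returns a group of functions closed over one shared scope.

```python
# A "class" as a function returning a list of functions that share
# a scope; each call creates a fresh "instance" (a fresh scope).
def make_counter(start=0):
    count = start  # state shared by the functions below

    def increment():
        nonlocal count
        count += 1
        return count

    def value():
        return count

    # The "instance" is the group of functions sharing the scope.
    return [increment, value]

increment, value = make_counter()
increment()
increment()
print(value())  # → 2
```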

3.  Facets.

A language which follows philosophy #2 has a keyword `facet', and comes with
documentation that explains in natural language what a facet means and why you
use it instead of using `public:' and `private:' keywords like you used to.

A language which follows philosophy #3 comes with documentation instructing the
programmer to define an object in the right nested scope and think of it as a
facet.
I discussed syntax with MarkM on the phone a couple of months ago, and he
suggested a strategy that sounds, retrospectively, like "#2 with an added
flavour of #3".

He suggested that if the syntax sort of *alluded* to the existence of the
deeper languages features without *forcing* the user to understand them, that
this aids the user in learning the deeper features later, after having already
mastered the shallow ones.

I would like to argue against this strategy on three grounds:

1.  I'm not convinced that it is so effective.  Consider the legions of college
students (myself included) who were forced to learn Lisp for a course, managed
to bumble through it enough to get passing marks, but never grokked the deeper
semantics, even though those semantics were indicated by the syntax.  Consider
also the legions of hackers (myself included), who started playing with Python,
spent a good six months or a year occupying strictly the "top level" of
features without thinking about the "underlying implementation", and then
decided to look into the deeper meaning in order to gain more power, and found
it perfectly natural and easy to learn, even though the deeper semantics are
not in any way called to one's attention when one is learning the "basic"
features.

2.  It sounds like a tradeoff between "better" vs. "more users".  Even if this
"help you learn the deeper semantics" thing *does* work, I suspect that it will
be at the cost of some fraction of potential adopters.  In fact:

3.  I would actually go so far as to hypothesize that hackers (remembering
their experiences in Lisp class in college) are *sensitive* to this kind of
syntax.  I think that many of them, upon realizing that they are expressing
a "normal, straightforward" meaning, such as a method declaration, or a class
declaration, or a for loop, in terms of a deeper meaning which is not
understood by them, immediately think that this language "smells of Lisp", and,
in a sort of immune reaction, abjure it.

I can kind of see their point.  If the meaning of what you are saying is not
primarily defined in the terms that *you* mean it, but is instead defined in
terms of some more "complex" and mysterious semantics, this could give you a
sense of insecurity.  What kind of unintended meanings might I be encoding when
I do this?  What kind of unexpected effects might typos have on this code?  

Now obviously the meaning of E code is *ultimately* defined in terms of the
deepest semantics, but the syntax could serve as an abstraction for programmers
who do not understand *nor want to understand* those semantics.  Like all good
abstractions, the syntax can promise the user "As long as you treat me as
though this abstraction is true, then I will make it appear true to you."

Ok, so clearly design of language syntax abstraction is more subtle than the
design of most algorithmic abstractions, as you want people to be *able* to
access the deeper features as well as the shallower ones through a unified
syntax.  But the question is: which features get higher status in the syntax
(i.e., the feature can be encoded without using other syntax and explained --
*informally* explained, *not* formally explained!  -- without reference to
other features).

Perhaps one way to look at this is: you know how "E in a Walnut" is going to
introduce some concepts earlier than others, and emphasize some concepts more
than others?  The language syntax itself should at least partially reflect the
same concerns, and prioritize and emphasize the same features.  

I hope this way of looking at syntax design is useful to you, even though there
are surely exceptions, countervailing considerations, mistakes in this essay,
and so forth.  The fundamental issue of how to make the syntax acceptable to
potential recruits is of preeminent importance.