Tue Jun 17 23:39:50 CEST 2008


Why am I doing this? It is really about the language, about its
algebraic feel. Maybe I should be honest and keep that as the only
real reason. It's like Legos. It clicks.

Then there are explanations of why I might like it:

  * Concatenative languages span a wide spectrum in a useful way. This
    allows me to use similar paradigms from the very low to the very
    high level.

  * One can get far without closures (which take the form of curried
    quotations created at run-time).

  * Partial evaluation is simple for functional concatenative
    languages: scopes don't get in your way.

  * An imperative concatenative language can have a large functional
    subset.

  * Linear memory management becomes non-intrusive.
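The "curried quotations created at run-time" idea from the list above can be sketched concretely. Below is a toy Python model of a concatenative evaluator; all names (`run`, `curry`, `call`) and the choice to represent quotations as Python lists are my own hypothetical illustration, not any existing system's API:

```python
# Toy concatenative evaluator: a program is a list of words
# (Python callables) that each transform a data stack in place.

def run(program, stack):
    for word in program:
        word(stack)
    return stack

def push(value):
    """Literal: make a word that pushes a constant."""
    return lambda stack: stack.append(value)

def add(stack):
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)

def curry(stack):
    """Pop a value and a quotation; push a new quotation with the
    value prepended.  This builds a 'closure' at run time without
    any variable binding or environment capture."""
    quot = stack.pop()
    val = stack.pop()
    stack.append([push(val)] + quot)

def call(stack):
    """Pop a quotation and run it on the current stack."""
    quot = stack.pop()
    run(quot, stack)

# ( 3 [add] curry ) builds the quotation [3 add], i.e. "add 3".
stack = run([push(3), push([add]), curry], [])
add3 = stack.pop()
print(run([push(10), push(add3), call], [])[0])  # -> 13
```

The point of the sketch: `curry` only prepends a literal to a word list, so there is no scope chain to manage, which is what makes this form of "closure" cheap.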

Some fragments of correspondence. Question about breaking new ground
with the Staaple approach:

Well, I've known for a while now that I need some form of compile-time
program specialization that can turn higher order functions into
specialized first order loops. The real question is how to simplify the
programming language such that the problem of writing the compiler can
be solved by me in a limited time. Doing it only with an interpreter +
specialized manually crafted C core routines (like PF, Pure Data,
SuperCollider, Matlab, ...) is not powerful enough.
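To illustrate the kind of specialization meant here, a minimal Python sketch (the `specialize_map` helper is entirely hypothetical, not part of any of the systems named above): a higher order `map` with a known body expression is turned into generated first order loop code at "compile time":

```python
# Hypothetical sketch: instead of calling map(f, xs) through a
# function pointer at run time, generate a first-order loop with
# the body inlined, then compile it once.

def specialize_map(body_expr, fname):
    """Generate source for a first-order loop, splicing body_expr
    (an expression over the variable x) in place of the call."""
    src = (
        f"def {fname}(xs):\n"
        f"    out = []\n"
        f"    for x in xs:\n"
        f"        out.append({body_expr})\n"
        f"    return out\n"
    )
    env = {}
    exec(src, env)   # compile the specialized loop
    return env[fname]

map_square = specialize_map("x * x", "map_square")
print(map_square([1, 2, 3]))  # -> [1, 4, 9]
```

The generated function contains no higher order call at all, which is the payoff: the specializer does once, ahead of time, what an interpreter would otherwise re-dispatch on every element.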

Untyped lambda calculus is too general to solve the problem with a simple
compiler. Typed lambda calculus works better but such a language is not so
straightforward to implement. So I'm looking at something first-order with
higher order macros, closer in spirit to APL, Backus' FP and C++
templates than lisp.

The only ground I broke is that I ended up with a non-intrusive way to
combine compile time operations and run time operations in one language
without semantic problems, simply by taking a functional programming view
where evaluation time might be thought of as unspecified.

Concatenative macros are a very natural way to do template programming,
because name bindings don't get in the way. Concatenative form can also
be easily transformed to nested expression form, so when I need data
flow analysis I can do it; but for some program transformations it's
really easier to keep it concatenative. Code is more 'algebraic' and
less 'logic' in that form, if that makes sense at all... Lists instead
of trees.
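That postfix-to-tree transformation is mechanical precisely because there are no name bindings to track. A small Python sketch of both directions (the arity table and function names are my own assumptions; stack-effect words like `dup` with multiple outputs are deliberately left out, since they don't fit a simple tree):

```python
# Convert flat concatenative (postfix) code to a nested expression
# tree and back.  Each word's input arity is assumed known.

ARITY = {"+": 2, "*": 2}  # hypothetical word set

def postfix_to_tree(words):
    """Fold a flat word list into nested (op, args...) tuples."""
    stack = []
    for w in words:
        if isinstance(w, int):
            stack.append(w)
        else:
            n = ARITY[w]
            args = tuple(stack[-n:])
            del stack[-n:]
            stack.append((w,) + args)
    return stack.pop()

def tree_to_postfix(tree):
    """Flatten a nested tuple back into a word list."""
    if not isinstance(tree, tuple):
        return [tree]
    op, *args = tree
    return [w for a in args for w in tree_to_postfix(a)] + [op]

t = postfix_to_tree([2, 3, "+", 4, "*"])
print(t)                   # -> ('*', ('+', 2, 3), 4)
print(tree_to_postfix(t))  # -> [2, 3, '+', 4, '*']
```

"Lists instead of trees": the flat form on the last line and the nested form above it carry the same program, and the translation needs no environment, only arities.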

What I have now is still manual: there is no automatic loop inlining
happening. I'd like to figure out whether this is necessarily a part of
the language (a 1st order language + some 2nd order functionals) or
whether I can automate it, so it becomes a language with higher order
semantics preserved in case the optimization doesn't apply.

So, while I find it interesting, I am getting into territory where I
should be careful not to be too general, and try to stick to the
problem of making a language which is very close to machine language,
but has access to higher order constructs. I'm already there: the
macroassembler-on-steroids idea:
for PIC18 the bottom layer concatenative language almost maps 1-1 to
assembler. It's the automatic code juggling part on top of it that is
giving me trouble.

About writing beginner languages:

I did some workshops with Forth now, and I find people pick it up pretty
fast. The real problem is not the language, though. Some languages go
smoother in the beginning than others, but I found the real problem to
be the point where you leave simple scripting (filling in parameter
values) and code composition enters the picture: how to divide and
conquer. I think a beginner language should stretch the scripting part
as long as possible, but I sort of gave up on that idea. It only makes
hitting the abstraction wall more painful. What I got a bit discouraged
about is that more often than not, no matter what you try, people like
to stay in that scripting area. I don't know if there's a way to trick
people into crossing that barrier unknowingly. Did you run into
something like this with Scheme?