Sun Feb 3 10:43:19 CET 2008

reflection

i was thinking yesterday about macro unification, and wondered whether
it might be better to go back to the accumulative model for name
resolution / redefinition.

the main problem before was that compilation of code had side
effects (definition of new macros in the NS hash), which made it
impossible to evaluate code for its value only. however, there is
probably a way to bring this accumulative behaviour back, by taking
the assembler into the loop: let the asm 'register' the macros.
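
as a toy scheme sketch of that idea (all names made up, nothing from
the actual code): make compilation a pure function that returns the
macros it defines alongside the code, and let the caller (the
assembler) commit them.

    ;; compile-chunk: chunk, macro-alist -> (values code macro-alist)
    ;; items tagged 'macro' define a new macro; everything else
    ;; passes through as code.  no global table is mutated, so code
    ;; can still be evaluated for its value only.
    (define (compile-chunk chunk macros)
      (let loop ((items chunk) (code '()) (new macros))
        (cond
          ((null? items) (values (reverse code) new))
          ((and (pair? (car items)) (eq? 'macro (caar items)))
           (loop (cdr items) code (cons (cdar items) new)))
          (else (loop (cdr items) (cons (car items) code) new)))))

    ;; the assembler then 'registers' the macros after the fact:
    ;; (compile-chunk '((macro sq . (dup *)) dup +) '())
    ;; => (values '(dup +) '((sq dup *)))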

the REAL problem i'm trying to solve is still macro-generating
macros and the generation of parsing words. both are opposed to a
declarative code model, but in the end, the model isn't declarative
at all.. it's a bit of a mess in my head now.

GOAL:

      i need macro-generating macros: limiting the reflective tower in
      any way will always feel artificial.

how to do that?

      * accumulative (image model) is the simplest, and the original
        way of dealing with this problem. however, it doesn't give a
        static language.

      * declarative (language layer model) is the cleanest way of
        doing this, but requires some overhead that might look like
        overkill. (the sketch below contrasts the two.)
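
to make the two concrete, a toy scheme contrast (hypothetical names,
not the real api): the image model threads one mutable dictionary
through everything; the layer model makes each layer a pure function
from environment to environment.

    ;; image model: a global dictionary, mutated as code compiles.
    (define *dict* (make-hash))
    (define (image-define! name val) (hash-set! *dict* name val))

    ;; layer model: a layer is env -> env, and a program is a
    ;; composition of layers.  nothing is mutated; redefinition is
    ;; shadowing in the appended environment.
    (define (layer . bindings)
      (lambda (env) (append bindings env)))
    (define (stack . layers)
      (lambda (env) (foldl (lambda (l e) (l e)) env layers)))

    ;; ((stack (layer '(x . 1)) (layer '(x . 2))) '())
    ;; => ((x . 2) (x . 1))   ; lookup finds the newest x first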


can we have both? the declarative approach needs s-expr syntax to be
manageable. it won't be Forth any more..

let's see.. image model: simplest, highly reflective forth
paradigm. declarative: cleanest for metaprogramming purposes.

i guess i need to isolate the exact location of the paradigm
conflict. what do i want, really?

GOALS:

  * generating new names (macros) should be possible within forth
    code. currently, the only ways are the words ':' and 'variable'.

  * cross reference should be possible. this currently works for
    macros, which use a two-pass algorithm (gather macros first,
    then compile the code), and for procedure words, also thanks to
    a two-pass algorithm (the ordinary assembler). see the sketch
    after this list.

  * linearity in chunks should be possible, which is the current
    model.
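
the two-pass idea from the cross-reference goal, as a toy scheme
sketch (hypothetical representation, not the real assembler): pass 1
collects the defined names, pass 2 compiles each body against the
complete name set, so forward references resolve just like backward
ones.

    ;; defs is a list of (name . body-words).
    (define (compile-file defs)
      ;; pass 1: gather all defined names.
      (define names (map car defs))
      ;; pass 2: compile bodies with the full table in view.
      (map (lambda (def)
             (cons (car def)
                   (map (lambda (word)
                          (if (memq word names)
                              (list 'call word)    ; cross-ref
                              (list 'prim word)))  ; assumed primitive
                        (cdr def))))
           defs))

    ;; (compile-file '((quad square square) (square dup *)))
    ;; => ((quad (call square) (call square))
    ;;     (square (prim dup) (prim *)))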

questions from this:

  - is it possible to unify the 2 different ways of employing a
    2-pass algorithm for cross-references?

  - how to move from a fixed 2-layer architecture (macros + words) to
    an n-layer architecture? is this doable without a language tower?
    is it desirable? (is reflection really that bad? does it conflict
    with automatic cross-reference?)


the more i let this roll around, the more one solution stands out:
split the problem into 2 languages. use a reflective forth which
'unrolls' into a layered language description, and a static layered
s-expression based language that uses the same macro core.

this gives the convenience of forth syntax and the reflective
paradigm, and at the same time the flexibility to fall back on the
language tower when reflection is too difficult to get right, or the
automatic layering doesn't work..

so, the current question becomes: can the GOALS be kept by moving back
to a completely reflective machine (including parser!) which unrolls
automatically?

remark: it looks as if i really need the equivalent of 'define',
which would really be 'let'.. it all seems to boil down to scope
(Scope is everything!). a forth file should be transformable into a
collection of definitions and macro definitions. it probably makes
a lot more sense to see the dictionary as an environment which
implements the name -> value map of a nested lambda expression.
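
a hypothetical sketch of that transformation: a parsed forth file, a
list of (name . body) definitions, denotes one letrec, and the
dictionary is exactly that letrec's environment.

    ;; file->scope: turn a definition list into the nested lambda
    ;; expression it denotes.  cross references work because letrec
    ;; binds all names mutually.
    (define (file->scope defs body)
      `(letrec ,(map (lambda (d) (list (car d) (cdr d))) defs)
         ,body))

    ;; (file->scope '((square dup *) (quad square square)) '(quad))
    ;; => (letrec ((square (dup *))
    ;;             (quad (square square)))
    ;;      (quad))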

let's see..

   the current model (macros are compositional functions) is really
   good. the remaining problem is scope: when to nest (let*) and when
   to cross-ref (letrec).
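
the difference in plain scheme: let* gives the linear, shadowing
behaviour (a redefinition sees only what came before it); letrec
gives mutual cross reference.

    ;; let*: each binding sees only the ones above it, so a
    ;; redefinition shadows the previous one.  the linear model.
    (let* ((x 1)
           (x (+ x 1)))  ; this x refers to the previous x
      x)                 ; => 2

    ;; letrec: bindings see each other, so forward references
    ;; work.  the cross-ref model.
    (letrec ((even? (lambda (n) (if (zero? n) #t (odd? (- n 1)))))
             (odd?  (lambda (n) (if (zero? n) #f (even? (- n 1))))))
      (even? 10))        ; => #t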

another idea.. instead of looking from the leaf nodes and building a
dependency tree, what about starting from the root (kernel) node and
building an inverse dependency tree? the linear model is the
intersection between the two.
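
a toy sketch of that inversion (made-up representation): given an
alist mapping each word to the words it calls, flip the edges to
get, for each word, the words that call it.. the tree seen from the
kernel side.

    ;; deps: ((caller callee ...) ...)
    (define (invert deps)
      (define h (make-hash))
      (for-each (lambda (entry)
                  (for-each (lambda (callee)
                              (hash-update! h callee
                                            (lambda (l)
                                              (cons (car entry) l))
                                            '()))
                            (cdr entry)))
                deps)
      (hash->list h))

    ;; (invert '((quad square) (cube square) (square)))
    ;; => ((square cube quad))   ; square is called by cube and quad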


