Sun Mar 14 13:58:57 CET 2010


What about this:

  1. Build the "specification by compiler" semantics as a result of:

     - Exactly specify the target machine semantics in terms of logic
       operations and registers as a tagless specification.

     - Provide multiple interpretations: compile, interpret, partially
       evaluate.

     - Map Forth -> machine language directly.  This mapping
       (specification by compiler) does _not_ perform any reductions
       by itself: all reductions are a consequence of the exact target
       semantics.
  2. Relate this semantics to a simplified semantics of Forth in
     terms of an idealized machine.

Then use testing or possibly proving to compare the two semantics.
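As a quick Python sketch of the "one tagless specification, multiple
interpretations" idea (all names and encodings invented here; the real
specification would live at the logic/register level):

```python
class Interp:
    """Direct interpretation: run the operations on a data stack."""
    def lit(self, n, stack):  return stack + [n]
    def add(self, stack):     return stack[:-2] + [stack[-2] + stack[-1]]

class Compile:
    """Compiling interpretation: emit symbolic machine-like code."""
    def lit(self, n, code):   return code + [("push", n)]
    def add(self, code):      return code + [("add",)]

def program(sem, state):
    """The specification: written once, against the abstract operations."""
    state = sem.lit(1, state)
    state = sem.lit(2, state)
    state = sem.add(state)
    return state

def run(code, stack):
    """Execute compiled code on a stack machine, to relate the semantics."""
    for op in code:
        if op[0] == "push":
            stack = stack + [op[1]]
        elif op[0] == "add":
            stack = stack[:-2] + [stack[-2] + stack[-1]]
    return stack

# Testing relates the two semantics: interpreting the program directly
# should agree with compiling it and then running the code.
assert program(Interp(), []) == run(program(Compile(), []), [])
```

The comparison in the last line is exactly the "use testing to compare
the two semantics" step, just at toy scale.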

By only using the exact target semantics to perform optimizations, it
should be possible to exactly identify each "fuzzy" translation.

So the question is: is this useful?  Partial evaluation of machine
language?  It seems that all the work is then shifted to the partial
evaluation step: deciding which reductions are possible.  It probably
becomes increasingly difficult to specify correct reductions at that
level.
I.e. in Forth it's easy to see that 1 + 2 + is the same as 1 2 + +,
but once the stack is no longer explicit (i.e. present only as a
pointer register), how do you express this equivalence?  On the
machine level there is a big difference between the footprint of
spilling intermediate results to memory and performing the
optimization at compile time, but at the level of the source language
semantics we don't care.
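The source-level equivalence is trivial to check with an explicit
stack.  A minimal Forth-style evaluator in Python (purely illustrative):

```python
def forth(words, stack):
    """Tiny Forth evaluator over an explicit data stack."""
    for w in words:
        if w == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            stack.append(int(w))
    return stack

for x in range(-5, 6):
    # x 1 + 2 +  computes x+3, and so does  x 1 2 + +
    assert forth("1 + 2 +".split(), [x]) == \
           forth("1 2 + +".split(), [x]) == [x + 3]
```

Once the stack is only a pointer register, the same equivalence has to
be phrased in terms of loads, stores and register contents, which is
exactly where it stops being easy to see.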

This needs a machine model with some abstract additions, such that
additional rewrite rules can be proven to maintain a certain 'view' of
the semantics.
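A hedged sketch of what "rewrite rules that maintain a view" might
mean: a peephole rule on symbolic machine code, checked by testing
(not proof) to preserve the observable stack effect.  The code
encoding here is invented for illustration:

```python
def run(code, stack):
    """Reference machine semantics: the 'view' to be preserved."""
    for op in code:
        if op[0] == "push":
            stack = stack + [op[1]]
        elif op[0] == "add":
            stack = stack[:-2] + [stack[-2] + stack[-1]]
    return stack

def rewrite(code):
    """One rule: fold  push a; push b; add  into  push (a+b)."""
    out = []
    for op in code:
        if (op == ("add",) and len(out) >= 2
                and out[-1][0] == "push" and out[-2][0] == "push"):
            b, a = out.pop()[1], out.pop()[1]
            out.append(("push", a + b))
        else:
            out.append(op)
    return out

# "1 2 + +" as machine code: the rule reduces it to "3 +".
code = [("push", 1), ("push", 2), ("add",), ("add",)]
assert rewrite(code) == [("push", 3), ("add",)]

# The rewrite preserves the view: same stack effect for any input.
for x in range(-5, 6):
    assert run(code, [x]) == run(rewrite(code), [x])
```

Proving rather than testing this preservation, against the exact
target semantics instead of this toy one, is the hard part the note
is circling around.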

                     Squinting at machine code?