[<<][compsci][>>][..]
Sun Mar 7 22:56:47 CET 2010

Writing compilers

The basic idea is that a compiler is a function, typically
implemented as a collection of rewrite rules, that relates one
syntactic domain to another.

To verify the correctness of this ``arrow'', the language needs a
formal model - another arrow - to compare it against.

                comp
    language   ----->  machine code

       |                   |
       | definitional      | machine code
       | interpreter       | interpreter
       v                   v

   abstract domain (i.e. Haskell functions)
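The diagram can be made concrete as a minimal Haskell sketch, using a
toy expression language and a toy stack machine (all names and
instructions below are made up for illustration; they are not Staapl's
actual languages):

```haskell
-- Toy source language and its definitional interpreter (left arrow).
data Expr = Lit Int | Add Expr Expr

evalExpr :: Expr -> Int
evalExpr (Lit n)   = n
evalExpr (Add a b) = evalExpr a + evalExpr b

-- Toy machine code and its interpreter (right arrow),
-- operating on a value stack.
data Instr = Push Int | AddTop

runCode :: [Instr] -> [Int] -> [Int]
runCode []            s           = s
runCode (Push n : is) s           = runCode is (n : s)
runCode (AddTop : is) (a : b : s) = runCode is (a + b : s)
runCode (AddTop : _)  _           = error "stack underflow"

-- The compiler (top arrow).
comp :: Expr -> [Instr]
comp (Lit n)   = [Push n]
comp (Add a b) = comp a ++ comp b ++ [AddTop]

-- The diagram commutes iff, for every expression e:
--   runCode (comp e) [] == [evalExpr e]
```

Correctness of `comp` is then exactly the statement that following the
top-then-right path gives the same element of the abstract domain as
the left path.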

That's my idea.  How does [1] do it?  In each pass (14 passes over 8
intermediate languages) the semantics of the languages involved is
defined, and the transformation is proven to preserve it.

I am only interested in one pass: the reductions used in the PIC18
instruction selection.  It looks like I need a Hoare logic[2] for the
machine opcodes, i.e. some kind of typing rules describing the state
transitions and how they compose.
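A sketch of what such triples might look like in Haskell, over a
hypothetical and heavily simplified PIC18 machine state (just the W
register and the zero flag; the real opcode semantics are much
richer):

```haskell
-- Simplified machine state: the W register and the zero flag.
data M = M { w :: Int, zf :: Bool } deriving (Eq, Show)

type Pred = M -> Bool

-- A Hoare triple {pre} op {post} packaged with its state transition.
data Triple = Triple { pre :: Pred, op :: M -> M, post :: Pred }

-- Check the contract on one concrete state: pre implies post-after-op.
holdsOn :: Triple -> M -> Bool
holdsOn (Triple p f q) m = not (p m) || q (f m)

-- Sequential composition: from {P} f {Q} and {Q'} g {R} build
-- {P} f;g {R}.  This is sound only when Q implies Q'; a
-- checker/inferencer would have to verify that side condition.
compose :: Triple -> Triple -> Triple
compose (Triple p f _) (Triple _ g r) = Triple p (g . f) r

-- Example opcode: ADDLW k adds a literal to W and sets the zero flag.
addlw :: Int -> Triple
addlw k = Triple (const True)
                 (\m -> let w' = w m + k in M w' (w' == 0))
                 (\m -> zf m == (w m == 0))
```

The typing rules mentioned above would then be rules for when
`compose` is allowed, instead of the unchecked version here.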

So, my question is: in the spirit of tagless interpreters, is it
possible to write down the Staapl PIC18 reduction rules such that:

  * PIC18 instruction semantics is encoded in the types as much as
    possible, i.e. the data dependencies.

  * The remaining semantics (i.e. whether an operation adds or
    subtracts) is encoded in behaviour, and can be verified with
    QuickCheck.

Doing this in Scheme, as an alternative to encoding it in the
Haskell/OCaml type system, is probably also possible, but then I would
need to write a checker/inferencer myself.
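The two bullets above can be sketched in tagless-final style: a type
class fixes the data dependencies in its types, and each instance
supplies behaviour that can then be property-tested.  The class and
instruction names below are invented for illustration, not Staapl's
actual reduction rules:

```haskell
-- Tagless-final signature: data dependencies live in the types.
class Pic18 r where
  lit :: Int -> r Int            -- load a literal
  add :: r Int -> r Int -> r Int
  sub :: r Int -> r Int -> r Int

-- One interpretation: direct evaluation (the reference semantics).
newtype Eval a = Eval { runEval :: a }

instance Pic18 Eval where
  lit n = Eval n
  add (Eval a) (Eval b) = Eval (a + b)
  sub (Eval a) (Eval b) = Eval (a - b)

-- Another interpretation: pretty-print an instruction sequence.
newtype Show18 a = Show18 { runShow :: String }

instance Pic18 Show18 where
  lit n = Show18 ("MOVLW " ++ show n)
  add (Show18 a) (Show18 b) = Show18 (a ++ "; " ++ b ++ "; ADDWF")
  sub (Show18 a) (Show18 b) = Show18 (a ++ "; " ++ b ++ "; SUBWF")

-- The remaining semantics is behavioural: a QuickCheck-style
-- property stating that add really adds (QuickCheck would feed
-- this random inputs).
prop_add :: Int -> Int -> Bool
prop_add a b = runEval (add (lit a) (lit b)) == a + b
```

The type system rules out ill-formed terms by construction, while
properties like `prop_add` cover what the types cannot express.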


[1] http://compcert.inria.fr/doc/index.html
[2] http://en.wikipedia.org/wiki/Hoare_logic


