Sat Mar 13 11:12:37 CET 2010

Logic simulation: the problem is state.

I'm starting to think that the real problem is abstraction on the meta
level.  You need abstracted semantics and proofs (types) not just
abstracted behaviour.

The point of this post is to make that explicit somehow.

Example: suppose F : (x,y) -> z implements addition by chaining
together a bunch of NAND gates.  What you want is a theorem about the
high-level behaviour (addition) instead of the low-level behaviour
(inverted conditional propagation of binary values).
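To make this concrete, here is a sketch in Python (my construction, not from these notes): a ripple-carry adder built only from NAND gates, with the "high-level theorem" checked by brute force over all 4-bit inputs.

```python
# Sketch: addition implemented by chaining NAND gates.  The names and
# the 9-gate full adder decomposition are standard, but hypothetical
# as far as this note is concerned.

def nand(a, b):
    return 1 - (a & b)

def full_adder(a, b, c):
    # Sum and carry expressed purely in NANDs (9 gates).
    t1 = nand(a, b)
    t2 = nand(a, t1)
    t3 = nand(b, t1)
    s1 = nand(t2, t3)       # s1 = a XOR b
    t4 = nand(s1, c)
    t5 = nand(s1, t4)
    t6 = nand(c, t4)
    s  = nand(t5, t6)       # s  = s1 XOR c
    cout = nand(t1, t4)     # cout = (a AND b) OR (s1 AND c)
    return s, cout

def add(x, y, n=4):
    # Chain n full adders into a ripple-carry adder.
    c, z = 0, 0
    for i in range(n):
        s, c = full_adder((x >> i) & 1, (y >> i) & 1, c)
        z |= s << i
    return z

# The high-level theorem: the NAND network implements addition mod 2^n.
assert all(add(x, y) == (x + y) % 16 for x in range(16) for y in range(16))
```

The exhaustive check is the poor man's version of the theorem; the point of the note is to get such properties without enumerating the state space.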

I start with a functional representation of state machines: we ignore
state by only using networks.  Later we introduce state by 'fixing'
networks to a given state.  So, tattoo this on your forehead:

                        (i,s) -> (o,s)

A machine transition maps an input to an output while changing a
state.  So state itself is nothing special, we just have multi-in
multi-out (MIMO) functions.  Now, with that out of the way, the
remaining problem is to take computations of the form

                           s1 -> s2

that model state transition functions, and chain them together.  More
specifically: how do we manage sub-states and derive static properties
of compositions (clobbering, commutation, ...)?
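As a sketch of the chaining (my naming, in Python): a transition (i,s) -> (o,s) is just a function, and composing two transitions threads the state through, which is exactly MIMO function composition.

```python
# Sketch: a machine transition is a function (i, s) -> (o, s).
# Chaining threads the state through; state is nothing special,
# just extra inputs and outputs.

def chain(f, g):
    def fg(i, s):
        o1, s1 = f(i, s)
        o2, s2 = g(o1, s1)
        return o2, s2
    return fg

# Toy transition: output the old state, add the input to the state.
def step(i, s):
    return s, s + i

two_steps = chain(step, step)
```

Here `step` and `chain` are made up for illustration; the point is only that sequencing transitions is ordinary function composition.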

The fundamental problem in composing MIMO functions is _adapting_ the
inputs and outputs so they can be chained.  I.e. lift the functions to
an appropriate embedding in a larger space.

To make the problem purely about connectivity, allow only a single
primitive operation: the NAND (or NOR) gate.  This will make the
representation language simpler at the expense of larger networks.

The representation will be binary directed graphs.

For NAND networks, the types are really just sets of nodes.  Lifting
types is nothing more than adding additional nodes to the input and
output sets.
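In Python, such a binary directed graph can be sketched as a dict from each internal node to its two fanin nodes (my encoding, hypothetical); the network's type is then just its set of input and output node names.

```python
# Sketch: a NAND network as a binary directed graph.
# node -> (left fanin, right fanin); inputs are the free names.

net = {
    't': ('x', 'y'),
    'z': ('t', 't'),    # nand(t, t) = not t, so z = x AND y
}

def eval_net(net, inputs):
    values = dict(inputs)
    def node(n):
        if n not in values:
            a, b = net[n]
            values[n] = 1 - (node(a) & node(b))
        return values[n]
    for n in net:
        node(n)
    return values
```

Evaluation is by demand-driven recursion, so no topological sort of the graph is needed.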

So what is necessary?  

  * STAGING: Distinguish between functions (i.e. network macros, stage
    1 entities) and instances (connections between nodes, stage 2
    entities).
  * NAMING: Both nodes and template I/O are named. These are different
    entities.  The latter are type names (stage 1) while the former
    are instance names (stage 2).
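The staging/naming split can be sketched like this (all names mine): a template's port names are stage 1 type names, and instantiation is nothing more than mapping them into the stage 2 instance namespace, with internal nodes kept apart by a prefix.

```python
# Sketch: a network template (stage 1) with formal port names,
# instantiated (stage 2) by renaming ports to concrete node names.

AND = {'t': ('a', 'b'), 'o': ('t', 't')}   # template: o = a AND b

def instantiate(template, rename, prefix):
    # Port names map through `rename`; internal nodes get `prefix`.
    def r(n):
        return rename.get(n, prefix + n)
    return {r(n): (r(a), r(b)) for n, (a, b) in template.items()}

# Two AND instances sharing input 'x', chained through node 'm'.
net = {}
net.update(instantiate(AND, {'a': 'x', 'b': 'y', 'o': 'm'}, 'u0_'))
net.update(instantiate(AND, {'a': 'm', 'b': 'x', 'o': 'z'}, 'u1_'))
```

Note that the "instance" is never a separate data structure: it only exists as the renamed copy of the template, which is the trick claimed below.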

Now the trick is: there are no instances!

Cf. Haskell: there is no state!  Only transitions.

I.e. an instance is a template for which some of the external (and
internal!) nodes are named, turning a problem that analyses state
(those nodes) into one that analyses state transitions
(computational paths between nodes).