Tue May 4 14:47:36 EDT 2010
CONNECT consumes inputs, while outputs can observe internal nodes.
A dataflow network is a _set_ of processors, where each processor
has a set of inputs and a set of outputs. Therefore, a dataflow
network can be _abstracted_ as a processor.
Binding an input (the "connect operator") removes the visibility
of an internal processing node.
output abstraction (internal node hiding):
Since fanout is possible, all intermediate nodes could be outputs.
This requires outputs to be declared explicitly.
input abstraction (default / constant values):
Similarly, inputs could be defaulted.
So, the trick is to see CONNECT as _input elimination_, making binding
well-defined, and taking the connect operation as a transformation of
networks into networks.
The rest seems to be just representation of the input/output/processor
sets.
Note that this definition of CONNECT does allow cycles to be
introduced. These could be eliminated by adding cycle checks, and
inserting decoupling delays.
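A sketch of CONNECT as input elimination (my formulation of the idea above; all names are assumptions): binding input `dst` of processor `q` to output `src` of processor `p` yields a combined processor whose input set has shrunk by one, hiding the internal node.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Processor:
    inputs: List[str]
    outputs: List[str]
    step: Callable[[Dict[str, float]], Dict[str, float]]

def connect(p: Processor, q: Processor, src: str, dst: str) -> Processor:
    """Bind q's input `dst` to p's output `src`: input elimination."""
    assert src in p.outputs and dst in q.inputs
    # The bound port disappears from the visible input set.
    new_inputs = p.inputs + [i for i in q.inputs if i != dst]

    def step(env):
        p_out = p.step({k: env[k] for k in p.inputs})
        q_env = {i: env[i] for i in q.inputs if i != dst}
        q_env[dst] = p_out[src]   # the eliminated input, fed internally
        return q.step(q_env)

    return Processor(new_inputs, q.outputs, step)

adder  = Processor(["a", "b"], ["s"], lambda e: {"s": e["a"] + e["b"]})
scaler = Processor(["x"],      ["y"], lambda e: {"y": 2 * e["x"]})
net = connect(adder, scaler, "s", "x")   # inputs shrink to ["a", "b"]
```

This sketch does no cycle check; as noted above, cycles would have to be rejected explicitly or broken by inserting decoupling delays.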
Now, is it possible to make CONNECT build an SSA / lexical memo
network directly? This seems to require some sorting operation.
I'm trying to write this down in types. The main problem seems to be
node identity.
This seems to be central to Haskell-style programming. It's hard to
create equality between abstract types. This seems to be because of
the need for some external "naming" entity. In Scheme, this naming
entity is just a memory location: each object has an explicit name.
Haskell values don't seem to have such an intrinsic identity.
So, a DFN always needs a node sequencer.
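A minimal sketch of such a sequencer (assumed names; the note gives no code): fresh integer ids play the role that memory locations play in Scheme, giving each node an explicit name.

```python
import itertools

class NodeSequencer:
    """Hands out fresh integer ids so nodes have explicit names."""
    def __init__(self):
        self._counter = itertools.count()
        self.nodes = {}              # id -> node payload

    def fresh(self, payload):
        nid = next(self._counter)
        self.nodes[nid] = payload
        return nid

seq = NodeSequencer()
a = seq.fresh("input")               # gets id 0
b = seq.fresh("input")               # gets id 1
s = seq.fresh(("add", a, b))         # id 2, refers to nodes 0 and 1
```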
So node identity does indeed seem to be the core obstacle.
Is this because I'm ``thinking imperatively''?
Vaguely, whenever this occurs, it seems that I'm trying to introduce
the concept of a variable in a way that is only possible using staging:
generating code and interpreting it. Now, in pure functional
programming it is usually possible to leave those nodes as variables,
allowing a functional representation where node identity is implicit.
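One way to read "leave those nodes as variables" (my sketch, not the note's code): represent the network as a plain function of its inputs, so intermediate nodes are local variables and need no explicit identity; sharing is expressed by ordinary binding.

```python
def network(a, b, c):
    # Intermediate node as a local variable: shared by both outputs,
    # with no node id or sequencer needed.
    t = a + b
    return (t * c, t - c)
```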