Tue May 4 17:43:23 EDT 2010

Much ado

But I'm not really getting anywhere.  I've got a couple of somewhat
meaningful abstractions, but no way to glue them together.

So, practically, what is the problem I want to solve?  Given an opaque
pure function over the Num class, lift it to a structure that accepts
inputs and outputs.  These inputs and outputs are then components of
an array.

Why do I need this?  To combine numeric grids and (stateful) iterative
processes, the basic ingredients of numerical math, abstracting the
kernels as pure functions so they can be examined in other settings.

   - function over Num
   - representation of input / output nodes
   - intermediate node generator

Compose these.
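As a sketch, the three ingredients could be typed like this (all names
are mine, purely illustrative):

```haskell
type Node = String

-- 1. an opaque function over Num (only its Num operations are visible)
type Kernel a = [a] -> [a]          -- used with a Num a constraint

-- 2. input / output nodes of the lifted structure
data Interface = Interface { inputs :: [Node], outputs :: [Node] }

-- 3. an intermediate node generator: fresh names from a counter
freshNode :: Int -> (Node, Int)
freshNode n = ("r" ++ show n, n + 1)
```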

Problem (already solved): when the function is opaque, you lose node
sharing information.  This can be rebuilt (and more) using a term
equivalence relation.

Starting from more explicit composition information, node sharing
information can be recovered directly without the use of tricks.
Since I need the opaque representation, it might be best to stick with
node sharing recovery.
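A minimal sketch of that recovery (all names mine): use a Term type
with derived structural equality as the equivalence relation, run the
opaque function on variables via its Num instance (this is the
abstract evaluation), then number each equivalence class of subterms
exactly once.

```haskell
import qualified Data.Map as M

-- Terms built by abstract evaluation of an opaque Num function.
data Term = Var String | Add Term Term | Mul Term Term | Neg Term
  deriving (Eq, Ord, Show)

instance Num Term where
  (+) = Add
  (*) = Mul
  negate = Neg
  a - b = Add a (Neg b)
  fromInteger = Var . show
  abs    = error "abs: not needed for this sketch"
  signum = error "signum: not needed for this sketch"

-- Recover sharing: one id per equivalence class of subterms.
number :: Term -> M.Map Term Int -> M.Map Term Int
number t m = case M.lookup t m of
  Just _  -> m
  Nothing -> ins (sub t)
  where
    ins m' = M.insert t (M.size m') m'
    sub (Var _)   = m
    sub (Neg a)   = number a m
    sub (Add a b) = number b (number a m)
    sub (Mul a b) = number b (number a m)

-- An opaque kernel where a*b is computed twice; the recovered
-- node set has 4 entries (a, b, a*b once, and the add), so the
-- lost sharing comes back out of the equivalence relation.
dup :: Num a => [a] -> [a]
dup [a, b] = [a*b + a*b]
dup _      = error "dup: two inputs"

shared :: M.Map Term Int
shared = foldl (flip number) M.empty (dup (map Var ["a", "b"]))
```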

So, today I went all over the place, but because I don't have full
control over composition (as it is hidden behind opaque function
objects) there is no real use for all these different representations,
right?  Or could such a more explicit representation be built as the
result of abstract evaluation?

Practically, it is probably best to model the C code generator in
Arrow form with explicit "output inputs", and an explicit node
generator.
So "compile" yields:

type Input = String
type Output = String
compile :: Num a => ([a] -> [a]) -> [Input] -> [Output] -> CodeGen

Where CodeGen takes care of internal node generation.
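A sketch of such a compile, with two simplifications that are mine,
not decided above: the kernel is taken monomorphically at a Term type
(to avoid rank-2 types), and CodeGen is collapsed to a plain list of C
statement strings, with internal nodes named r0, r1, ... in numbering
order.

```haskell
import qualified Data.Map as M
import Data.List (intercalate, sortOn)

data Term = Var String | Add Term Term | Mul Term Term | Neg Term
  deriving (Eq, Ord)

instance Num Term where
  (+) = Add
  (*) = Mul
  negate = Neg
  a - b = Add a (Neg b)
  fromInteger = Var . show
  abs    = error "abs: not needed for this sketch"
  signum = error "signum: not needed for this sketch"

-- Number every distinct non-variable subterm, left to right
-- (sharing recovery); variables keep their own names.
number :: Term -> M.Map Term Int -> M.Map Term Int
number (Var _) m = m
number t m = case M.lookup t m of
  Just _  -> m
  Nothing -> M.insert t (M.size m') m'
    where m' = case t of
                 Neg a   -> number a m
                 Add a b -> number b (number a m)
                 Mul a b -> number b (number a m)
                 Var _   -> m

name :: M.Map Term Int -> Term -> String
name _ (Var x) = x
name m t       = "r" ++ show (m M.! t)

-- One declaration + assignment per intermediate node.
stmt :: M.Map Term Int -> Term -> String
stmt m t = "float " ++ name m t ++ " = " ++ rhs ++ ";"
  where rhs = case t of
          Add a b -> "add(" ++ name m a ++ ", " ++ name m b ++ ")"
          Mul a b -> "mul(" ++ name m a ++ ", " ++ name m b ++ ")"
          Neg a   -> "negate(" ++ name m a ++ ")"
          Var _   -> error "stmt: variables are not assigned"

compile :: ([Term] -> [Term]) -> [String] -> [String] -> [String]
compile f ins outs = body ++ copies
  where results = f (map Var ins)
        m       = foldl (flip number) M.empty results
        body    = [stmt m t | (t, _) <- sortOn snd (M.toList m)]
        copies  = zipWith (\o t -> o ++ " = " ++ name m t ++ ";") outs results

-- Complex multiply as the opaque kernel.
cmul :: Num a => [a] -> [a]
cmul [ar, ai, br, bi] = [ar*br - ai*bi, ar*bi + ai*br]
cmul _                = error "cmul: four inputs"
```

Then compile cmul ["ar","ai","br","bi"] ["out0","out1"] yields the
same statement sequence as the hand-written listing further down in
this entry.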

Why do I get the feeling that there are too many arbitrary choices,
and that I'm not getting the real idea?  Maybe this just needs a
distinction between assign and define (alloc), where assignment to
variables is allowed once (alloc'ed by caller).

Block [Input] [Output] [Temp] [Assigns]

Moral: allocation needs to be made explicit (in the form of
declaration).  The basic unit will then become the block.

The Assigns are essentially "block calls", i.e. macro nesting or
procedure calls are possible there.
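One possible concrete shape for that, where an Assign is either a
call or a plain copy, rendered in declare-then-assign style (the
constructor names and the copy case are my guesses):

```haskell
import Data.List (intercalate)

type Input  = String
type Output = String
type Temp   = String

-- An Assign is a "block call": a primitive / procedure call, or a
-- plain copy (itself a one-output operation).
data Assign = Call String String [String]   -- dst = op(args)
            | Copy String String            -- dst = src

data Block = Block [Input] [Output] [Temp] [Assign]

-- Render the block body: temps are alloc'ed (declared) once up
-- front, then each variable is assigned exactly once.
render :: Block -> [String]
render (Block _ _ temps assigns) =
     ["float " ++ intercalate ", " temps ++ ";" | not (null temps)]
  ++ map go assigns
  where
    go (Call d op as) = d ++ " = " ++ op ++ "(" ++ intercalate ", " as ++ ");"
    go (Copy d s)     = d ++ " = " ++ s ++ ";"

-- A tiny example block: one temp, one call, one output copy.
example :: Block
example = Block ["ar", "br"] ["out0"] ["r0"]
                [Call "r0" "mul" ["ar", "br"], Copy "out0" "r0"]
```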

float fun(float dummy) {
    float r0 = mul(ar, br);
    float r1 = mul(ai, bi);
    float r2 = negate(r1);
    float r3 = add(r0, r2);
    float r4 = mul(ar, bi);
    float r5 = mul(ai, br);
    float r6 = add(r4, r5);
    out0 = r3;
    out1 = r6;
}


float fun(float dummy) {
    float r0, r1, r2, r3, r4, r5, r6;
    r0 = mul(ar, br);
    r1 = mul(ai, bi);
    r2 = negate(r1);
    r3 = add(r0, r2);
    r4 = mul(ar, bi);
    r5 = mul(ai, br);
    r6 = add(r4, r5);
    out0 = r3;
    out1 = r6;
}

The main translation is then memoterm -> block, given input and output
names.  Actually, assignment is already a block (proc?) operation with
multiple inputs and one output.

Essentially, all primitive functions are trivially lifted to
application + assignment, with allocation performed explicitly
somewhere else.
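That lifting is almost nothing in code; a sketch (names mine), with
the destination assumed to be alloc'ed by the enclosing block:

```haskell
import Data.List (intercalate)

type Node = String

-- dst = op(args): application plus assignment, no allocation.
data Assign = Assign Node String [Node]

-- Lift a named primitive of any arity to application + assignment.
liftPrim :: String -> [Node] -> Node -> Assign
liftPrim op args dst = Assign dst op args

-- Render one assignment as a C statement.
cAssign :: Assign -> String
cAssign (Assign dst op args) =
  dst ++ " = " ++ op ++ "(" ++ intercalate ", " args ++ ");"
```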