Tue Apr 2 10:09:27 EDT 2013

Setup code: duplicate input network

The solution seems to be to simply duplicate the whole input network.

However, this only needs to happen when the input network is a signal,
not when it is a parameter.

So I wonder: where is the problem actually created?
External inputs are used in two different ways: as param and as signal.


What about this (a rough sketch follows below):
- construct a lazy version of the dependency network.
- gather all state setup output nodes and local binding nodes
- force calculation of these -> creates all pre-loop bindings
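
A minimal sketch of that idea in Racket (node names and helpers are
made up, not the actual representation): node values are promises, so
the setup pass can force just the state setup and local nodes,
creating the pre-loop bindings without touching the per-sample work.

  #lang racket
  ;; Hypothetical sketch: a node is a name plus a promise over its deps.
  (struct node (name promise) #:transparent)

  (define (make-node name deps op)
    (node name (delay (apply op (map force-node deps)))))

  (define (force-node n)
    (force (node-promise n)))

  ;; Toy network: 'gain' depends only on the parameter 'p', so it is a
  ;; pre-loop (setup) binding; 'out' is per-sample work.
  (define p    (make-node 'p    '()            (lambda () 0.5)))
  (define gain (make-node 'gain (list p)       (lambda (pv) (* 2 pv))))
  (define in   (make-node 'in   '()            (lambda () 1.0)))
  (define out  (make-node 'out  (list gain in) (lambda (g x) (* g x))))

  ;; Forcing only the setup nodes creates the pre-loop bindings;
  ;; 'out' stays unevaluated.
  (force-node gain)  ; => 1.0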


Hmm... needs some more experimentation.

Looking at the stream semantics, what really happens is that at each
feedback function, the inputs are treated as streams and subsampled.
This needs to be done in exactly the same way in the array language.

How to implement subsampling?
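
As a rough illustration (not the ai-stream.rkt implementation), in
stream terms subsampling is just keeping one element out of every n,
which is what the inputs of a feedback function see at the lower rate:

  #lang racket
  (require racket/stream)

  ;; Keep one element out of every n.
  (define (subsample n s)
    (stream-cons (stream-first s)
                 (subsample n (stream-tail s n))))

  (define nats (sequence->stream (in-naturals)))

  (for/list ([x (in-stream (subsample 4 nats))]
             [_ (in-range 5)])
    x)
  ;; => '(0 4 8 12 16)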

Two problems are interfering:

- State setup, and keeping track of state variables so the proper
  loops can be generated in C.

- Subsampling of inputs.

Tough..

What does the C code generator need?  It just needs node names.

s,i,p,t_endx   # user-provided input + state from previous run
s,i,p,l        # state setup nodes + intermediary 
  t            # loop index
  s,o          # output and state output
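
To make that concrete, a hypothetical sketch (made-up names, not the
actual generator) of a Racket routine that only sees node names,
grouped per class, and prints the C loop skeleton around them:

  #lang racket
  ;; Hypothetical sketch: the generator only needs node names per
  ;; class; the actual expressions are filled in elsewhere.
  (define (emit-skeleton setup-nodes update-nodes outputs)
    (printf "void proc(float *s, float *i, float *p, float *o, int t_end) {\n")
    (for ([n setup-nodes])
      (printf "  float ~a = /* setup expr */;\n" n))
    (printf "  for (int t = 0; t < t_end; t++) {\n")
    (for ([n update-nodes])
      (printf "    float ~a = /* update expr */;\n" n))
    (for ([o outputs])
      (printf "    /* store ~a into o[t] / s[] */\n" o))
    (printf "  }\n}\n"))

  (emit-skeleton '(l0 l1) '(a0 a1) '(a1))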


So I cleaned up the t-lambda form a bit.  Now for the real work.

The problem: we're interleaving the creation of the setup and update
bindings, which leads to some of the setup operations depending on
update bindings.

What needs to happen is that these 2 phases need to be separated,
probably by creating the 2 functions (setup and update) explicitly.

What about this: create a semantics that splits the setup and update
functions into two different programs with some metadata?
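
As a toy sketch of that split (names and representation are made up,
not the actual compiler output): the result could be a struct holding
a setup function, an update function, and the shared state layout as
metadata.

  #lang racket
  ;; Hypothetical sketch: two programs plus metadata.
  (struct compiled (state-names setup update) #:transparent)

  ;; Toy one-pole filter: the coefficient is computed once in setup
  ;; (from a parameter) and reused by every update step.
  (define lowpass
    (compiled
     '(a y)                               ; metadata: state layout
     (lambda (p) (vector (- 1 p) 0))      ; setup: param -> initial state
     (lambda (st x)                       ; update: state, input -> state, output
       (define a (vector-ref st 0))
       (define y (+ (* a (vector-ref st 1))
                    (* (- 1 a) x)))
       (values (vector a y) y))))

  (define st0 ((compiled-setup lowpass) 1/2))   ; => #(1/2 0)
  ((compiled-update lowpass) st0 1.0)           ; => (values #(1/2 0.5) 0.5)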

So.. I went through the motions to make the node generation lazy.
Time to test it with the cases that failed before.

Test case in: test-dual-rate2


I think I found a hack: after evaluating the nodes that the main loop
depends on, the boxes associated with the main loop should not have any
nodes left in them.  If so, those can be moved up.

However, that doesn't solve the duplication problem.  What is needed
is to effectively duplicate the operation.  Essentially, run the whole
program twice, properly binding the nodes.

Binding the nodes is going to be tricky, since they are not exposed
explicitly.

Let's just give it a try.

First, some cleanup:

The names "setup" and "update" may then not be correct, since the
nodes will get woven together, and some tricks are necessary to
separate them.  If there are going to be tricks, it's probably best to
implement them differently.  Maybe it is easier to mark nodes as local
on a per-node basis, but that doesn't solve the state initialization
problem.

However, if a state node is pinned as sub-sampled, maybe it will work?

More fermentation needed...  There are 2 problems:
- State-composition
- "these nodes run at a different rate"


First evaluation: compute only state output and local nodes.

Really, what this does is extend the state.  The "setup" routine is
just like the "update" routine, but it has more state, namely the
local nodes.  Using this approach, it should be straightforward to
move the composition elsewhere.
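
Roughly, in made-up Racket terms (hypothetical names, not the actual
code): the setup pass returns an extended state that also carries the
local nodes, and the update pass threads that extended state through
without recomputing them.

  #lang racket
  ;; Hypothetical sketch: extended state = real state + local nodes.
  (define (setup-pass p)
    (list (vector 0.0)         ; real state init
          (vector (* 2 p))))   ; local node, computed once

  (define (update-pass ext-state x)
    (define s      (first ext-state))
    (define locals (second ext-state))
    (define y (+ (vector-ref s 0)
                 (* (vector-ref locals 0) x)))
    (values (list (vector y) locals) y))

  (define-values (st1 y1) (update-pass (setup-pass 0.5) 1.0))
  y1  ; => 1.0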

Maybe the problem I'm running into is just ill-defined.

No, we just need to stick to the original meaning from ai-stream.rkt:
the local nodes only depend on subsampled inputs, and produce a
constant stream.  In ai-stream.rkt this doesn't need extra
bookkeeping.  However, when we want the constant nodes hoisted out of
the loop, some untangling is necessary, i.e. some code does need to
run multiple times.
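
A toy illustration of that last point (names are made up): in the
stream semantics the local node is simply recomputed with the same
value every sample; the optimization is only about hoisting it out of
the loop.

  #lang racket
  (define p 0.5)
  (define (local-node p) (* 2 p))   ; depends only on p -> constant stream

  ;; Stream semantics: recomputed every sample, same value each time.
  (for/list ([x '(1 2 3)]) (* (local-node p) x))  ; => '(1.0 2.0 3.0)

  ;; Optimized: computed once, before the loop.
  (define l (local-node p))
  (for/list ([x '(1 2 3)]) (* l x))               ; => '(1.0 2.0 3.0)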


