Mon Aug 31 15:50:06 CEST 2009

Constraint Programming


In good in-house tradition, I'm going to try to re-invent it before I
look up the implementation again in SICP[?] or Steele's Thesis[?].

The idea is the following: constraint networks are multi-way functions
built up from primitive constraints.  You can do the following:

      * compose networks
      * assert inputs / receive satisfiability errors
      * query outputs / receive un-asserted errors

The tricky part is going to be to stage the control flow into a
predictable real-time C-program, but first let's write a dynamic
version and see where local algorithms fail.

Start with 2 primitive constraints (possibly the only ones I will
need):

   a + b + c = 0
   a * b * c = 1
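
A minimal dynamic sketch of these two rules (function names are my
own, not from SICP or the thesis): each rule can compute any one of
its three nodes once the other two are known.

```python
def solve_sum(values):
    """a + b + c = 0: fill in the single None entry."""
    i = values.index(None)
    out = list(values)
    out[i] = -sum(v for v in values if v is not None)
    return out

def solve_product(values):
    """a * b * c = 1: fill in the single None entry."""
    i = values.index(None)
    prod = 1.0
    for v in values:
        if v is not None:
            prod *= v
    out = list(values)
    out[i] = 1.0 / prod
    return out

print(solve_sum([2, 3, None]))      # [2, 3, -5]
print(solve_product([2, 4, None]))  # [2, 4, 0.125]
```

Either function is already multi-way: moving the None to a different
position solves for a different node.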

To make constraint satisfaction into a local algorithm, a rule
needs to be an active element: when receiving an assertion it
needs to:

      - propagate it if possible
      - store it if no propagation is possible
      - raise an error if there is a conflict

In general, a rule is an ordered / named list of nodes, and a governor
that implements the behaviour above.

Nodes in a rule can be asserted or floating.  Let's represent asserted
nodes by a number and floating ones by #f.

A rule -> node link needs to be bidirectional to propagate values
through the network.  For this a `slot' structure can be used,
referring to a rule and an index (all rules have position-encoded
node lists).
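
As a sketch (the Slot/Node/Rule names are mine), a slot can be a
(rule, index) pair stored on the node side, giving the backward half
of the link:

```python
from collections import namedtuple

Slot = namedtuple("Slot", ["rule", "index"])   # node -> rule back-link

class Rule:
    def __init__(self, name, arity):
        self.name = name
        self.values = [None] * arity           # position-encoded nodes

class Node:
    def __init__(self, name):
        self.name = name
        self.slots = []                        # every rule mentioning this node

    def connect(self, rule, index):
        self.slots.append(Slot(rule, index))

a = Node("a")
sum_rule, prod_rule = Rule("sum", 3), Rule("prod", 3)
a.connect(sum_rule, 0)
a.connect(prod_rule, 0)
print([(s.rule.name, s.index) for s in a.slots])  # [('sum', 0), ('prod', 0)]
```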

Control Flow

   - a node is asserted, propagating a signal to its associated rules.
   - a rule is asserted:
        - underdetermined: stop
        - determined: propagate
        - overdetermined: error
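
The control flow above can be sketched as a worklist loop (all names
are my own invention): a node assertion is forwarded through the
node's slots, and a rule that becomes determined emits a new node
assertion in turn.

```python
class Conflict(Exception):
    pass

class Node:
    def __init__(self, name):
        self.name, self.value, self.slots = name, None, []

class Rule:
    def __init__(self, nodes, solve):
        self.nodes, self.solve = nodes, solve
        for i, n in enumerate(nodes):
            n.slots.append((self, i))          # bidirectional link

    def tell(self):
        vals = [n.value for n in self.nodes]
        if vals.count(None) != 1:
            return None                        # underdetermined (or nothing new)
        i = vals.index(None)
        return self.nodes[i], self.solve(vals)[i]  # determined: propagate

def assert_node(node, value):
    if node.value is not None and node.value != value:
        raise Conflict(node.name)              # overdetermined: error
    node.value = value
    work = [node]
    while work:
        for rule, _ in work.pop().slots:
            hit = rule.tell()
            if hit is None:
                continue
            m, v = hit
            if m.value is None:
                m.value = v
                work.append(m)                 # keep propagating
            elif m.value != v:
                raise Conflict(m.name)         # conflicting propagation

def neg_sum(vals):                             # directed form of a + b + c = 0
    i = vals.index(None)
    out = list(vals)
    out[i] = -sum(v for v in vals if v is not None)
    return out

a, b, c = Node("a"), Node("b"), Node("c")
Rule([a, b, c], neg_sum)
assert_node(a, 1)
assert_node(b, 2)
print(c.value)   # -3
```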


Q: Find out why this can't solve sets of equations. (or can it? what's
   the relation with triangulation?)

A: One answer is that it's possible to have N inputs and M equations,
   which will give a propagation when there are N-M inputs defined.
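
To see the failure concretely (a tiny sketch, helper name mine):
assert only `a` against both three-node rules.  Each rule still has
two floating nodes, so no local propagation fires, even though the
pair of equations globally pins b and c down to a finite solution set;
recovering those takes elimination, not propagation.

```python
def determined(values):
    """A rule can fire locally only when exactly one node floats."""
    return values.count(None) == 1

a, b, c = 2.0, None, None
sum_rule  = [a, b, c]    # a + b + c = 0
prod_rule = [a, b, c]    # a * b * c = 1
print(determined(sum_rule), determined(prod_rule))   # False False: stuck
```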

OK.  Control flow seems to work.  Now, what can we actually do with
it?

* Adding linear functionals is a trivial extension.  Adding general M
  x N systems requires some more code-gen magic to turn the equations
  into directed equations, but doesn't add any other significant
  complexity.

* In general, adding a feature with N nodes and M<N equations is
  straightforward as long as ``directionalizing'' the equations into
  functions is possible.

* If the inputs and outputs are known, it reduces to a static I->O
  DFL program.
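
The ``directionalizing'' step for a single linear functional can be
sketched like this (names are mine): from one coefficient list,
generate a directed function per node.

```python
def directed(coeffs, k, i):
    """For sum(c_j * x_j) = k, return the function computing x_i
    from the other nodes (values[i] itself is ignored)."""
    def f(values):
        rest = sum(c * v
                   for j, (c, v) in enumerate(zip(coeffs, values))
                   if j != i)
        return (k - rest) / coeffs[i]
    return f

# x + 2y + 3z = 6; solve for y given x = 1, z = 1:
solve_y = directed([1, 2, 3], 6, 1)
print(solve_y([1, None, 1]))   # 1.0
```

The same coefficient list yields all N directions, which is what the
code-gen step for general systems would automate.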

So, I wonder..  What value does this add?

   - it abstracts control flow (directionalizing equations + sequencing)

What about the following form:

   - set of equalities
   - set of inequalities + actions

Abstract Interpretation

Let's try to capture the static part using abstract interpretation.
Essentially, turn the current implementation into a staged macro

An interesting point: can you stage `amb'?

An interesting property of directional constraints is that you don't
need to make choices before you do tests.  In discrete constraint
satisfaction problems you do need to do that.

Maybe I can make a combination of both?  Continuous and discrete
constraints, and use a staged `amb' to compile it.

So what you do in the abstract evaluation is work with the
_availability_ of parameters.  Then you can serialize the control flow
and cast it in stone, leaving the _values_ of the parameters for run
time.

This means that DFL probably becomes a special case of constraint
programming.

Let's call it ``staged constraint programming''.  Or ``staged

So..  What is the abstract version of a constraint?  It's really
simple: a constraint propagator.

So, in the staged / abstract-evaluation version there are really only
2 problems:

    - constraint propagation based on availability, resulting in a
      sequential program.

    - constraint -> function conversion based on the sequencing
      (constraint directionalization)
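
The first problem can be sketched as an abstract run over availability
alone (all names mine): propagate booleans instead of values and
record the order in which rules fire.  That recorded order is exactly
the serialized control flow to cast in stone.

```python
def schedule(rules, inputs):
    """rules: list of (name, node_set); inputs: set of available nodes.
    Returns (rule firing order, final availability)."""
    avail = set(inputs)
    order, pending = [], list(rules)
    progress = True
    while progress:
        progress = False
        for name, nodes in list(pending):
            unknown = nodes - avail
            if len(unknown) == 1:          # determined: rule fires
                avail |= unknown
                order.append((name, unknown.pop()))
                pending.remove((name, nodes))
                progress = True
    return order, avail                    # pending rules never fired

rules = [("sum1", {"a", "b", "c"}), ("sum2", {"c", "d", "e"})]
print(schedule(rules, {"a", "b", "d"}))
# order: [('sum1', 'c'), ('sum2', 'e')]
```

Note that `sum2` is only schedulable because `sum1` fires first and
makes `c` available: the sequencing falls out of the abstract pass.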

This pattern is quite neat: starting with an exotic control flow
paradigm, fixing control flow through staging, which then allows
binding optimizations (storage) and compile-time evaluation.