Sun Sep 20 17:54:45 CEST 2009

MAP is easy, FOLD is not.

So, when writing data combinators, MAP can be used to fill the gaps,
while FOLD needs special care.  Most commonly the operator to be
folded is associative, which allows the fold to be broken up into
independent pieces that can be combined at the end.
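
To make the associativity point concrete, here is a minimal Python
sketch (names like `chunked_fold` are my own, not from any library):

```python
from functools import reduce

def chunked_fold(op, unit, xs, nchunks):
    # Split the input into independent chunks, fold each chunk on its
    # own (these folds could run in parallel), then combine the
    # partial results.  Correct only when `op` is associative and
    # `unit` is its identity.
    n = len(xs)
    bounds = [n * i // nchunks for i in range(nchunks + 1)]
    partials = [reduce(op, xs[a:b], unit) for a, b in zip(bounds, bounds[1:])]
    return reduce(op, partials, unit)
```

E.g. `chunked_fold(lambda a, b: a + b, 0, list(range(100)), 4)` gives
the same result as a single sequential fold.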

Comparing this to bananas, lenses, ... [1], the problem here is
different.  You want to treat MAP specially because of parallelism.
Map fusion (loop fusion) is like a non-local particle going through
your program; it has many degrees of freedom.  Another difference is
that complex anamorphisms are rare in DSP.  They mostly take the form
of constant lifting (combining a constant with each element in a loop
map/fold) or simple weight generation (lines & sines).  The
catamorphism (fold), however, is very important, with the inner
product taking the lead, possibly followed by min/max.
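
The inner product as a catamorphism, in the same sketch style:

```python
from functools import reduce

def dot(xs, ys):
    # inner product as a fold of (+) over the pointwise products;
    # since (+) is associative, this fold can itself be split into
    # independent pieces and recombined
    return reduce(lambda acc, xy: acc + xy[0] * xy[1], zip(xs, ys), 0)
```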

Constant lifting corresponds to moving variables out of loop scope.
It seems that the idea of loop scope is going to be an important one.
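
A hypothetical illustration of constant lifting (the weight
`math.exp(-f)` is my invented example of a loop-invariant value):

```python
import math

def scale_naive(xs, f):
    # the loop-invariant weight math.exp(-f) is recomputed per element
    return [x * math.exp(-f) for x in xs]

def scale_lifted(xs, f):
    # constant lifting: the invariant value is hoisted out of loop scope
    w = math.exp(-f)
    return [x * w for x in xs]
```

Both compute the same result; the second does the invariant work once.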

This idea of loop fusion being a ``particle'' moving around in a space
of constant energy is stuck in my head.  The same can then possibly be
said for fold fusion.

    (map f) . (map g)  =  map (f . g)
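
A quick check of the fusion law in Python, with `map_list` and
`compose` as stand-ins for map and function composition:

```python
def compose(f, g):
    return lambda x: f(g(x))

def map_list(f, xs):
    return [f(x) for x in xs]

f = lambda x: x + 1
g = lambda x: x * 2
xs = [1, 2, 3]

# two traversals on the left, one fused traversal on the right
lhs = map_list(f, map_list(g, xs))
rhs = map_list(compose(f, g), xs)
```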

Something not to ignore: boundary conditions matter, and they
significantly complicate manual code compared to a truly symmetric
map or fold.

As I mentioned before in [1]: for image processing you want the data
types to be a bit more abstract than recursive types.  I.e. picking an
implementation is picking a data type + recursion schemes that handle
pre/loop/post cases.
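
A hypothetical example of the pre/loop/post split (a 3-point moving
average is my own choice of operator, not from the entry):

```python
def smooth3(xs):
    # 3-point moving average with explicit boundary handling: the
    # interior is a symmetric "map over windows", but the edges need
    # separate pre/post cases, which is what complicates manual code.
    n = len(xs)
    if n < 2:
        return list(xs)
    out = [(xs[0] + xs[1]) / 2]                 # pre: left edge
    out += [(xs[i-1] + xs[i] + xs[i+1]) / 3     # loop: interior
            for i in range(1, n - 1)]
    out.append((xs[-2] + xs[-1]) / 2)           # post: right edge
    return out
```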

[1] entry://../compsci/20090911-125525