Mon Jan 17 12:29:42 EST 2011

Folding Code : Giving Up Random Access

I once glanced at a paper comparing the monadic and "fold"
approaches, saying, I believe, that both are two sides of the same
coin, one focusing on computations, the other on values.  I think it
was this one[1].

So I wonder, is there a way to express the memoizing issue as a "fold"
instead of a monad?
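
To pin down what I mean by the monadic side, here's a minimal
Haskell sketch (all names made up): every intermediate value gets
bound to a fresh name inside a state monad, so the sharing shows up
as an explicit net list that a code generator can walk.

    import Control.Monad.State

    type Name    = String
    data Prim    = Add | Mul deriving Show
    type Binding = (Name, (Prim, [Name]))   -- name <- prim(args)
    type M       = State (Int, [Binding])   -- fresh counter + net list

    -- Bind the result of a primitive to a fresh name, recording the
    -- binding as a side effect in the net list.
    op :: Prim -> [Name] -> M Name
    op p args = do
      (n, bs) <- get
      let v = "t" ++ show n
      put (n + 1, (v, (p, args)) : bs)
      return v

    -- x*x + x*x: the square is computed (and named) only once, so
    -- the sharing is visible in the recorded bindings.
    shared :: Name -> M Name
    shared x = do
      sq <- op Mul [x, x]
      op Add [sq, sq]

    netlist :: [Binding]
    netlist = reverse . snd $ execState (shared "x") (0, [])
    -- [("t0",(Mul,["x","x"])), ("t1",(Add,["t0","t0"]))]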

This whole problem doesn't seem to "stick" in my mind.  I find it all
very non-intuitive.

It seems that the "fold" approach should be based on concatenative
code, using combinators for variable fan-out.  EXPLICIT FANOUT.
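
Something like this toy sketch (Haskell as notation, all of it
hypothetical): the program is a flat list of stack operations,
fan-out is the explicit Dup instruction, and evaluation is literally
a fold.

    -- Flat concatenative code: a list of stack operations.  Fan-out
    -- is not implicit in lexical scope; it is the Dup instruction.
    data Op = Push Int | Dup | Add | Mul deriving Show

    type Stack = [Int]

    step :: Stack -> Op -> Stack
    step s       (Push n) = n : s
    step (a:s)   Dup      = a : a : s
    step (a:b:s) Add      = (a + b) : s
    step (a:b:s) Mul      = (a * b) : s
    step _       o        = error ("stack underflow: " ++ show o)

    -- Evaluation is literally a left fold over the code.
    eval :: [Op] -> Stack
    eval = foldl step []

    -- x*x + x*x for x = 3, with the sharing written out as Dup:
    -- push x, square it, duplicate the square, add.
    prog :: [Op]
    prog = [Push 3, Dup, Mul, Dup, Add]
    -- eval prog == [18]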



So it seems that this is quite a deep conflict:

  * Lexical scope is very handy as it provides _RANDOM ACCESS_ to
    nodes in a computation network.  However, once reduced, this
    structure disappears completely, which complicates the
    specification of code generators through abstract interpretation.

  * (Flat?) combinators can encode the same computation network
    information, but are easier to stage because they express all
    sharing explicitly, i.e. through DUP.


What is so tickling about this is that in the case of an applicative
language, one introduces a sequential mechanism to tame the
random-access scope.  At a high, hand-wavy level this seems obvious,
but the nitty-gritty details are difficult.  Still, I guess it's safe
to say the following:

    Internal variable bindings are a structural element of pure
    functional code that cannot be recovered from its I->O behaviour.
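
A trivial example: the two functions below compute the same thing,
but the internal binding structure of the first is simply not there
in the second, and no black-box observation can tell them apart.

    -- Extensionally f1 and f2 are the same function, but the shared
    -- node `y` bound inside f1 is gone in f2; no observation of
    -- input/output behaviour can recover it.
    f1, f2 :: Int -> Int
    f1 x = let y = x * x in y + y
    f2 x = x * x + x * x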

This also makes the CPS approach so much easier to understand: every
time a variable gets bound, we want to intercept it.  This can be done
by exposing it through a function binding.
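
Roughly like this (again a made-up Haskell sketch): each primitive
takes its continuation explicitly, so every intermediate value passes
through a function application that we control and can instrument.

    -- Each primitive takes an explicit continuation, so every
    -- intermediate value passes through `tap`, where it can be
    -- intercepted (here it is just printed).  Names are made up.
    tap :: Int -> (Int -> IO r) -> IO r
    tap v k = do putStrLn ("bound: " ++ show v); k v

    addK, mulK :: Int -> Int -> (Int -> IO r) -> IO r
    addK a b k = tap (a + b) k
    mulK a b k = tap (a * b) k

    -- x*x + x*x in CPS: the shared square `sq` is an explicit,
    -- interceptable binding.
    sharedK :: Int -> (Int -> IO r) -> IO r
    sharedK x k =
      mulK x x $ \sq ->
      addK sq sq k

    -- sharedK 3 print  prints:  bound: 9, bound: 18, 18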

In a flat concatenative language this is trivial: the code is
already in CPS, so we can observe intermediate states without any
problem, just by modifying the "interpreter" or "fold".
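
With the toy stack code from the sketch above, observing the
intermediate states is literally just swapping the fold for a scan:

    -- Reusing Op, Stack and step from the toy sketch above: swapping
    -- foldl for scanl turns the evaluator into a trace of every
    -- intermediate stack, without touching the code being run.
    trace :: [Op] -> [Stack]
    trace = scanl step []

    -- trace [Push 3, Dup, Mul, Dup, Add]
    --   == [[], [3], [3,3], [9], [9,9], [18]]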


[1] http://www.springerlink.com/content/768043006044675p/


