Thu Sep 10 21:28:13 CEST 2009
Algebra of Programs
I ran into two papers about Mechanising Fusion.
I watched Backus' lecture on FL yesterday. What surprised me was
that while (1) he firmly emphasizes the importance of having an algebra
of programs: _reducing_ the power of the object language to be able
to manipulate it algebraically without getting into undecidable
territory, (2) he _still_ mentioned that automatic translation to an
efficient representation is a hard problem.
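The kind of program equation Backus has in mind can be illustrated with map fusion (a Python sketch of the idea; the FL notation itself is of course different):

```python
# Map fusion: mapping g and then f over a list equals mapping their
# composition in a single pass. In an algebra of programs this is an
# equation between programs, not a statement about particular runs.

def compose(f, g):
    return lambda x: f(g(x))

def fmap(f, xs):
    return [f(x) for x in xs]

f = lambda x: x + 1
g = lambda x: 2 * x
xs = [1, 2, 3]

# Two traversals...
two_pass = fmap(f, fmap(g, xs))
# ...fused into one; the rewrite is justified purely algebraically.
one_pass = fmap(compose(f, g), xs)
assert two_pass == one_pass
```

Keeping the object language weak is exactly what makes such an equation hold unconditionally, with no side conditions about effects or evaluation order.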
As clearly expressed in those papers, the big finger points to a
solution where the optimization itself needs to be specified.
Maybe I'm oversimplifying, but I have this itch that says that stuff
like this is really about notation, as is often the case in anything
algebraic.. What we need is a DSL for optimization and hardware
mapping of constructs. Something that can both be driven by a search
algorithm / theorem prover AND carry a meaningful semantics for an
engineer trying to map an algorithm onto specific hardware.
Now, the ``algebra of programs'' approach probably makes it possible
to change this DSL from a dumb collection of choices to be made into
actual manipulations that can be expressed in language form (theorems
about the relationships between the choices).
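A minimal sketch of what I mean: rewrite rules as first-class data that a search loop can apply mechanically, where each rule is also readable as a theorem about equivalent choices. The encoding and names here are mine, purely illustrative:

```python
# Hypothetical optimization DSL: terms are nested tuples, and each
# rule is a program equation a search procedure can apply. A real
# version would pair rules with cost models to drive the search.

RULES = [
    # map f (map g x) == map (f . g) x   -- map fusion as a theorem
    ("map_fusion",
     lambda t: ("map", ("compose", t[1], t[2][1]), t[2][2])
               if t[0] == "map" and isinstance(t[2], tuple) and t[2][0] == "map"
               else None),
]

def rewrite(term):
    """Apply the rules bottom-up until a fixed point is reached."""
    if not isinstance(term, tuple):
        return term
    term = tuple(rewrite(t) for t in term)
    for _name, rule in RULES:
        out = rule(term)
        if out is not None:
            return rewrite(out)
    return term

# Three traversals fused into one by repeated application of the rule.
prog = ("map", "f", ("map", "g", ("map", "h", "xs")))
assert rewrite(prog) == ("map", ("compose", "f", ("compose", "g", "h")), "xs")
```

The point is that the same rule set is simultaneously machine-manipulable (for a prover or search algorithm) and human-readable (as a statement of when two mappings are interchangeable).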
I need to do an application keeping this framework in mind. Build the
meta-layers up from the ground.. The image processing / audio DSP
problems should be a good start: algorithms are usually
straightforwardly expressed in some high-level parallel form (data
flow), but get complicated once serialization needs to be expressed.
I've had this hunch for quite some years now: there is structure there
to be exploited, but there is no tool to capture the structure.
People seem to be very good at solving `application mapping' problems
once they fully understand the source and target domains. I take this
mostly from my own experience in using and abusing fixed-structure
devices in different ways to get to efficient solutions by finding
abstractions that translate directly to assembly language. However,
doing this _formally_ has always been out of my reach due to the
complexities involved. People seem to be good at dealing with
incomplete maps and leaky abstractions.
Maybe that's the _real_ problem? It's not possible to formalize leaky
abstractions? If your abstraction has too many failing corner cases
that you can't make explicit beforehand, but that are quite obvious
when they occur in manual mapping, the human mind has an advantage
over any formal method. This is the frame problem in AI.