Thu Aug 27 09:36:02 CEST 2009

Rewriting in a nutshell

This is how I understand it now, all in the light of language
transformations.

- If there is a unique rule to apply in every case, rule application
  is a _function_ and you end up with a small-step semantics.  A
  sketch of this case follows below.

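  A minimal sketch in Haskell (the toy term language is made up,
  nothing Staapl-specific): at most one rule applies to any term, so
  a single reduction step is a plain function, and iterating it
  yields the evaluator.

    data Expr = Lit Int | Add Expr Expr
      deriving Show

    -- One small step; Nothing means the term is a value (normal form).
    step :: Expr -> Maybe Expr
    step (Lit _)               = Nothing
    step (Add (Lit a) (Lit b)) = Just (Lit (a + b))
    step (Add (Lit a) e2)      = Add (Lit a) <$> step e2  -- left done: step right
    step (Add e1 e2)           = flip Add e2 <$> step e1  -- step left first

    -- Iterating the step function gives the evaluator.
    eval :: Expr -> Expr
    eval e = maybe e eval (step e)

    main :: IO ()
    main = print (eval (Add (Lit 1) (Add (Lit 2) (Lit 3))))  -- Lit 6
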
- If there is more than one applicable rule, you no longer have a
  function.

  However, if your set of rules is _confluent_ (like the lambda
  calculus) then you can _construct_ an algorithm that _is_ a function
  by simply picking a reducible term (the redex you pick might be
  favourable in some sense, or not, but the choice does not affect
  the final result, only possibly whether reduction terminates), e.g.
  the graph reduction algorithm used to implement Haskell.  A sketch
  follows below.

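  A minimal sketch of the confluent case, with the same made-up toy
  language: a step is now a relation, modeled as a list of possible
  reducts, and fixing a strategy for picking one turns reduction back
  into a function.  Both strategies below compute the same answer.

    data Expr = Lit Int | Add Expr Expr
      deriving Show

    -- All one-step reducts: the rule may fire anywhere in the term.
    steps :: Expr -> [Expr]
    steps (Lit _)               = []
    steps (Add (Lit a) (Lit b)) = [Lit (a + b)]
    steps (Add e1 e2)           = [ Add e1' e2 | e1' <- steps e1 ]
                               ++ [ Add e1 e2' | e2' <- steps e2 ]

    -- Fixing a strategy for picking the next redex turns the step
    -- relation back into a function.
    reduceWith :: ([Expr] -> Expr) -> Expr -> Expr
    reduceWith pick e = case steps e of
      [] -> e
      es -> reduceWith pick (pick es)

    main :: IO ()
    main = do
      let e = Add (Add (Lit 1) (Lit 2)) (Add (Lit 3) (Lit 4))
      print (reduceWith head e)  -- always pick the leftmost redex
      print (reduceWith last e)  -- always pick the rightmost redex
      -- Both print Lit 10: by confluence the choice of redex does
      -- not change the result (though in general it can affect
      -- whether reduction terminates).
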
- If your rule set is non-confluent but monotone (terms only shrink,
  so there are no loops and reduction terminates) you no longer have
  a unique result, but reduction is still possible.

  This can still make sense if there is a way to project the results
  onto something that gives them back their uniqueness, i.e. if you
  have an onto `meaning' function that maps syntax -> semantics, so
  that there is ``more than one way to do it''.  To turn this back
  into a solvable problem you need an extra measure to sort out the
  multiple (syntax) results, e.g. pick the one with the shortest
  length.  A sketch follows below.

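  A minimal sketch of this case, using a made-up two-rule string
  rewriting system: both rules shrink the string, but they overlap,
  so different reduction orders reach different normal forms; the
  length measure then picks one.

    import Data.List (nub, stripPrefix, minimumBy)
    import Data.Ord (comparing)

    -- Shrinking but overlapping rules: "aab" reduces both to ""
    -- (via aa -> a, then ab -> "") and to "a" (via ab -> "" at
    -- position 1).
    rules :: [(String, String)]
    rules = [("aa", "a"), ("ab", "")]

    -- All single-step rewrites: any rule, at any position.
    steps :: String -> [String]
    steps [] = []
    steps s@(c:rest) =
      [ rhs ++ t | (lhs, rhs) <- rules, Just t <- [stripPrefix lhs s] ]
        ++ map (c:) (steps rest)

    -- All reachable normal forms; possibly more than one, since
    -- the rules are not confluent.
    normalForms :: String -> [String]
    normalForms s = case steps s of
      [] -> [s]
      ss -> nub (concatMap normalForms ss)

    -- The extra measure: among the possible results, keep the
    -- shortest one.
    best :: String -> String
    best = minimumBy (comparing length) . normalForms

    main :: IO ()
    main = do
      print (normalForms "aab")  -- two normal forms: "" and "a"
      print (best "aab")         -- ""
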
- If the rule set is not monotone (results can grow) you can't do
  much with it from a reduction p.o.v.  Such rule sets are more akin
  to BNF descriptions of grammars, and are more useful ``in the other
  direction'', i.e. for generating sentences.  A sketch follows
  below.

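  A minimal sketch of the generative reading: the same stepping
  machinery as above, now with growing rules read as the productions
  of a balanced-parentheses grammar ('S' is the nonterminal).
  Instead of reducing to a normal form, we bound the number of steps
  and collect the fully terminal strings.

    import Data.List (nub, stripPrefix)

    -- Grammar productions as (growing) rewrite rules:
    -- S -> "", S -> (S), S -> SS.
    rules :: [(String, String)]
    rules = [("S", ""), ("S", "(S)"), ("S", "SS")]

    -- Same stepping machinery as before: any rule, at any position.
    steps :: String -> [String]
    steps [] = []
    steps s@(c:rest) =
      [ rhs ++ t | (lhs, rhs) <- rules, Just t <- [stripPrefix lhs s] ]
        ++ map (c:) (steps rest)

    -- Since sentential forms can grow, bound the number of steps
    -- and keep the fully terminal strings: the generated sentences.
    generate :: Int -> [String]
    generate n = nub (filter (notElem 'S') (go n ["S"]))
      where go 0 ss = ss
            go k ss = ss ++ go (k - 1) (nub (concatMap steps ss))

    main :: IO ()
    main = print (generate 5)
    -- includes e.g. "", "()", "(())", "()()"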


Summarized, these four cases correspond to:
   - small-step operational semantics
   - lazy evaluation
   - program optimization
   - grammar specification


