Thu Jul 17 13:25:40 CEST 2008

Walid Taha RAP video

Jan 22 2007 @ google:

Research question: What are the high level abstractions that can be
used to keep control over resource use?

  - support expressive abstractions
  - ensure safety by static analysis
  - don't let this get in the way

  - multi-stage programming (MSP)
  - reactivity (I/O events)
  - advanced type systems

( Staapl does MSP in a traditional untyped / partly dynamically typed
  way, without special tools for reactivity and static type
  analysis. Contrasting principle: get MSP to work first in a simple
  paradigm, add static tools later using DSLs. )

Ideas behind MSP are old. The new approach is to combine this with
static tools. Reactivity is combined with MSP by creating program
generators for reactive programs with static guarantees.

Essence behind typed MSP: extend with types that are
  - delayed value
  - annotated what kind of delayed value
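A minimal staging sketch in plain Python (not MetaOCaml; the names here are made up for illustration): the "delayed value" is represented as a thunk, and the generator consumes the static exponent before any dynamic input arrives.

```python
# Stage 1: given a static exponent n, build a specialized function.
# The returned closure is the "delayed" stage-2 computation; the
# recursion over n happens only once, at generation time.

def gen_power(n):
    if n == 0:
        return lambda x: 1
    inner = gen_power(n - 1)
    return lambda x: x * inner(x)

cube = gen_power(3)   # generation time: the exponent is consumed here
print(cube(2))        # run time: only multiplications remain -> 8
```

Typed MSP then annotates `cube` with a type saying "delayed function on numbers", so the generator itself can be type-checked before any code is produced.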

( An essential concern that seems to be solved by MetaOCaml is
  variable capture. The lucky thing about Staapl is that this problem
  is avoided: there is never any confusion about the binding of
  values. In the pattern definition language, standard Scheme lexical
  binding is used, while in composition there are no bound names: all
  is point-free. The big disadvantage of course is that a stack
  language has no parameters. For multi->multi DSP code this is a
  problem. However, FP-like array processing languages can be embedded
  in a similar point-free style, replacing the stack with array
  structures that can also be re-used in every step. The essential
  insight is that it's not stack computing that's important, but
  point-free threaded state that gets discarded after each function
  application. )

What pops out of the FFT example in the talk is the use of
mathematical properties in the generation of the code: algebra of
programs, where the program in this case is ordinary algebra :)

Reactive programming: E-FRP, a scaled-down version of FRP from
Haskell, which compiles to event-loop C code.
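The compilation target can be pictured as a flat event loop; here is a hand-written toy version in Python (the `tick` event and the two state cells are invented for illustration, this is not E-FRP output):

```python
# E-FRP-style programs reduce to: for each event, run one bounded
# update of every behavior that depends on it. No recursion, no
# unbounded allocation -- hence the static resource guarantees.

state = {"count": 0, "double": 0}

def on_tick():
    # "compiled" update for everything depending on the tick event
    state["count"] += 1
    state["double"] = 2 * state["count"]

def event_loop(events):
    handlers = {"tick": on_tick}
    for e in events:
        handlers[e]()          # one handler call per event

event_loop(["tick", "tick", "tick"])
print(state)                   # -> {'count': 3, 'double': 6}
```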

Linear types: values can be used only once (consumed). Hofmann
(LFPL). The idea is that in pattern matching, one deconstructs a cons
cell, which is then passed to the RHS where it can be reused:

 cons(x, xs) at d -> cons(1, xs) at d
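The reuse rule above can be mimicked with mutation in Python (a hedged sketch; `Cons` and `set_ones` are invented names, and Python obviously does not enforce the linearity):

```python
# Deconstructing a cons cell yields its storage ("at d"); the RHS
# writes into that same storage instead of allocating a fresh cell.

class Cons:
    __slots__ = ("head", "tail")
    def __init__(self, head, tail):
        self.head, self.tail = head, tail

def set_ones(cell):
    """cons(x, xs) at d -> cons(1, xs) at d, applied down the list."""
    d = cell
    while d is not None:
        d.head = 1        # reuse the existing cell: no new allocation
        d = d.tail
    return cell

lst = Cons(7, Cons(8, None))
out = set_ones(lst)
print(out is lst)         # True: same storage, mutated in place
```

The linear type system's job is to guarantee that `lst` is never used again after being consumed, which makes the in-place reuse safe.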

Indexed types: a bit like polymorphic types, but with parameters that
have values, e.g. lists of a certain size. This goes pretty far: you
provide proofs that the type checker checks. This can be very
specific: basically, the types could be made to perform the whole
computation in the type system.
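Python cannot check such indices statically, but the idea of a size-indexed list can be sketched at runtime (the `Vec` class is hypothetical): the index travels with the value, and operations compute the index of their result.

```python
# A length-indexed vector: the "type-level" index n is carried along
# as data and checked at construction time. In a real indexed type
# system, append's result index n + m would be verified statically.

class Vec:
    def __init__(self, n, items):
        assert len(items) == n, "index must match the actual length"
        self.n, self.items = n, list(items)

def append(v, w):
    # the result's index is determined by the inputs' indices: n + m
    return Vec(v.n + w.n, v.items + w.items)

a = Vec(2, [1, 2])
b = Vec(3, [3, 4, 5])
print(append(a, b).n)     # -> 5
```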

EDIT: look at dependent types, Pierce p. 462

Onwards, maybe this is interesting: gradual typing.
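The gradual-typing idea, roughly sketched in Python (the `typed` decorator is an invented stand-in for the casts a gradual type system inserts): annotated code is checked at the boundary with unannotated code, instead of all-or-nothing.

```python
# Toy boundary check: arguments crossing into "typed" code are tested
# against the annotations; unannotated parameters pass through freely.

import inspect

def typed(f):
    sig = inspect.signature(f)
    def wrapper(*args):
        for a, (name, p) in zip(args, sig.parameters.items()):
            if p.annotation is not inspect.Parameter.empty:
                assert isinstance(a, p.annotation), \
                    f"{name}: expected {p.annotation.__name__}"
        return f(*args)
    return wrapper

@typed
def inc(x: int):
    return x + 1

print(inc(41))    # -> 42
# inc("a") would fail at the boundary, not deep inside typed code
```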

( About domain specific knowledge and rewrites: basically, these are
  theorems about the data types and operators. This might be
  abstracted by generating parameterized theorems, but those could
  probably be specified as more complicated rewrite rules. One
  possible step is to start from equations, and distill directed
  rewrite rules. )
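A minimal sketch of that last step in Python (representation and rule set invented for illustration): an equation like x*1 = x becomes a directed rule, left to right, applied bottom-up over expression trees.

```python
# Expressions are tuples ('*', x, y), ('+', x, y), or atoms.
# 'X' in a pattern is a metavariable that matches any subexpression.

RULES = [
    (('*', 'X', 1), 'X'),      # x * 1 -> x
    (('+', 'X', 0), 'X'),      # x + 0 -> x
]

def match(pat, expr, env):
    if pat == 'X':
        env['X'] = expr
        return True
    if isinstance(pat, tuple) and isinstance(expr, tuple) \
            and len(pat) == len(expr):
        return all(match(p, e, env) for p, e in zip(pat, expr))
    return pat == expr

def rewrite(expr):
    if isinstance(expr, tuple):
        expr = tuple(rewrite(e) for e in expr)   # rewrite children first
    for lhs, rhs in RULES:
        env = {}
        if match(lhs, expr, env):
            return env['X'] if rhs == 'X' else rhs
    return expr

print(rewrite(('+', ('*', 'a', 1), 0)))   # -> 'a'
```

Directing the equations matters: as rewrite rules they must make progress (here, shrink the term), whereas the underlying equations hold in both directions.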