Sun Jan 27 23:16:19 CET 2013

Building an audio synth DSL: Space and Time

Signal processing is working with grids of numbers.  For audio
synthesis and processing, the algorithms can be split into two major
classes, mostly distinguished by the difference between space and time.

* Output feedback systems.

E.g. IIR filters.  These algorithms have mostly 2 dimensions:

- time, highly asymmetric: the past is known, the future is not.

- object multiplicity (equalizer with 4 sections, mixer with 10
  channels, synth with 20 oscillators, ...)

Loops over these two kinds of dimensions are also different.

Loops over time tend to carry with them an amount of filter state
which is large compared to the dimensionality of the input/output they
process.  I.e. they are "fat".
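As a minimal sketch (in Python, with hypothetical names), a one-pole
IIR lowpass shows the shape of such a loop: output feedback threaded
along the time dimension, step by step.

```python
def one_pole_lowpass(xs, a):
    """One-pole IIR lowpass: y[n] = (1-a)*x[n] + a*y[n-1].

    The loop threads filter state y along time; each output feeds
    back into the next step, so the past cannot be skipped."""
    y = 0.0
    ys = []
    for x in xs:
        y = (1.0 - a) * x + a * y
        ys.append(y)
    return ys
```

A one-pole carries one state word per scalar in/out; a biquad already
carries two to four, which is where the "fat" characterization comes
from.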

Loops over space are mostly "map" operations: large sections of
dataflow that are largely independent, to be combined 99% of the time
by a huge summing operation (very little state is "threaded" along
spatial dimensions).  The summing operation is moreover associative so
there isn't even a necessary order.
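A sketch of that spatial structure (again Python, hypothetical names):
each voice is an independent `map` over space, combined only at the
end by an associative sum that imposes no order between voices.

```python
import math

def osc(freq, n, sr=44100.0):
    """One sine oscillator's output block, independent of all others."""
    return [math.sin(2.0 * math.pi * freq * i / sr) for i in range(n)]

def mix(freqs, n):
    """Map each voice over space, then fold with an associative sum;
    no state is threaded between voices, and no order is implied."""
    voices = [osc(f, n) for f in freqs]        # map over the spatial dimension
    return [sum(col) for col in zip(*voices)]  # associative fold (summing bus)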

* Block processing systems.

These split input/output signals in (overlapping / cross-faded) blocks
and process each block separately.  Such algorithms are more spatial.

Often there is some tracking of state going on between different
blocks, creating some asymmetry, but this is not where most of the
"connection mess" is concentrated.
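A minimal sketch of the block-processing shape (Python, hypothetical
names; assuming 50% overlap, where triangular cross-fade windows sum
to one): each block is processed independently, and the only coupling
between blocks is the overlap-add at the seams.

```python
def process_blocks(xs, block, hop, f):
    """Split xs into overlapping blocks, process each block
    independently with f, and overlap-add with a triangular
    cross-fade (windows sum to 1 when hop == block // 2)."""
    win = [i / hop if i < hop else (block - i) / hop
           for i in range(block)]
    out = [0.0] * len(xs)
    for start in range(0, len(xs) - block + 1, hop):
        processed = f(xs[start:start + block])  # per-block, "spatial" work
        for i, v in enumerate(processed):
            out[start + i] += win[i] * v        # cross-fade at the seams
    return out
```

With `f` the identity, the middle of the output reconstructs the input
exactly; only the first and last half-block taper.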

Basic conclusion is that it makes sense to handle space and time
differently, with different abstractions.  This boils down to:

 - TIME: behind-the-scenes output state threading that conceptually
   lifts the "atom" from scalars to streams.

 - SPACE: focus on all dimensions being somewhat "equal" and
   symmetric, i.e. work with `map' and `fold', but don't specify any
   particular order of traversal.
In short: think spatially, but lift out the time dimension for causal
streams, and implement causal processors (output feedback) separately.
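One way to sketch that lifting (Python, hypothetical names): a causal
processor is a scalar update function plus initial state, and a
combinator hides the state threading so the "atom" the user sees is a
stream, not a scalar.

```python
def causal(update, s0):
    """Lift a scalar step (state, x) -> (state, y) into a stream
    processor.  The time/state threading happens behind the scenes;
    the caller only sees streams in, streams out."""
    def run(xs):
        s, ys = s0, []
        for x in xs:
            s, y = update(s, x)
            ys.append(y)
        return ys
    return run

# Hypothetical example: an integrator (running sum) as a causal processor.
integ = causal(lambda s, x: (s + x, s + x), 0.0)
```

Spatial combination then stays ordinary `map'/`fold' over such lifted
processors.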