Tue Oct 2 11:09:29 EDT 2012


The good thing about dynwav (generating wave tables for wave table
playback) is that it decouples instantaneous phase/time from
instantaneous frequency.  That decoupling isn't possible on the
analysis side, which means information is lost in the process.

The bad thing is that it is too flexible, so the real problem to
solve is to make generation of spectral shapes simpler, and to tie it
into the current playback pitch.

One of the effects I was thinking about is a smooth morph from a
unimodal to a bimodal spectrum, to make the sound more voice-like.
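One way to sketch such a morph (the envelope shapes and parameter
names here are my own illustration, not from these notes) is to blend
a second spectral bump in on top of a single-peaked harmonic
envelope:

```python
import numpy as np

def bump(k, center, width):
    """Gaussian magnitude envelope over harmonic numbers k."""
    return np.exp(-0.5 * ((k - center) / width) ** 2)

def morph_spectrum(n_harm, m, c1=3.0, c2=12.0, width=2.0):
    """Harmonic magnitudes morphing from unimodal (m=0) to bimodal (m=1).

    The second peak (a crude 'formant' around harmonic c2) fades in
    with m, which is what pushes the timbre toward voice-like.
    """
    k = np.arange(1, n_harm + 1)
    return bump(k, c1, width) + m * bump(k, c2, width)
```

Sweeping m from 0 to 1 per table update gives the smooth
unimodal-to-bimodal morph.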

The most important technical problem is T/F (time/frequency)
allocation, i.e. choosing the DFT size based on the pitch: there is
no point in using a large wave table for high-pitched sounds, as most
of the higher harmonics need to be zeroed out to prevent aliasing.
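The arithmetic behind that allocation can be sketched as follows
(function names are hypothetical): count the harmonics of the pitch
that stay below Nyquist, then pick the smallest table that holds just
those.

```python
import numpy as np

def usable_harmonics(f0_hz, sr):
    """Number of harmonics of f0 that stay below Nyquist (sr/2)."""
    return int((sr / 2) // f0_hz)

def table_size_for_pitch(f0_hz, sr, max_size=4096):
    """Smallest power-of-two table covering all usable harmonics.

    A real-valued table of size N holds harmonics 1..N//2, so a large
    table is wasted on a high pitch: most of its bins would have to
    be zeroed out anyway to prevent aliasing.
    """
    n_harm = usable_harmonics(f0_hz, sr)
    size = 2
    while size // 2 < n_harm and size < max_size:
        size *= 2
    return size
```

At 44.1 kHz, a 110 Hz fundamental supports 200 harmonics and wants a
512-point table, while a 5 kHz fundamental supports only 4 harmonics
and fits in an 8-point table.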

Bringing this to market, it might be good to keep the engine internal
and not make the frontend too complicated: just add plugins with
synthesis algos that perform the spectrum update, so the focus is the
pipeline

pitch -> CHUNKER -> out

Based on the pitch, the chunker will use different chunking sizes.
The idea is to keep the table size in step with the time update.
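One update step of that pipeline might look like this (a sketch under
my own assumptions: the plugin only fills in harmonic magnitudes, and
the pitch-to-size policy is passed in as a function):

```python
import numpy as np

def chunker(pitch_hz, sr, spectrum_plugin, table_size_fn):
    """One step of the pitch -> CHUNKER -> out pipeline.

    The chunker picks the DFT size from the current pitch, asks the
    plugin for harmonic magnitudes 1..n//2, and turns that spectrum
    into a playable wave table via an inverse real FFT.
    """
    n = table_size_fn(pitch_hz, sr)      # pitch-dependent chunk size
    mags = spectrum_plugin(n // 2)       # plugin fills harmonics 1..n//2
    spec = np.zeros(n // 2 + 1, dtype=complex)
    spec[1:] = mags                      # DC component left at zero
    return np.fft.irfft(spec, n)
```

With a plugin that emits only the fundamental, the resulting table is
a single cosine cycle, scaled by 2/n by the irfft normalization.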

- chorus / flanger: almost for free, as multiple readouts of the same
  table

- chords: meaning same-spectrum chords.  These can be performed by
  sorting the pitches in ascending order, generating them in that
  order, and pre-filtering the spectrum to avoid aliasing.
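Both bullets above boil down to multiple readouts of one table.  A
sketch of the chord case (structure and names are my own; the readout
uses crude nearest-sample indexing for brevity): pre-filter the
spectrum once for the highest pitch, so every lower voice is
automatically alias-free too, then read the single resulting table
out at each pitch.

```python
import numpy as np

def same_spectrum_chord(mags, pitches_hz, sr, n_samples):
    """Same-spectrum chord as multiple readouts of one wave table.

    The spectrum is pre-filtered for the HIGHEST pitch (hence the
    ascending sort): any harmonic that would alias at that pitch is
    zeroed.  A chorus/flanger is the degenerate case of the same
    mechanism: equal pitches, slightly detuned, same table.
    """
    pitches = sorted(pitches_hz)                 # ascending order
    top = pitches[-1]
    n_ok = int((sr / 2) // top)                  # harmonics below Nyquist
    mags = np.asarray(mags, dtype=float).copy()
    mags[n_ok:] = 0.0                            # pre-filter: kill aliasing bins
    n = 2 * len(mags)                            # table size
    spec = np.zeros(len(mags) + 1, dtype=complex)
    spec[1:] = mags
    table = np.fft.irfft(spec, n)
    t = np.arange(n_samples)
    out = np.zeros(n_samples)
    for f in pitches:                            # one readout per voice
        idx = np.floor((f / sr) * n * t).astype(int) % n
        out += table[idx]
    return out
```

A single voice at sr/8 with a fundamental-only spectrum reproduces
one cosine cycle per 8 samples; a harmonic placed above the Nyquist
limit of the top pitch is filtered to silence.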