Sun May 30 13:33:15 CEST 2010
Cleaning up PhD research papers
I'm keeping general introductory papers, and audio synthesis + FX papers.
For sinusoidal modeling, Petre Stoica seems to be a good starting
point for the generic approach. The other direction is Sabine Van Huffel
for the fast algorithms. For wavelets it's Daubechies and Sweldens;
I'm keeping some introductory papers.
I'm throwing away the paper forms of specific ad-hoc papers about:
- Blind source separation (statistics based)
- CAS (computational source separation: perception model based)
- Computational Scene Analysis
- Matching Pursuit (iterative filtering)
- Audio Coding
- Sinusoidal + complex exponential modeling
- Wavelets + Applications to approximate LU
I was not able to integrate most of that knowledge during my PhD years
because of the many ways to characterize errors (which mathematical
framework to use to express the modeling problem) and the
non-linearity of the resulting optimization problems, which makes
practical comparison quite difficult. Possible variations: amplitude-only
or amplitude + frequency estimation, polynomial phase, optimality
(matching vs. linear models + error), noise coloring, etc.
I'm not keeping the Matching Pursuit papers: the method is too ad-hoc,
and forgive my arrogance, but I think I could re-invent most of this
technology if I were ever in need of it.
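To back up that claim: the core of Matching Pursuit fits in a dozen lines. A minimal sketch (my own illustration, not from any of the discarded papers) of the greedy loop over a matrix of unit-norm dictionary atoms:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy sparse approximation: at each step, pick the dictionary
    atom (unit-norm column) best correlated with the residual,
    record its coefficient, and subtract its contribution."""
    residual = np.asarray(signal, dtype=float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        corr = dictionary.T @ residual      # inner products with all atoms
        k = np.argmax(np.abs(corr))         # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]
    return coeffs, residual
```

For an orthonormal dictionary this recovers a sparse signal exactly; the "ad-hoc" part is everything around it (dictionary design, stopping rules).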
I'm not keeping the sinusoidal modeling papers (peak picking, etc.).
Same story as with MP: too ad-hoc and re-inventable.
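The peak-picking step itself is equally compact. A sketch of the classic analysis front end (my own illustration; the threshold value is an arbitrary choice): window a frame, take the magnitude spectrum, and keep local maxima above a relative threshold:

```python
import numpy as np

def spectral_peaks(frame, sr, threshold_db=-20.0):
    """Classic sinusoidal-analysis step: locate local maxima in the
    magnitude spectrum of a windowed frame and return (freq_hz, amp)
    pairs for maxima within threshold_db of the strongest bin."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    ref = spectrum.max()
    peaks = []
    for k in range(1, len(spectrum) - 1):
        if spectrum[k] > spectrum[k - 1] and spectrum[k] > spectrum[k + 1]:
            if 20 * np.log10(spectrum[k] / ref + 1e-12) > threshold_db:
                peaks.append((k * sr / len(frame), spectrum[k]))
    return peaks
```

The real papers refine this with parabolic interpolation between bins and frame-to-frame partial tracking, but the skeleton is the above.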
I'm limiting myself to more mathematically meaningful structures.
I've vowed never to set foot in the theatre of perception again!
(I.e. speech recognition: most of this technology needs extra
information about how the brain works, as that is what we want to
mimic in the first place.)