[<<][meta][>>][..]
Fri Feb 26 10:46:09 CET 2010

Converting rational functions to difference equations

1. They need to be causal, which means they need to be in normal form:
   only positive powers of z.

2. We need an algebra of rational functions that supports
   normalization, multiplication, addition, and partial fraction
   expansion.


The trick seems to be to immediately introduce memoization when a
rational function is converted into a stream.  Look at the fibo memo
here[1].

It seems to be quite tricky!  What you want is to associate each node
in the stream with a couple of delayed versions of itself, so the
next entry can be computed; doing that in a way that doesn't keep a
lot of garbage around, however, doesn't seem to be trivial.

I.e. if a stream refers back to itself, the recursion needs to stop at
some point, so what you need is:

  - the output/state stream
  - a computation that refers back to previous versions of that stream

> y zy = let y' = 0.9 * zy in  y' : y y'

*Main> take 5 $ y 1
[0.9,0.81,0.7290000000000001,0.6561000000000001,0.5904900000000002]


Can that be written in a style that doesn't use explicit state
passing?  Is that even necessary?

> y y1    = let y0 = 0.9 * y1            in y0 : y y0
> x x1 x2 = let x0 = 0.9 * x1 - 0.5 * x2 in x0 : x x0 x1

The idea is that in passing state down the tail, some of the memory
values ``fall off the end''.  Simply pushing an element to a list would
work, but would retain a lot of state.

Let's do it anyway; maybe it's possible to recover a kernel routine
through abstract interpretation?

> z zs@(z1:_) = let z0 = 0.9 * z1 in z0 : z (z0 : zs)
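A quick sanity check that the zipper version reproduces the
state-passing version above (a self-contained sketch, repeating both
definitions so it runs on its own):

```haskell
-- State-passing version: the previous output is passed explicitly.
y :: Double -> [Double]
y y1 = let y0 = 0.9 * y1 in y0 : y y0

-- Zipper version: emits the forward stream while threading the
-- reversed history downward.
z :: [Double] -> [Double]
z zs@(z1:_) = let z0 = 0.9 * z1 in z0 : z (z0 : zs)
```

Both produce the same stream: take 5 (z [1]) gives the list shown
earlier for y 1.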

Note this computes the forward stream, but passes the backward stream
downward!  I.e. it is a kind of zipper.  More generally, with
coefficients:

> series coefs izs = s izs where
>   s zs = let z0 = foldr (+) 0 $ zipWith (*) coefs zs
>          in z0 : s (z0 : zs)
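With a single coefficient this recovers the one-pole stream above.  A
sketch using Rational so the values stay exact (the tap value 1/2 and
the seed history are arbitrary choices):

```haskell
series :: Num a => [a] -> [a] -> [a]
series coefs izs = s izs where
  s zs = let z0 = foldr (+) 0 (zipWith (*) coefs zs)
         in z0 : s (z0 : zs)

-- One feedback tap of 1/2, seeded with history [1]:
-- produces the stream 1/2, 1/4, 1/8, ...
onePole :: [Rational]
onePole = series [1/2] [1]
```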

This makes it straightforward to translate a rational function to a
series (currently ignoring the numerator).

> toList (RatFunc n (d0:ds)) = 1 : series coefs (1 : repeat 0) where
>    coefs = map (\x -> (-1) * x / d0) ds
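RatFunc isn't defined in this note; assuming it pairs numerator and
denominator coefficient lists with the constant term first (a
hypothetical representation), the fragment can be made
self-contained, with an all-zero initial history for the impulse
response:

```haskell
-- Hypothetical representation: numerator and denominator
-- coefficients, constant term first.
data RatFunc = RatFunc [Rational] [Rational]

series :: Num a => [a] -> [a] -> [a]
series coefs izs = s izs where
  s zs = let z0 = foldr (+) 0 (zipWith (*) coefs zs)
         in z0 : s (z0 : zs)

-- Impulse response; numerator ignored as in the text, zero initial
-- state.
toList :: RatFunc -> [Rational]
toList (RatFunc _ (d0:ds)) = 1 : series coefs (1 : repeat 0) where
  coefs = map (\x -> (-1) * x / d0) ds
```

For example, RatFunc [1] [1, -1/2] yields the geometric series 1,
1/2, 1/4, ...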

Separating the iir combinator from series, and naming the inner
product fir, this becomes:

> iir f izs = s izs where
>     s zs = let z0 = f zs in z0 : s (z0 : zs)
>
> fir coefs zs = foldr (+) 0 $ zipWith (*) coefs zs
>
> series coefs = iir (fir coefs)

The f taken by iir has type [a] -> a.  It filters the history of the
output, i.e. its input list is the _reverse_ of the generated list.
(Renamed iir to ar.)

Next: arma.  Or better arx: AutoRegressive with eXogenous input.  Arx
seems to work as:

> arx p = s where 
>     s ys (x:xs) = 
>         let y0 = (p ys) + x     -- filter + add input
>         in y0 : s (y0:ys) xs    -- build stream and pass reverse stream to s
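For example, driving a one-pole arx with a unit step (the predictor
and seed history here are arbitrary choices; Rational keeps the
values exact):

```haskell
arx :: Num a => ([a] -> a) -> [a] -> [a] -> [a]
arx p = s where
  s ys (x:xs) =
    let y0 = (p ys) + x     -- filter the output history, add input
    in y0 : s (y0:ys) xs

-- One-pole smoother: y0 = y1/2 + x.  The history is seeded with [0]
-- so the head-pattern predictor always sees a nonempty list.
step :: [Rational]
step = arx (\(y1:_) -> y1 / 2) [0] (repeat 1)
```

The outputs 1, 3/2, 7/4, ... converge to 2, the DC gain of the
filter.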

simply adding the input.  The ma combinator is similar, but the naive
implementation isn't what we want:
 
> ma' p = s where
>    s xs = (p xs) : (s $ tail xs)

This is an anticausal filter.  We want a causal one.  This is similar
to the arx combinator, but it builds a history (reverse stream) of the
input, not the output.

> ma p = s where
>     s zs (x:xs) = 
>         let zs' = x:zs 
>         in (p zs') : s zs' xs
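For example, a two-tap causal average (coefficients chosen
arbitrarily); since zipWith truncates, a short initial history just
means a shorter sum at startup:

```haskell
ma :: ([a] -> a) -> [a] -> [a] -> [a]
ma p = s where
  s zs (x:xs) =
    let zs' = x:zs           -- extend the input history
    in (p zs') : s zs' xs

-- Average of the current and previous input, empty initial history.
avg2 :: [Rational] -> [Rational]
avg2 = ma (\zs -> foldr (+) 0 (zipWith (*) [1/2, 1/2] zs)) []
```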

This completes the combinators.  Both ma and arx have the same type
signature, except that ma doesn't need the Num constraint, since it
performs no arithmetic itself:

  Num a => ([a] -> a) -> [a] -> [a] -> [a]
  --        filter      hist    in     out
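An arma combinator then falls out as a composition: filter the input
history with one fir and the output history with another.  A sketch
under the assumption that fir is the coefficient dot product used by
series, with empty (all-zero) initial histories:

```haskell
fir :: Num a => [a] -> [a] -> a
fir coefs zs = foldr (+) 0 (zipWith (*) coefs zs)

arx :: Num a => ([a] -> a) -> [a] -> [a] -> [a]
arx p = s where
  s ys (x:xs) = let y0 = p ys + x in y0 : s (y0:ys) xs

ma :: ([a] -> a) -> [a] -> [a] -> [a]
ma p = s where
  s zs (x:xs) = let zs' = x:zs in p zs' : s zs' xs

-- AR coefficients as act on past outputs, MA coefficients bs on the
-- current and past inputs.  fir tolerates too-short histories
-- because zipWith truncates.
arma :: Num a => [a] -> [a] -> [a] -> [a]
arma as bs = arx (fir as) [] . ma (fir bs) []
```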


Alright.  Code seems complete: arx, ma, fir, applyRatFunc, ir

Now the question is: how to translate this to time-varying filters?
Instead of having a single predictor, this could be represented by a
stream of predictors.  Seems straightforward.

Additionally, the history should be associated with the predictor,
i.e. it consumes a history and an input value and produces a history
and an output value.
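Both ideas can be sketched together (the names here are
hypothetical): a step consumes a history and an input and produces a
history and an output, and a time-varying filter is a stream of such
steps:

```haskell
-- One step of a stateful predictor: history and input in, history
-- and output out.
type Step a = [a] -> a -> ([a], a)

-- Drive a stream of steps (a time-varying filter) over an input
-- stream.
varying :: [Step a] -> [a] -> [a] -> [a]
varying (p:ps) h (x:xs) =
  let (h', y) = p h x
  in  y : varying ps h' xs
varying _ _ _ = []

-- An arx-style step from a per-tick coefficient list.  The history
-- grows without bound here; a real version would truncate it to the
-- coefficient length.
arxStep :: Num a => [a] -> Step a
arxStep coefs ys x =
  let y0 = foldr (+) 0 (zipWith (*) coefs ys) + x
  in  (y0 : ys, y0)
```

With a constant stream of steps this reduces to the fixed arx above.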

[1] http://www.haskell.org/haskellwiki/Memoization



