Thu Feb 18 10:44:38 CET 2010

Binary Streams as Stochastic Variables (PDFs)


(EDIT: Mon Dec 27 11:20:35 EST 2010: check [2]: Need to make sure that
multiplication doesn't cause any undesired spectral content to
modulate into the signal band.)



Goal: find a way to look at boolean functions on binary bitstreams
coming from sigma-delta modulators, i.e. where the bitstream
represents an analog signal: the result of applying a reconstruction
filter R after mapping the binary signal into the analog domain.
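
For concreteness, here is a minimal sketch of such a modulator (a
textbook first-order loop; the name sd1 and the {-1,1} output
convention are my choices, not from the original entry):

> -- First-order sigma-delta modulator: integrate the error between
> -- the input x \in [-1,1] and the fed-back output bit \in {-1,1}.
> -- A running average (reconstruction filter) of the bitstream
> -- approximates x.
> sd1 :: Double -> [Double]
> sd1 x = go 0 where
>   go acc = b : go (acc + x - b) where
>     b = if acc >= 0 then 1 else -1
>
> -- e.g. sum (take 10000 (sd1 0.5)) / 10000  ~>  ~0.5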

1. Assumption: we're only interested in the long-term average, so in
   first approximation we deal with stochastic variables, not
   stochastic processes.

2. The ``physical'' variable we're interested in is modeled as the
   expected value of the stochastic variable; this approximates the
   real implementation, which is a running average / low-pass filter.

3. As another simplification, we assume variables are independent.


Using these simplifications, we can work with Probability Density
Functions; a stochastic variable is completely specified by its PDF.

Let's denote PDFs as follows:
      a(v), b(v), c(v)      PDF of a single variable
      ab(v1,v2), ac(v1,v2)  joint PDF of 2 variables
      abc(v1,v2,v3)         joint PDF of 3 ...

Since we use independent variables, joint PDFs can be written as
products: ab() = a()b().

Additionally, the PDF of a binary stochastic variable can be
represented by a single number \in [0,1]: the probability that the
variable is "1" (or "0").  We will abuse notation by taking a to mean
a(1), i.e. the number a \in [0,1] is the probability that variable
a == 1.
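
As an aside, a minimal sketch (the name estimate is ad hoc, not from
the original entry) of how this number is obtained in practice: the
running average of assumptions 1 and 2, over a finite stream prefix.

> -- Estimate P(bit == 1) from the first n samples of a bitstream.
> estimate :: Int -> [Int] -> Double
> estimate n bits = fromIntegral (sum (take n bits)) / fromIntegral n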

Using this notation, computing expected values of binary boolean
functions is straightforward.  E.g. for the XOR function:

  <+> | 0 1
  ----+----
  0   | 0 1 
  1   | 1 0


  E (a <+> b) = 1 ab(0,1) + 1 ab(1,0) + 0 ab(1,1) + 0 ab(0,0)
              = ab(0,1) + ab(1,0)
              = a(0)b(1) + a(1)b(0)    [independent]
              = (1 - a) b + a (1 - b)
              = a + b - 2ab
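
As a sanity check (a hedged sketch, not part of the original
derivation) this can be simulated at the stream level: generate two
approximately independent Bernoulli bitstreams from a small linear
congruential generator, XOR them bitwise, and compare the empirical
average (using estimate from above) against a + b - 2ab.

> -- Uniform [0,1) samples from a simple LCG (glibc constants);
> -- crude, but good enough for a sanity check.
> uniforms :: Integer -> [Double]
> uniforms seed = map unit (tail (iterate step seed)) where
>   step x = (1103515245 * x + 12345) `mod` 2147483648
>   unit x = fromIntegral x / 2147483648
>
> -- Bernoulli bitstream with P(1) = p: threshold a uniform source.
> bern :: Double -> Integer -> [Int]
> bern p seed = [ if u < p then 1 else 0 | u <- uniforms seed ]
>
> xorBits :: [Int] -> [Int] -> [Int]
> xorBits = zipWith (\x y -> if x /= y then 1 else 0)
>
> -- estimate 100000 (xorBits (bern 0.3 1) (bern 0.8 2))
> -- is close to 0.3 + 0.8 - 2*0.3*0.8 = 0.62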

To make sense of this result it's simpler to have the binary values
represent {-1,1}.  The transformation F that maps [-1,1] into [0,1]
is:

  F x = (1 + x) / 2

This then yields:

  a <+> b = a + b - 2 a b   implies   (F a) <+> (F b) = F (- (a b))

In other words: the XOR operation is _inverted multiplication_ of
expected values of variables on {-1,1}.


The function <=>, the negation of <+>, gives a similar result:

  <=> | 0 1
  ----+----
  0   | 1 0
  1   | 0 1


  E (a <=> b) = ab(0,0) + ab(1,1)
              = a(0)b(0) + a(1)b(1)  [ independent ]
              = (1 - a) (1 - b) + ab
              = 1 - a - b + 2ab


  a <=> b = 1 - a - b + 2ab   implies  (F a) <=> (F b) = F (a b)

In other words: the EQV operation is _multiplication_ of expected
values of variables on {-1,1}.

It seems only the equivalence classes under inversion of inputs and
outputs are interesting; see also [1].  The only other interesting
operation to look at is then AND.  Similar functions (OR, =>, NAND,
...) yield qualitatively the same result after inverting inputs or
outputs.

  . | 0 1
  --+----
  0 | 0 0
  1 | 0 1

  E (a . b) = ab(1,1)
            = a(1)b(1) = a b

So AND,OR,=>,... are multiplications; just in a different domain [0,1].
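
For example, OR is AND with both inputs and the output inverted,
which checks out directly:

  E (a OR b) = 1 - (1 - a)(1 - b)
             = a + b - ab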


Conclusion: these binary logic operations give bilinear functions.
On the [0,1] domain they are linear combinations, with coefficients
in {0,1} read off the truth table, of the 4 terms ab, a(1-b), (1-a)b,
(1-a)(1-b), where a, b are the probabilities / expected values of the
random variables a and b.
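
The conclusion translates directly into code; a sketch (the name e2
and the tuple encoding of the truth table are mine):

> -- Expected value of a 2-input boolean function with truth table
> -- (f00, f01, f10, f11), fij = f(a=i, b=j), for independent inputs
> -- with P(a=1) = a and P(b=1) = b.
> e2 :: (Double, Double, Double, Double) -> Double -> Double -> Double
> e2 (f00, f01, f10, f11) a b =
>   f00 * (1-a) * (1-b) + f01 * (1-a) * b + f10 * a * (1-b) + f11 * a * b
>
> -- e2 (0,1,1,0) is XOR:  a + b - 2ab
> -- e2 (0,0,0,1) is AND:  a b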


--

Further work:

 * What happens when variables are not independent?  Can the
   correlation itself do anything useful?

For signals close to 1/2 of the range, which a sigma-delta modulator
represents as an alternating 01 sequence, it is difficult for two
streams to be practically independent.  However, two coin toss
processes are independent, so this independence should somehow be
built in at the core.  The question then becomes: how to implement a
modulator, i.e. how to introduce bias from a uniform random source,
given the desired expected value?
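
(One candidate, under the assumptions of the bern sketch above:
threshold each sample of a uniform random source against the desired
probability.)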

A related question: is it possible to take a structured signal and
decorrelate it with a noise process?  This doesn't seem
straightforward: XOR or EQV with a uniform process is equivalent to
multiplying the expected value (on {-1,1}) by 0, so the result is
always uniform.

It seems that one of the asymmetric operators is needed to create
non-uniform distributions from uniform ones.  E.g. the AND of two
p = 1/2 noise streams gives a p = 1/4 noise stream.
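
A quick check of this claim, reusing the stream sketches from above:

> andBits :: [Int] -> [Int] -> [Int]
> andBits = zipWith (*)
>
> -- estimate 100000 (andBits (bern 0.5 3) (bern 0.5 4))  ~>  ~0.25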


 * Can we use the apparent dependency on randomness (i.e. from noise
   injection) to transmit other information?

 * What about looking at these functions as ``modulation maps''
   instead of binary endomaps of [0,1] or [-1,1], i.e. as:

                     [0,1] x [-1,1] -> [-1,1] 

   I.e. one of the inputs represents an asymmetric signal, while the
   other one is symmetric.


Some literate Haskell code:

> -- Expected values of XOR and EQV on probabilities a, b \in [0,1]:
> a `xor` b = a + b - 2 * a * b
> a `eqv` b = (1 - a) * (1 - b) + a * b
>
> -- Map [-1,1] onto [0,1].
> f x = (1 + x) / 2
>
> -- Approximate equality, absorbing floating point error.
> a === b = abs (a - b) < 1e-14
>
> -- XOR is inverted multiplication, EQV is multiplication, of
> -- expected values on [-1,1].
> e_xor a b = (f a `xor` f b) === f (- (a * b))
> e_eqv a b = (f a `eqv` f b) === f (a * b)
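
A quick grid check in ghci:

> -- and [ e_xor a b && e_eqv a b | a <- [-1,-0.9..1], b <- [-1,-0.9..1] ]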



[1] entry://20090609-145519
[2] entry://20101227-110515



