Wed Jul 4 15:42:53 EDT 2012

Usable spectrum in a Sigma/Delta signal

How does this[1] translate to the SNR of s?  We have the shape of the
noise, but not the maximal amplitude.  I'm interested in SNR to
compute the channel capacity, i.e. how much information can be encoded
in the Fs signal as opposed to the raw bitstream r.  Is there any
capacity lost by this encoding?

It seems this question doesn't have an easy answer, as it involves
assumptions to make the nonlinearity go away.

What about this:

- in r = s + e', the signal component s is negligible.

- e' is highpass, which means it has no DC component, so the
  instantaneous energy is known and constant: half of the bits are 1,
  half are 0.  Centering the levels around 1/2 (i.e. taking them as
  +/- 0.5), the RMS is 0.5 and the power is 0.5^2.

- given the shape of e' and the total energy, the frequency-dependent
  energy can be computed.

So it seems that as long as s doesn't have a DC component, it is
straightforward to compute the absolute noise envelope, which is
constant as long as s remains small wrt. e'.
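
Below is a minimal sketch of that computation.  It assumes a
first-order modulator (NTF = 1 - z^-1) purely for illustration (the
order is not fixed by anything above), and scales the noise shape so
its total power matches the 0.5^2 figure:

  # Sketch: absolute noise envelope of e', assuming a first-order
  # sigma/delta modulator (NTF = 1 - z^-1).  The shape
  # |NTF|^2 = 4 sin^2(pi f / fs) is scaled so the total power over
  # [0, fs/2] equals 0.5^2.
  import numpy as np

  fs    = 1.0                               # normalized sample rate
  f     = np.linspace(0, fs / 2, 1000)      # one-sided frequency axis
  shape = 4 * np.sin(np.pi * f / fs) ** 2   # |1 - e^{-j 2 pi f/fs}|^2
  df    = f[1] - f[0]

  total_power = 0.5 ** 2
  noise_psd   = shape * total_power / (shape.sum() * df)

  print("total power check:", noise_psd.sum() * df)   # ~0.25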

If s does have a DC component, the linearization doesn't work.  The
more DC there is, the less "room" there is for the AC component.

A better assumption would be to say that s is highpass (we still need
that assumption to compute the total energy) but that its cutoff is
much lower than that of e'.  Here the total energy (still 50% duty
cycle) is distributed over s and e'.  As long as the cutoff of e' is
much lower than Nyquist, it seems that the first assumption (setting r
~ e') is sound.

The 2nd assumption can still be used to make sure that s doesn't
exceed the highest point of e'.

What is surprising here is that the presence of a DC component shifts
the dynamic range.  Actually, the same happens in other amplitude-
limited channels.

So.. Given the assumptions above, the highpass signal with
instantaneous power 0.5^2 would approximate a white signal, meaning
that the maximum of the PSD is 0.5^2/fs.

This seems to be enough to approximate the signal capacity, using the
part of e' that's below that maximum, keeping in mind that the DC part
is not usable (because the linearized channel properties depend on the
DC component), though that part is negligible for computing channel
capacity.
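
Continuing the sketch above (same assumed first-order shape), the
usable band would then be the frequencies where the shaped noise PSD
stays below that flat level.  The snippet works one-sided, so the flat
level is written 0.5^2/(fs/2), the one-sided equivalent of 0.5^2/fs:

  # Sketch: usable bandwidth estimate, reusing the assumed first-order
  # noise shape.  One-sided spectra throughout.
  import numpy as np

  fs    = 1.0
  f     = np.linspace(0, fs / 2, 1000)[1:]          # skip DC
  shape = 4 * np.sin(np.pi * f / fs) ** 2           # first-order |NTF|^2
  df    = f[1] - f[0]
  noise_psd = shape * 0.5 ** 2 / (shape.sum() * df) # integrates to 0.25

  white_level = 0.5 ** 2 / (fs / 2)   # flat signal at the same power
  usable = f[noise_psd < white_level]
  print("usable band: up to %.3f * fs" % usable.max())   # ~0.25 * fs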

( It would be interesting to express all those approximations
exactly..  The exact formula is probably fairly complex. )

So, to top off all the approximations, let's say that the bandwidth is
reduced by x, the oversampling factor, which leads to an increase of x
in amplitude dynamic range.  Plugging this into Shannon's formula[2]
directly gives the asymptotic behaviour in terms of x:

         C = B log (P / N)

           ~ B_0/x log ( P_0 * x^2 / N)

which clearly shows that this is a fairly expensive technique when
looking just at the information content of the channel.  The reduction
in capacity is

               log x
               -----
                 x

(keeping only the x^2 power gain inside the logarithm and dropping
constant factors) which corresponds to the intuition that we're using
bits to represent individual events E_i, and not "sums of events",
which would only need a number of bits in the order of log (sum E_i).
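
A quick numeric check of that ratio, plugging in B_0 = fs/2 and
P_0/N = 1 (arbitrary illustration values) and comparing against the
fs bits/s of the raw bitstream:

  # Sketch: capacity after oversampling by x, relative to the raw
  # bitstream (fs bits/s).  B_0 = fs/2 and P_0/N = 1 are arbitrary
  # illustration values.
  import math

  def capacity_fraction(x, fs=1.0):
      sd = (fs / (2 * x)) * math.log2(x ** 2)   # B_0/x * log(P_0 x^2 / N)
      return sd / fs                            # ~ log2(x) / x

  for x in (4, 16, 64, 256):
      print(x, capacity_fraction(x))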

This high redundancy makes it plausible that the effect of bit errors
is minor.  It seems to indicate that the "shape" of the signal is
largely irrelevant, so we can probably use that to our advantage
(decorrelation to allow computation with such signals).

So... say x = 100000, which is 5 decades, i.e. 100 dB of amplitude
dynamic range.  The information cost of this is about a factor 6000,
meaning that only 1/6000 of the information is actually useful.
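
Checking those numbers (taking the log base 2, which is what gives the
factor of roughly 6000):

  # Quick check of the x = 100000 example.
  import math

  x = 100000
  print("dynamic range: %.0f dB" % (20 * math.log10(x)))    # 100 dB
  print("capacity cost: factor %.0f" % (x / math.log2(x)))  # ~6020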

That's a lot of room to put some extra stuff!

Think of this: to flip a bit 0<->1 adds/subtracts energy that can be
seen in the base band (through the impulse response of the
reconstruction filter), but to swap the positions of 2 adjacent
complementary bits has almost no effect in the base band, as this blip
has no DC component.  As a result it is probably possible to apply
random local permutations to the output bits without this being
noticed.
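
A small experiment along those lines, with a plain moving average
standing in for the reconstruction filter and a random bitstream
standing in for real modulator output (both are stand-ins, not the
actual S/D setup):

  # Sketch: baseband effect of a single bit flip vs. swapping two
  # adjacent complementary bits.  The bitstream and the moving-average
  # filter are stand-ins; a real S/D bitstream and reconstruction
  # filter would differ.
  import numpy as np

  rng  = np.random.default_rng(0)
  bits = rng.integers(0, 2, 4096).astype(float) - 0.5   # levels +/- 0.5

  def baseband(x, taps=64):
      return np.convolve(x, np.ones(taps) / taps, mode="same")

  ref = baseband(bits)

  # 1) flip one bit
  flip = bits.copy()
  flip[2000] = -flip[2000]

  # 2) swap an adjacent complementary pair (find one first)
  i = next(k for k in range(2000, 3000) if bits[k] != bits[k + 1])
  swap = bits.copy()
  swap[i], swap[i + 1] = swap[i + 1], swap[i]

  for name, x in (("flip", flip), ("swap", swap)):
      err = baseband(x) - ref
      print(name, "baseband error energy:", np.sum(err ** 2))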

Anyways, this also makes it clear why it's probably best to use
steeper (higher-order) noise shaping filters, as they bring the
in-band noise floor down.  With o the order of the filter, this
becomes:

              o log x
              -------
                 x
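
For a feel of the numbers, using the simplified scaling above
(amplitude dynamic range roughly x^o):

  # Sketch: capacity fraction o*log2(x)/x for a few filter orders,
  # using the simplified scaling (amplitude dynamic range ~ x^o).
  import math

  x = 64
  for o in (1, 2, 3):
      print("order", o, "-> fraction %.3f" % (o * math.log2(x) / x))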

NEXT: Revisit the logic operations on (non-correlated) S/D signals +
check HF noise modulated into base-band.


[1] entry://20120704-134439 
[2] http://en.wikipedia.org/wiki/Channel_capacity



