Tue Jun 2 14:24:28 CEST 2009

Analog vs. Digital

My original idea started with bringing back uniform error
sensitivity to digital systems, sensitivity where the damage done by
an error does not depend on where it lands.  In a digital system it
is possible to make data transfer error-free with arbitrarily high
probability, but when an error does slip through, it is usually
fatal.

This is in stark contrast with analog communication: errors degrade
the signal, but they are far from fatal.  The information is of a
different kind: there is always some error, but "we can live with
it", and a little more noise means a little more annoyance; no fatal
failure is introduced abruptly.

Now, is there something in between these two?  Is it possible to
combine the signal-regeneration property of digital systems with the
graceful degradation observed in analog systems?  In other words:
contain errors locally, and make sure that noise which does get
promoted to signal has no global effect.
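
One shape such a hybrid could take, as a hedged sketch (the code
length N, the threshold and the hop count below are my own choices):
encode a value in unary, i.e. as the number of ones among N bits,
regenerate every bit by thresholding at each hop like a digital
repeater, and decode by averaging.  Sub-threshold noise is erased
completely (the digital property), while noise that does get
promoted to signal flips a bit and moves the value by only 1/N (the
local containment asked for above).

    import random

    N = 100   # code length: value resolution is 1/N

    def encode(value):
        """Unary code: value in [0,1] -> N bits."""
        return [1.0 if i < value * N else 0.0 for i in range(N)]

    def hop(bits, sigma):
        """One noisy analog link followed by digital regeneration."""
        noisy = [b + random.gauss(0.0, sigma) for b in bits]
        return [1.0 if x > 0.5 else 0.0 for x in noisy]

    def decode(bits):
        return sum(bits) / N

    random.seed(0)
    for sigma in (0.05, 0.2, 0.4):
        b = encode(0.70)
        for _ in range(20):    # 20 regenerating hops
            b = hop(b, sigma)
        print(f"noise sigma {sigma}: decoded {decode(b):.2f} (sent 0.70)")

At low noise the value survives all 20 hops exactly; as the noise
grows, the decoded value drifts smoothly toward 0.5 (no information)
instead of failing catastrophically.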

This seems to be different from error detecting/correcting codes:
those work well up to a certain noise level, beyond which they fail
completely.  This is more about the representation of data; about
what a _number_ really is.

Voting (n-fold repetition with a majority decode, as sketched below)
has this property, but it is also extremely wasteful for
representing "don't care".
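
Both the cliff and the waste show up in the numbers behind such a
vote.  A sketch, with parameters of my own choosing: an n-fold
majority vote on a binary symmetric channel with flip probability p
fails with probability sum_{k > n/2} C(n,k) p^k (1-p)^(n-k), a
gentle curve for small n that steepens toward a step at p = 1/2 as n
grows, while the rate is a fixed 1/n, paid even on a quiet channel.

    from math import comb

    def majority_fail(n, p):
        """Probability that more than half of n copies get flipped."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n // 2 + 1, n + 1))

    # The transition sharpens into a cliff at p = 0.5 as n grows;
    # the cost 1/n is paid regardless of the actual noise level.
    for n in (3, 15, 101):
        row = [f"{majority_fail(n, p):.3f}"
               for p in (0.1, 0.3, 0.45, 0.55)]
        print(f"n={n:3d}  rate=1/{n:<3d}  fail:", row)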

TODO: make this a bit more formal and formulate the computational
properties as continuous statistics of a discrete system in the
limit N -> inf.
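
A first stab at that formalization, again a sketch under my own
assumptions and reusing the unary representation from above: if the
value v is read out as the mean of N bits sent through a binary
symmetric channel with flip probability p, the readout has
expectation v + p(1-2v), a smooth and invertible bias, and standard
deviation sqrt(p(1-p)/N).  As N -> inf the discrete system acquires
continuous, noise-proportional statistics, like an analog one.

    import random
    from math import sqrt

    def readout(v, n, p):
        """Unary-coded v through a BSC(p), decoded as the mean."""
        bits = [1 if i < v * n else 0 for i in range(n)]
        received = [b ^ (random.random() < p) for b in bits]
        return sum(received) / n

    random.seed(0)
    v, p = 0.7, 0.1
    for n in (10, 100, 1000):
        samples = [readout(v, n, p) for _ in range(1000)]
        mean = sum(samples) / len(samples)
        sd = sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))
        print(f"N={n:4d}: mean={mean:.3f}"
              f" (predicted {v + p*(1 - 2*v):.3f}),"
              f" sd={sd:.4f} (predicted {sqrt(p*(1 - p)/n):.4f})")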



