Wed May 6 13:52:21 CEST 2009

Graceful degradation of integer values

Transmitting binary-encoded values is problematic in the sense that
not all bits have the same noise sensitivity: the error a flipped bit
causes in the encoded integer depends on the bit's position.

I.e. binary 1100 = decimal 12
     binary 1101 = decimal 13  (diff = 1)
     binary 0100 = decimal  4  (diff = 8)
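A quick sketch of this position dependence (names here are just for illustration):

```python
# Flipping bit i of a plain binary encoding changes the value by 2**i,
# so the induced error depends entirely on the bit's position.
def flip_errors(x, nbits=4):
    return [abs((x ^ (1 << i)) - x) for i in range(nbits)]

print(flip_errors(0b1100))  # errors for flipping bits 0..3 of 12: [1, 2, 4, 8]
```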

Is there a way to encode a number such that each flipped bit will
cause the same (or a similar) error in the integer value, without
using too many extra bits?

One way to do this with lots of redundancy is to use a PWM
(thermometer-style) representation.  E.g. for 4-bit values, this
gives a 16-bit representation and a lot of redundancy.
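A minimal sketch of that thermometer-style representation, assuming value k is encoded as k ones followed by zeros:

```python
def pwm_encode(k, n):
    """Encode value k (0 <= k <= n) as k ones followed by n - k zeros."""
    return [1] * k + [0] * (n - k)

def pwm_decode(bits):
    # Decoding by counting ones means any single bit flip
    # changes the decoded value by exactly 1.
    return sum(bits)

print(pwm_decode(pwm_encode(12, 16)))  # 12
```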

However, can this principle be used in a way that isn't too costly?

0 000
1 100 010 001
2 011 101 110
3 111
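The table above can be generated by grouping all 3-bit patterns by their number of ones — a small sketch:

```python
from itertools import product

# Group every 3-bit pattern into classes by popcount (number of ones).
classes = {}
for bits in product('01', repeat=3):
    classes.setdefault(bits.count('1'), []).append(''.join(bits))

for k in sorted(classes):
    print(k, ' '.join(classes[k]))
# 0 000
# 1 001 010 100
# 2 011 101 110
# 3 111
```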

Using bigger numbers the redundancy explodes quite fast.  Is there a
way to "flatten the bulge" in the redundancy curve?

This is related quite a bit to the concept of entropy.  Patterns where
the numbers of ones and zeros are equally distributed are more common
because there are more variants.

What about using 100 010 001 as representations of 1, 2, 3?  That way
bit flips will only jump between "bands" and no longer move all over
the place:
0 {000}           {0}
1 {100 010 001}   {1,2,3}
2 {011 101 110}   {4,5,6}
3 {111}           {7}

This produces a lattice structure.
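That lattice property can be checked directly: a single bit flip changes the popcount by exactly one, so a corrupted codeword can only drift into an adjacent band.

```python
def popcount(x):
    return bin(x).count('1')

# Every single-bit flip of a 3-bit word moves it to an adjacent band.
for x in range(8):
    for i in range(3):
        assert abs(popcount(x ^ (1 << i)) - popcount(x)) == 1
print('ok')
```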

Now, is there a way to do this recursively?  Convert each number to

100 10
010 11
001 01

The same structure is found in the other section, since it's simply
the inverse.

Hm.. I don't see it.

What I see is that numbers near the middle bands are all equal in some
way.  It's as if this says that only black and white are really
important (they are outliers), and the rest is not so special.

It's the binomial expansion of (1 + x)^n

1 1
1 2 1
1 3 3 1
1 4 6 4 1
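Indeed, the band with k ones among n bits has C(n, k) members, a row of Pascal's triangle:

```python
from math import comb

def band_sizes(n):
    # Class sizes for n-bit words, grouped by popcount.
    return [comb(n, k) for k in range(n + 1)]

print(band_sizes(4))  # [1, 4, 6, 4, 1]
```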

Can this be used in control systems?  If the errors are small, we
don't really care much about their exact value, just their sign and
approximate size.  If the errors are large, we really want to know
them well.

Suppose we have n wires carrying a single signal.  If they're all 1
or all 0, that's significant.  If ones and zeros are equally
distributed, that's not significant.

So, this is quite a natural representation of "normalness".  Flipping
one bit won't do much, but the ensemble has a meaning that can be
interpreted as a number.

Now, can we count with these numbers? Or at least treat them as values
to which we can apply functions?

Let's define this better.

    - A node is a binary function of time.

    - A number is a set of nodes.

    - The sum of two numbers is the union of their sets.

    - The weight of a number is the cardinality of its set.

    - A "reduction" translates a number of high weight m to one of low
      weight n, m > n.
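A sketch of these definitions in code (node names are arbitrary placeholders; note that summing numbers with shared nodes collapses the duplicates, since sets carry no multiplicity):

```python
# A node is identified by a name; a number is a set of nodes.
a = {'n1', 'n2'}
b = {'n3'}

def add(x, y):
    return x | y      # sum = union of the node sets

def weight(x):
    return len(x)     # weight = cardinality of the set

assert add(a, b) == {'n1', 'n2', 'n3'}
assert weight(add(a, b)) == 3
```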

Computation seems to be hidden in the reduction operation.  Let's
define one for cardinality 2 -> 1.

This should be a transition of classes, probably random to eliminate
arbitrary choices.  The "outer" classes will remain fixed while the
inner class picks a random representative.

2 ----> 1

00    | 0
01 10 | 0 1
11    | 1
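A sketch of this 2 -> 1 reduction, assuming the mixed class resolves by a fair coin:

```python
import random

def reduce_2_to_1(b0, b1):
    """Outer classes are kept (00 -> 0, 11 -> 1);
    the inner class 01/10 picks a random output."""
    if b0 == b1:
        return b0
    return random.randint(0, 1)
```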

Similar for the next in line.

3 ----------> 2

000         | 00
100 010 001 | 01 10
011 101 110 | 01 10
111         | 11
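And the 3 -> 2 table as written above, where both middle bands map onto the same middle 2-bit band, with the representative chosen at random:

```python
import random

def reduce_3_to_2(bits):
    k = sum(bits)                 # the band of the 3-bit word
    if k == 0:
        return (0, 0)
    if k == 3:
        return (1, 1)
    # both middle bands map to the middle 2-bit band {01, 10};
    # the representative is chosen at random
    return random.choice([(0, 1), (1, 0)])
```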

Now, can reduction be implemented with an electronic device that has
some kind of balanced internal noise source?

Now..  3->1 can be implemented as a deterministic majority gate.  But
it seems that the big idea in the proposed computer is its
indeterminate nature.

So, with completely fair n -> (n-1) transitions, what is the
probability of an n->1 transition?  

Wait.  The simplest gate is really to just drop one bit randomly.  (A
computer full of Kill Gates?)
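The "kill gate" as a sketch — an n -> (n-1) reduction that just discards one randomly chosen bit:

```python
import random

def kill_gate(bits):
    """Drop one bit of the tuple, chosen uniformly at random."""
    i = random.randrange(len(bits))
    return bits[:i] + bits[i + 1:]
```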

Maybe random killing isn't the trick; maybe random permuting is?  Then
killing is simply not using an output, with the random transposition
gate as the primitive.
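A random transposition gate, for contrast — it permutes but never destroys, so the band (popcount) of the ensemble is preserved:

```python
import random

def transpose_gate(bits):
    """Swap two randomly chosen positions; the multiset of bits,
    and hence the popcount band, is unchanged."""
    bits = list(bits)
    i = random.randrange(len(bits))
    j = random.randrange(len(bits))
    bits[i], bits[j] = bits[j], bits[i]
    return tuple(bits)
```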

Enough insanity..  There might be something hidden here, but it is
really dependent on cheap connections.  It probably needs a 3D
structure to be implemented.  Random transpositions don't seem so
difficult..  It's basically a beam splitter.