In follow-up to the post about why Bits Gotta Alternate, let’s talk about a common way of forcing them to do so: Manchester coding.
What It Does
Manchester coding is a very simple line coding technique that solves some of the problems from the previous post:
- It is DC-balanced – that is, for every 1 bit there is a matching 0 bit. The average DC level is 0.
- It guarantees a maximum bit run of two bits. That is, you can never have three 1’s or 0’s in a row in the transmitted data.
Manchester coding does not solve the issue of power spectral density – that is, the transmitted data can still cause spikes in the frequency domain instead of spreading energy evenly across the channel’s frequency bandwidth.
How It Does It
Implementing Manchester coding in hardware is extremely easy: XOR the data with its clock. It’s basically free.
When you XOR data with the clock (which, recall, has an up cycle and a down cycle for each bit), you end up with twice as many output bits. Every 1 from the input becomes ‘10’ on the output, and every 0 from the input becomes ‘01’ on the output.
Example: 11100010 -> 1010100101011001
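The XOR trick is easy to model in a few lines of Python. This is a sketch, not production code – it represents bits as a string, and assumes a clock that is low in the first half of each bit period and high in the second (that polarity is what gives 1 → ‘10’ and 0 → ‘01’; the opposite polarity gives the reverse mapping):

```python
def manchester_encode(bits: str) -> str:
    """Manchester-encode a string of '0'/'1' characters by XORing
    each data bit with a clock that goes low-then-high per bit."""
    out = []
    for b in bits:
        d = int(b)
        out.append(str(d ^ 0))  # first half-period: clock low
        out.append(str(d ^ 1))  # second half-period: clock high
    return "".join(out)

print(manchester_encode("11100010"))  # -> 1010100101011001
```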
Since every input bit is guaranteed to produce a transition in the middle of its output pair, the longest possible run of identical bits is 2. The signal is also DC-balanced (since there is a 0/1 pair for each bit).
This is easy enough to do in software, too. You can do it on a bit-by-bit basis, or on a per-byte basis with a small look-up table.
There’s one extremely obvious disadvantage to Manchester coding: your data size doubles. A single transmitted symbol (a 0 or 1 in the air) only represents half of a bit. Manchester coding solves a lot of problems caused by transmitting arbitrary data via radio, but it effectively halves your maximum throughput.