Perfect communication with imperfect chips
Error-correcting codes discovered at MIT can still guarantee reliable communication, even in cellphones with failure-prone low-power chips.
Larry Hardesty, MIT News Office
August 4, 2011
One of the triumphs of the information age is the idea of error-correcting codes, which ensure that data carried by electromagnetic signals (traveling through the air, or through cables or optical fibers) can be reconstructed flawlessly at the receiving end, even when they’ve been corrupted by electrical interference or other sources of what engineers call “noise.”
For more than 60 years, the analysis of error-correcting codes has assumed that, however corrupted a signal may be, the circuits that decode it are error-free. In the next 10 years, however, that assumption may have to change. In order to extend the battery life of portable computing devices, manufacturers may soon turn to low-power signal-processing circuits that are themselves susceptible to noise, meaning that errors sometimes creep into their computations.
Fortunately, in the July issue of IEEE Transactions on Information Theory, Lav Varshney PhD ’10, a research affiliate at MIT’s Research Laboratory of Electronics, demonstrates that some of the most commonly used codes in telecommunications can still ensure faithful transmission of information, even when the decoders themselves are noisy. The same analysis, which is adapted from his MIT thesis, also demonstrates that memory chips, which present the same tradeoff between energy efficiency and reliability that signal-processing chips do, can preserve data indefinitely even when their circuits sometimes fail.
According to the semiconductor industry’s 15-year projections, both memory and computational circuits “will become smaller and lower-power,” Varshney explains. “As you make circuits smaller and lower-power, they’re subject to noise. So these effects are starting to come into play.”
Playing the odds
The theory of error-correcting codes was established by Claude Shannon, who taught at MIT for 22 years, in a groundbreaking 1948 paper. Shannon envisioned a message sent through a communications medium as a sequence of bits: 0s and 1s. Noise in the channel might cause some of the bits to flip or become indeterminate. An error-correcting code would consist of additional bits tacked onto the message bits, containing information about them. If message bits became corrupted, the extra bits would help describe what their values were supposed to be.
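Shannon’s idea can be seen in miniature in the simplest error-correcting code of all, the repetition code; the sketch below is an illustrative toy (not from the article or from Shannon’s paper), in which each message bit is transmitted three times and the decoder takes a majority vote:

```python
# Toy illustration of an error-correcting code: 3-fold repetition.
# Each message bit is sent three times; the decoder takes a majority
# vote over each triple, so a single flipped copy of any bit is corrected.

def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(received):
    return [int(sum(received[i:i + 3]) >= 2)
            for i in range(0, len(received), 3)]

message = [1, 0, 1, 1]
codeword = encode(message)          # 12 bits on the wire for 4 message bits
codeword[0] ^= 1                    # channel noise flips one bit...
codeword[7] ^= 1                    # ...and another, in a different triple
assert decode(codeword) == message  # both errors are corrected
```

The price of this reliability is exactly the inefficiency described below: three transmitted bits for every message bit.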
The longer the error-correcting code, the less efficient the transmission of information, since more total bits are required for a given number of message bits. To date, the most efficient codes known are those discovered in 1960 by MIT professor emeritus Robert Gallager, which are called low-density parity-check codes, or sometimes, more succinctly, Gallager codes. Those are the codes that Varshney analyzed.
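The “parity check” in the codes’ name refers to constraints that valid codewords must satisfy. As a rough sketch (the matrix below is a hypothetical toy, not one of Gallager’s actual constructions), a sparse binary matrix defines the code, and a received word is consistent with the code exactly when every check comes out even:

```python
# Minimal sketch of the parity-check idea behind LDPC codes.
# A sparse ("low-density") binary matrix H defines the code: a bit
# vector c is a valid codeword exactly when H @ c = 0 (mod 2).

import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],   # each row is one parity check
              [0, 1, 1, 0, 1, 0],   # each check involves only a few bits,
              [1, 0, 1, 0, 0, 1]])  # which is what "low density" means

def syndrome(c):
    return H @ c % 2                 # all zeros means "no error detected"

codeword = np.array([1, 0, 1, 1, 1, 0])
assert not syndrome(codeword).any()  # a valid codeword passes every check

noisy = codeword.copy()
noisy[3] ^= 1                        # a single flipped bit...
assert syndrome(noisy).any()         # ...violates at least one check
```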
The key to his new analysis, Varshney explains, was not to attempt to quantify the performance of particular codes and decoders but rather to look at the statistical properties of whole classes of them. Once he was able to show that, on average, a set of noisy decoders could guarantee faithful reconstruction of corrupted data, he was then able to identify a single decoder within that set that met the average-performance standard.
The noisy brain
Today, updated and optimized versions of Gallager’s 1960 codes are used for error correction by many cellphone carriers. Those codes would have to be slightly modified to guarantee optimal performance with noisy circuits, but “you use essentially the same decoding methodologies that Gallager did,” Varshney says. And since the codes have to correct for errors in both transmission and decoding, they would also yield lower transmission rates (or require higher-power transmitters).
Shashi Chilappagari, an engineer at Marvell Semiconductor, which designs signal-processing chips, says that, like Gallager’s codes, the question of whether noisy circuits can correct transmission errors dates back to the 1960s, when it attracted the attention of computing pioneer John von Neumann. “This is kind of a surprising result,” Chilappagari says. “It’s not very intuitive to say that this kind of scheme can work.” Chilappagari points out, however, that like most analyses of error-correcting codes, Varshney’s draws conclusions by considering what happens as the length of the encoded messages approaches infinity. Chipmakers would be reluctant to adopt the coding scheme Varshney proposes, Chilappagari says, without “time to test it and see how it works on a given-length code.”
While researching his thesis, Varshney noticed that the decoder for Gallager’s codes (which in fact passes data back and forth between several different decoders, gradually refining its reconstruction of the original message) has a similar structure to ensembles of neurons in the brain’s cortex. In ongoing work, he’s trying to determine whether his analysis of error-correcting codes can be adapted to characterize information processing in the brain. “It’s pretty well established that neural things are noisy: neurons fire randomly … and there’s other forms of noise as well,” Varshney says.
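The back-and-forth refinement described above can be sketched with the simplest iterative LDPC decoder, a “bit-flipping” scheme in Gallager’s spirit: bits and parity checks repeatedly exchange information, and each round flips the bit implicated in the most failed checks. The matrix and codeword here are the same hypothetical toy as above, and this sketch models a noiseless decoder, not the noisy-circuit setting of Varshney’s analysis:

```python
# Rough sketch of iterative ("message-passing") LDPC decoding.
# Each round: compute which parity checks fail, then flip the bit
# that participates in the most failed checks, until all checks pass.

import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],    # toy parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def bit_flip_decode(received, max_iters=10):
    c = received.copy()
    for _ in range(max_iters):
        failed = H @ c % 2           # which parity checks fail
        if not failed.any():
            return c                 # all checks satisfied: done
        votes = H.T @ failed         # failed checks touching each bit
        c[np.argmax(votes)] ^= 1     # flip the most-suspect bit
    return c                         # give up after max_iters rounds

codeword = np.array([1, 0, 1, 1, 1, 0])
noisy = codeword.copy()
noisy[1] ^= 1                        # one transmission error
assert (bit_flip_decode(noisy) == codeword).all()
```

In a real decoder the same exchange runs over a much larger sparse matrix, and richer variants pass probabilities rather than flipping bits outright.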