DocumentCode
1478680
Title
Good error-correcting codes based on very sparse matrices
Author
MacKay, David J C
Author_Institution
Cavendish Lab., Cambridge Univ., UK
Volume
45
Issue
2
fYear
1999
fDate
3/1/1999 12:00:00 AM
Firstpage
399
Lastpage
431
Abstract
We study two families of error-correcting codes defined in terms of very sparse matrices. “MN” (MacKay–Neal, 1995) codes were invented recently, whereas “Gallager codes” were first investigated in 1962 but appear to have been largely forgotten, in spite of their excellent properties. The decoding of both codes can be tackled with a practical sum-product algorithm. We prove that these codes are “very good”, in that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit. This result holds not only for the binary-symmetric channel but also for any channel with symmetric stationary ergodic noise. We give experimental results for binary-symmetric channels and Gaussian channels demonstrating that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed, the performance of Gallager codes is almost as close to the Shannon limit as that of turbo codes.
Keywords
Gaussian channels; channel coding; decoding; error correction codes; noise; sparse matrices; Gallager codes; MacKay-Neal codes; Shannon limit; binary-symmetric channel; code sequences; concatenated codes; convolutional codes; error-correcting codes; information rates; optimally decoded codes; performance; sum-product algorithm; symmetric stationary ergodic noise; turbo codes; very sparse matrices; code standards
fLanguage
English
Journal_Title
IEEE Transactions on Information Theory
Publisher
IEEE
ISSN
0018-9448
Type
jour
DOI
10.1109/18.748992
Filename
748992
Link To Document