
Generalized Minimum Distance Decoding with Arbitrary Error/Erasure Tradeoff
Ph.D. Dissertation, Ulm University, December 2011
Abstract

In their most basic form, concatenated codes consist of an outer and an inner code. The task of the outer code is to protect payload data, while the inner code protects the symbols of the outer code. This allows one to construct codes that are tailored to a given communication channel. One example of an outer code is the famous class of Reed--Solomon (RS) codes. They are well-studied and allow for ultra-fast implementations of both encoder and decoder. However, RS codes are based on large non-binary code symbols, which renders them useful for channels that produce error bursts on the order of the symbol size. Concatenation with an inner code that is effective against non-bursty binary errors allows one to apply RS codes even to channels that are not their natural habitat. Within the concatenated code, the inner code is in charge of correcting non-bursty channel errors, while the outer code deals with erroneous decoding results of the inner code, which, from its perspective, are long error bursts.

Long concatenated codes can be decoded using algorithms for their comparatively short inner and outer codes. This is a measurable advantage, since the complexity of most decoders grows at least linearly with the code length. The most striking property of concatenation is that this comes at no cost: long concatenated codes can correct the same number of errors as long non-concatenated codes with the same distance. This was proven in 1966 by Forney, together with an actual decoding algorithm that executes the outer decoder $Z_0$ times. This algorithm is called Generalized Minimum Distance (GMD) decoding, and $Z_0$ is fixed by the properties of the outer code.
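As a rough sketch of Forney's scheme (the exact trial schedule depends on conventions not stated in this abstract), consider an outer code with minimum distance $d$ and a classical error/erasure outer decoder:

```latex
% Classical error/erasure decoding succeeds whenever
%   2\tau + \epsilon \le d - 1,
% for \tau errors and \epsilon erasures.
% GMD decoding runs this outer decoder repeatedly, erasing an
% increasing number of the least reliable inner decoding results:
\[
  \epsilon \in \{0, 2, 4, \dots\}, \qquad
  Z_0 \approx \left\lceil \tfrac{d}{2} \right\rceil ,
\]
% so the number of outer decoding trials Z_0 grows roughly like
% half the outer minimum distance.
```

This makes concrete why $Z_0$ is fixed by the outer code alone: the erasure schedule is exhausted once the error/erasure bound can no longer be met.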

In this dissertation, we investigate two main questions: What can we achieve if we allow only $Z<Z_0$ executions of the outer decoder, and to what extent can we compensate the expected loss of error-correcting capability by using advanced outer decoders? A decoder can be rated by its error/erasure tradeoff $\lambda$, a figure that specifies the penalty of knowing neither the location nor the value of an error, in contrast to knowing its location but not its value. Until 1997, only decoders with $\lambda=2$ were known for RS codes. Since then, sparked by the discovery of the Sudan algorithm, the coding theory community has invented a plethora of algorithms with improved tradeoff $\lambda\in(1,2)$. We investigate such advanced decoders and generalize results of Forney and other authors, which are restricted to $\lambda=2$, to arbitrary $\lambda\in(1,2]$. To the best of our knowledge, we are the first to present this generalization.
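One common way to make the tradeoff $\lambda$ concrete (a sketch; the dissertation's precise Generalized Decoding Radius may differ in detail) is via the guaranteed-correction condition for $\tau$ errors and $\epsilon$ erasures in a code of minimum distance $d$:

```latex
\[
  \lambda \, \tau + \epsilon \;\le\; d - 1 .
\]
% For \lambda = 2 this is the classical error/erasure bound
% 2\tau + \epsilon \le d - 1: an error of unknown location and
% value costs twice as much as an erasure, whose location is known.
% Advanced decoders with \lambda \in (1,2) reduce this penalty and
% thus correct more errors for the same number of erasures.
```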

In the first part of the thesis, we introduce basic algebraic codes within the framework of code concatenation. We list classical decoding algorithms with $\lambda=2$ as well as the most important advanced ones with $\lambda\in(1,2)$. To allow for a general description of GMD decoding across all considered algorithms, we introduce a means of expressing their error-correcting capabilities by a unified function, the Generalized Decoding Radius. The second part considers the decoding radius of GMD decoding, i.e., the maximum number of errors that are correctable with guarantee. We derive two optimal variants of GMD decoding for arbitrary $\lambda\in(1,2]$ and any number $Z\leq Z_0$ of outer decoding trials and analyze their properties. Depending on the properties of the outer and inner codes, we can always state which variant is superior to the other. In the third part, our attention shifts to the probability of decoding success. We show how this probability can be maximized for any $\lambda\in(1,2]$ and any $Z\leq Z_0$, and we express it analytically.