Author: André Neubauer, Jürgen Freudenberger, Volker Kühn

One of the most important key technologies for digital communication systems as well as storage media is coding theory. It provides a means to transmit information across time and space over noisy and unreliable communication channels. Coding Theory: Algorithms, Architectures and Applications provides a concise overview of channel coding theory and practice, as well as the accompanying signal processing architectures. The book is unique in presenting algorithms, architectures and applications of coding theory in a unified framework. It covers the basics of coding theory before moving on to discuss algebraic linear block and cyclic codes, turbo codes, low-density parity-check (LDPC) codes and space–time codes. Coding Theory presents the algorithms and architectures used for implementing encoding and decoding strategies, as well as the coding schemes used in practice, especially in communication systems.

Features of the book include:
● A unique, presentation-like style for summarising the main aspects
● Practical issues in the implementation of coding techniques
● A sound theoretical approach to practical, relevant coding methodologies
● Coverage of standard coding schemes such as block and convolutional codes, schemes such as turbo and LDPC codes, and space–time codes currently under research, all within a common framework with respect to their applications

This book is ideal for postgraduate and undergraduate students of communication and information engineering, as well as computer science students. It will also be of use to engineers working in industry who want to know more about the theoretical basics of coding theory and their application in currently relevant communication systems.

ISBN: 0470519827
Publisher: Wiley-Interscience
Publish Year: 2007
Language: Chinese
Pages: 305
File Format: PDF
File Size: 1.6 MB
Text Preview (First 20 pages)

Coding Theory
Coding Theory
Algorithms, Architectures, and Applications

André Neubauer
Münster University of Applied Sciences, Germany

Jürgen Freudenberger
HTWG Konstanz, University of Applied Sciences, Germany

Volker Kühn
University of Rostock, Germany
Copyright © 2007 John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England

Telephone (+44) 1243 779777
Email (for orders and customer service enquiries): cs-books@wiley.co.uk
Visit our Home Page on www.wileyeurope.com or www.wiley.com

All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except under the terms of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London W1T 4LP, UK, without the permission in writing of the Publisher. Requests to the Publisher should be addressed to the Permissions Department, John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England, or emailed to permreq@wiley.co.uk, or faxed to (+44) 1243 770620.

Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The Publisher is not associated with any product or vendor mentioned in this book. All trademarks referred to in the text of this publication are the property of their respective owners.

This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the Publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

Other Wiley Editorial Offices

John Wiley & Sons Inc., 111 River Street, Hoboken, NJ 07030, USA
Jossey-Bass, 989 Market Street, San Francisco, CA 94103-1741, USA
Wiley-VCH Verlag GmbH, Boschstr. 12, D-69469 Weinheim, Germany
John Wiley & Sons Australia Ltd, 42 McDougall Street, Milton, Queensland 4064, Australia
John Wiley & Sons (Asia) Pte Ltd, 2 Clementi Loop #02-01, Jin Xing Distripark, Singapore 129809
John Wiley & Sons Canada Ltd, 6045 Freemont Blvd, Mississauga, Ontario, L5R 4J3, Canada

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Anniversary Logo Design: Richard J. Pacifico

Library of Congress Cataloging-in-Publication Data

Neubauer, André.
Coding theory : algorithms, architectures and applications / André Neubauer, Jürgen Freudenberger, Volker Kühn.
p. cm.
ISBN 978-0-470-02861-2 (cloth)
1. Coding theory. I. Freudenberger, Jürgen. II. Kühn, Volker. III. Title.
QA268.N48 2007
003'.54–dc22

British Library Cataloguing in Publication Data

A catalogue record for this book is available from the British Library

ISBN 978-0-470-02861-2 (HB)

Typeset in 10/12pt Times by Laserwords Private Limited, Chennai, India
Printed and bound in Great Britain by Antony Rowe Ltd, Chippenham, Wiltshire
This book is printed on acid-free paper responsibly manufactured from sustainable forestry in which at least two trees are planted for each one used for paper production.
Contents

Preface

1 Introduction
  1.1 Communication Systems
  1.2 Information Theory
    1.2.1 Entropy
    1.2.2 Channel Capacity
    1.2.3 Binary Symmetric Channel
    1.2.4 AWGN Channel
  1.3 A Simple Channel Code

2 Algebraic Coding Theory
  2.1 Fundamentals of Block Codes
    2.1.1 Code Parameters
    2.1.2 Maximum Likelihood Decoding
    2.1.3 Binary Symmetric Channel
    2.1.4 Error Detection and Error Correction
  2.2 Linear Block Codes
    2.2.1 Definition of Linear Block Codes
    2.2.2 Generator Matrix
    2.2.3 Parity-Check Matrix
    2.2.4 Syndrome and Cosets
    2.2.5 Dual Code
    2.2.6 Bounds for Linear Block Codes
    2.2.7 Code Constructions
    2.2.8 Examples of Linear Block Codes
  2.3 Cyclic Codes
    2.3.1 Definition of Cyclic Codes
    2.3.2 Generator Polynomial
    2.3.3 Parity-Check Polynomial
    2.3.4 Dual Codes
    2.3.5 Linear Feedback Shift Registers
    2.3.6 BCH Codes
    2.3.7 Reed–Solomon Codes
    2.3.8 Algebraic Decoding Algorithm
  2.4 Summary

3 Convolutional Codes
  3.1 Encoding of Convolutional Codes
    3.1.1 Convolutional Encoder
    3.1.2 Generator Matrix in the Time Domain
    3.1.3 State Diagram of a Convolutional Encoder
    3.1.4 Code Termination
    3.1.5 Puncturing
    3.1.6 Generator Matrix in the D-Domain
    3.1.7 Encoder Properties
  3.2 Trellis Diagram and the Viterbi Algorithm
    3.2.1 Minimum Distance Decoding
    3.2.2 Trellises
    3.2.3 Viterbi Algorithm
  3.3 Distance Properties and Error Bounds
    3.3.1 Free Distance
    3.3.2 Active Distances
    3.3.3 Weight Enumerators for Terminated Codes
    3.3.4 Path Enumerators
    3.3.5 Pairwise Error Probability
    3.3.6 Viterbi Bound
  3.4 Soft-input Decoding
    3.4.1 Euclidean Metric
    3.4.2 Support of Punctured Codes
    3.4.3 Implementation Issues
  3.5 Soft-output Decoding
    3.5.1 Derivation of APP Decoding
    3.5.2 APP Decoding in the Log Domain
  3.6 Convolutional Coding in Mobile Communications
    3.6.1 Coding of Speech Data
    3.6.2 Hybrid ARQ
    3.6.3 EGPRS Modulation and Coding
    3.6.4 Retransmission Mechanism
    3.6.5 Link Adaptation
    3.6.6 Incremental Redundancy
  3.7 Summary

4 Turbo Codes
  4.1 LDPC Codes
    4.1.1 Codes Based on Sparse Graphs
    4.1.2 Decoding for the Binary Erasure Channel
    4.1.3 Log-Likelihood Algebra
    4.1.4 Belief Propagation
  4.2 A First Encounter with Code Concatenation
    4.2.1 Product Codes
    4.2.2 Iterative Decoding of Product Codes
  4.3 Concatenated Convolutional Codes
    4.3.1 Parallel Concatenation
    4.3.2 The UMTS Turbo Code
    4.3.3 Serial Concatenation
    4.3.4 Partial Concatenation
    4.3.5 Turbo Decoding
  4.4 EXIT Charts
    4.4.1 Calculating an EXIT Chart
    4.4.2 Interpretation
  4.5 Weight Distribution
    4.5.1 Partial Weights
    4.5.2 Expected Weight Distribution
  4.6 Woven Convolutional Codes
    4.6.1 Encoding Schemes
    4.6.2 Distance Properties of Woven Codes
    4.6.3 Woven Turbo Codes
    4.6.4 Interleaver Design
  4.7 Summary

5 Space–Time Codes
  5.1 Introduction
    5.1.1 Digital Modulation Schemes
    5.1.2 Diversity
  5.2 Spatial Channels
    5.2.1 Basic Description
    5.2.2 Spatial Channel Models
    5.2.3 Channel Estimation
  5.3 Performance Measures
    5.3.1 Channel Capacity
    5.3.2 Outage Probability and Outage Capacity
    5.3.3 Ergodic Error Probability
  5.4 Orthogonal Space–Time Block Codes
    5.4.1 Alamouti's Scheme
    5.4.2 Extension to More than Two Transmit Antennas
    5.4.3 Simulation Results
  5.5 Spatial Multiplexing
    5.5.1 General Concept
    5.5.2 Iterative APP Preprocessing and Per-layer Decoding
    5.5.3 Linear Multilayer Detection
    5.5.4 Original BLAST Detection
    5.5.5 QL Decomposition and Interference Cancellation
    5.5.6 Performance of Multi-Layer Detection Schemes
    5.5.7 Unified Description by Linear Dispersion Codes
  5.6 Summary

A Algebraic Structures
  A.1 Groups, Rings and Finite Fields
    A.1.1 Groups
    A.1.2 Rings
    A.1.3 Finite Fields
  A.2 Vector Spaces
  A.3 Polynomials and Extension Fields
  A.4 Discrete Fourier Transform

B Linear Algebra

C Acronyms

Bibliography

Index
Preface

Modern information and communication systems are based on the reliable and efficient transmission of information. Channels encountered in practical applications are usually disturbed, regardless of whether they correspond to information transmission over noisy and time-variant mobile radio channels or to information transmission on optical discs that might be damaged by scratches. Owing to these disturbances, appropriate channel coding schemes have to be employed such that errors within the transmitted information can be detected or even corrected. To this end, channel coding theory provides suitable coding schemes for error detection and error correction. Besides good code characteristics with respect to the number of errors that can be detected or corrected, the complexity of the architectures used for implementing the encoding and decoding algorithms is important for practical applications.

The present book provides a concise overview of channel coding theory and practice as well as the accompanying algorithms, architectures and applications. The selection of the topics presented in this book is oriented towards those subjects that are relevant for information and communication systems in use today or in the near future. The focus is on those aspects of coding theory that are important for the understanding of these systems. This book places emphasis on the algorithms for encoding and decoding and their architectures, as well as the applications of the corresponding coding schemes in a unified framework.

The idea for this book originated from a two-day seminar on coding theory in the industrial context. We have tried to keep this seminar style in the book by highlighting the most important facts within the figures and by restricting the scope to the most important topics with respect to the applications of coding theory, especially within communication systems. This also means that many important and interesting topics could not be covered in order to be as concise as possible.

The target audience for the book are students of communication and information engineering as well as computer science at universities, and also applied mathematicians who are interested in a presentation that subsumes theory and practice of coding theory without sacrificing exactness or relevance with regard to real-world practical applications. Therefore, this book is well suited for engineers in industry who want to know about the theoretical basics of coding theory and their application in currently relevant communication systems.

The book is organised as follows. In Chapter 1 a brief overview of the principal architecture of a communication system is given and the information theory fundamentals underlying coding theory are summarised. The most important concepts of information theory, such as entropy and channel capacity as well as simple channel models, are described.
Chapter 2 presents the classical, i.e. algebraic, coding theory. The fundamentals of the encoding and decoding of block codes are explained, and the maximum likelihood decoding rule is derived as the optimum decoding strategy for minimising the word error probability after decoding a received word. Linear block codes and their definition based on generator and parity-check matrices are discussed. General performance measures and bounds relating important code characteristics such as the minimum Hamming distance and the code rate are presented, illustrating the compromises necessary between error detection and error correction capabilities and transmission efficiency. It is explained how new codes can be constructed from already known codes. Repetition codes, parity-check codes, Hamming codes, simplex codes and Reed–Muller codes are presented as examples. Since the task of decoding linear block codes is difficult in general, the algebraic properties of cyclic codes are exploited for efficient decoding algorithms. These cyclic codes, together with their generator and parity-check polynomials, are discussed, as well as efficient encoding and decoding architectures based on linear feedback shift registers. Important cyclic codes such as BCH codes and Reed–Solomon codes are presented, and an efficient algebraic decoding algorithm for the decoding of these cyclic codes is derived.

Chapter 3 deals with the fundamentals of convolutional coding. Convolutional codes can be found in many applications, for instance in dial-up modems, satellite communications and digital cellular systems. The major reason for this popularity is the existence of efficient decoding algorithms that can utilise soft input values from the demodulator. This so-called soft-input decoding leads to significant performance gains. Two famous examples of soft-input decoding algorithms are the Viterbi algorithm and the Bahl, Cocke, Jelinek, Raviv (BCJR) algorithm, which also provides a reliability output. Both algorithms are based on the trellis representation of the convolutional code. This highly repetitive structure makes trellis-based decoding very suitable for hardware implementations.

We start our discussion with the encoding of convolutional codes and some of their basic properties. This is followed by a presentation of the Viterbi algorithm and an analysis of the error correction performance achieved with this maximum likelihood decoding procedure. The concept of soft-output decoding and the BCJR algorithm are considered in Section 3.5. Soft-output decoding is a prerequisite for the iterative decoding of concatenated convolutional codes as introduced in Chapter 4. Finally, we consider an application of convolutional codes for mobile communication channels as defined in the Global System for Mobile communications (GSM) standard. In particular, the considered hybrid ARQ protocols are excellent examples of the adaptive coding systems that are required for strongly time-variant mobile channels.

As mentioned above, Chapter 4 is dedicated to the construction of long powerful codes based on the concatenation of simple convolutional component codes. These concatenated convolutional codes, for example the famous turbo codes, are capable of achieving low bit error rates at signal-to-noise ratios close to the theoretical Shannon limit. The term turbo reflects a property of the employed iterative decoding algorithm, where the decoder output of one iteration is used as the decoder input of the next iteration. This concept of iterative decoding was first introduced for the class of low-density parity-check codes. Therefore, we first introduce low-density parity-check codes in Section 4.1 and discuss the relation between these codes and concatenated code constructions. Then, we introduce some popular encoding schemes for concatenated convolutional codes and present three methods to analyse the performance of the corresponding codes. The EXIT chart method in Section 4.4 makes it possible to predict the behaviour of the iterative decoder by looking at the input/output relations of the individual constituent soft-output decoders. Next, we present a common approach in coding theory: we estimate the code performance with maximum likelihood decoding for an ensemble of concatenated codes. This method explains why many concatenated code constructions lead to a low minimum Hamming distance and therefore to a relatively poor performance at high signal-to-noise ratios. In Section 4.6 we consider code designs that lead to a higher minimum Hamming distance owing to a special encoder construction, called the woven encoder, or the application of designed interleavers.

The fifth chapter addresses space–time coding concepts, a still rather new topic in the area of radio communications. Although these techniques do not represent error-correcting codes in the classical sense, they can also be used to improve the reliability of a data link. Since space–time coding became popular only a decade ago, only a few concepts have found their way into current standards hitherto. However, many other approaches are currently being discussed. As already mentioned, we restrict this book to the most important and promising concepts.

While classical encoders and decoders are separated from the physical channel by modulators, equalisers, etc., and experience rather simple hyperchannels, this is not true for space–time coding schemes, which work directly on the physical channel. Therefore, Chapter 5 starts with a short survey of linear modulation schemes and explains the principle of diversity. Next, spatial channel models are described and different performance measures for their quantitative evaluation are discussed. Sections 5.4 and 5.5 introduce two space–time coding concepts with the highest practical relevance, namely orthogonal space–time block codes, which increase the diversity degree, and spatial multiplexing techniques, which boost the achievable data rate. For the latter approach, sophisticated signal processing algorithms are required at the receiver in order to separate the superimposed data streams again.

In the appendices a brief summary of algebraic structures such as finite fields and polynomial rings is given, which are needed for the treatment especially of classical algebraic codes, and the basics of linear algebra are briefly reviewed.

Finally, we would like to thank the Wiley team, especially Sarah Hinton as the responsible editor, for their support during the completion of the manuscript. We also thank Dr. rer. nat. Jens Schlembach, who was involved from the beginning of this book project and who gave encouragement when needed. Last but not least, we would like to give special thanks to our families – Fabian, Heike, Jana and Erik, Claudia, Hannah and Jakob, and Christiane – for their emotional support during the writing of the manuscript.

André Neubauer
Münster University of Applied Sciences

Jürgen Freudenberger
HTWG Konstanz, University of Applied Sciences

Volker Kühn
University of Rostock
1 Introduction

The reliable transmission of information over noisy channels is one of the basic requirements of digital information and communication systems. Here, transmission is understood both as transmission in space, e.g. over mobile radio channels, and as transmission in time by storing information in appropriate storage media. Because of this requirement, modern communication systems rely heavily on powerful channel coding methodologies. For practical applications these coding schemes do not only need to have good coding characteristics with respect to the capability of detecting or correcting errors introduced on the channel. They also have to be efficiently implementable, e.g. in digital hardware within integrated circuits. Practical applications of channel codes include space and satellite communications, data transmission, digital audio and video broadcasting and mobile communications, as well as storage systems such as computer memories or the compact disc (Costello et al., 1998).

In this introductory chapter we will give a brief introduction into the field of channel coding. To this end, we will describe the information theory fundamentals of channel coding. Simple channel models will be presented that will be used throughout the text. Furthermore, we will present the binary triple repetition code as an illustrative example of a simple channel code.

1.1 Communication Systems

In Figure 1.1 the basic structure of a digital communication system is shown, which represents the architecture of the communication systems in use today. Within the transmitter of such a communication system the following tasks are carried out:

● source encoding,
● channel encoding,
● modulation.
Figure 1.1: Basic structure of digital communication systems

[Figure: block diagram of the chain source encoder → channel encoder → modulator → channel → demodulator → channel decoder → source decoder, with the sequences u, b, r and û]

■ The sequence of information symbols u is encoded into the sequence of code symbols b, which are transmitted across the channel after modulation.

■ The sequence of received symbols r is decoded into the sequence of information symbols û, which are estimates of the originally transmitted information symbols.

In the receiver the corresponding inverse operations are implemented:

● demodulation,
● channel decoding,
● source decoding.

According to Figure 1.1 the modulator generates the signal that is used to transmit the sequence of symbols b across the channel (Benedetto and Biglieri, 1999; Neubauer, 2007; Proakis, 2001). Due to the noisy nature of the channel, the transmitted signal is disturbed. The noisy received signal is demodulated by the demodulator in the receiver, leading to the sequence of received symbols r. Since the received symbol sequence r usually differs from the transmitted symbol sequence b, a channel code is used such that the receiver is able to detect or even correct errors (Bossert, 1999; Lin and Costello, 2004; Neubauer, 2006b). To this end, the channel encoder introduces redundancy into the information sequence u. This redundancy can be exploited by the channel decoder for error detection or error correction by estimating the transmitted symbol sequence û.

In his fundamental work, Shannon showed that it is theoretically possible to realise an information transmission system with as small an error probability as required (Shannon, 1948). The prerequisite for this is that the information rate of the information source be smaller than the so-called channel capacity. In order to reduce the information rate, source coding schemes are used which are implemented by the source encoder in the transmitter and the source decoder in the receiver (McEliece, 2002; Neubauer, 2006a). Further information about source coding can be found elsewhere (Gibson et al., 1998; Sayood, 2000, 2003).
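To make this encoder/channel/decoder chain concrete, the following minimal Python sketch implements the binary triple repetition code announced above (treated in detail in Section 1.3, which lies outside this preview). The function names and the single bit flip used as a stand-in for the noisy channel are our own illustration, not the book's notation.

```python
# Illustrative sketch of the binary triple repetition code (not the
# book's own code): every information bit is transmitted three times,
# and the decoder recovers it by a majority vote.

def encode(u):
    """Channel encoder: map the information sequence u to the code sequence b."""
    return [bit for bit in u for _ in range(3)]

def decode(r):
    """Channel decoder: majority vote over each block of three received bits."""
    blocks = (r[i:i + 3] for i in range(0, len(r), 3))
    return [1 if sum(block) >= 2 else 0 for block in blocks]

u = [1, 0, 1, 1]          # information symbols u
b = encode(u)             # code symbols b
r = list(b)
r[1] ^= 1                 # the channel flips one bit of the first block
u_hat = decode(r)         # estimated information symbols û
print(u_hat == u)         # True: one error per block is corrected
```

The decoder corrects any single error within a block of three code symbols; the price for this error correction capability is a code rate of only 1/3.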
In order better to understand the theoretical basics of information transmission as well as channel coding, we now give a brief overview of information theory as introduced by Shannon in his seminal paper (Shannon, 1948). In this context we will also introduce the simple channel models that will be used throughout the text.

1.2 Information Theory

An important result of information theory is the finding that error-free transmission across a noisy channel is theoretically possible – as long as the information rate does not exceed the so-called channel capacity. In order to quantify this result, we need to measure information. Within Shannon's information theory this is done by considering the statistics of symbols emitted by information sources.

1.2.1 Entropy

Let us consider the discrete memoryless information source shown in Figure 1.2. At a given time instant, this discrete information source emits the random discrete symbol X = x_i, which assumes one out of M possible symbol values x_1, x_2, ..., x_M. The rates at which these symbol values appear are given by the probabilities P_X(x_1), P_X(x_2), ..., P_X(x_M) with P_X(x_i) = Pr{X = x_i}.

Figure 1.2: Discrete information source emitting discrete symbols X

[Figure: an information source emitting the random discrete symbol X]

■ The discrete information source emits the random discrete symbol X.
■ The symbol values x_1, x_2, ..., x_M appear with probabilities P_X(x_1), P_X(x_2), ..., P_X(x_M).
■ Entropy:
$$I(X) = -\sum_{i=1}^{M} P_X(x_i) \cdot \log_2\bigl(P_X(x_i)\bigr) \qquad (1.1)$$
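Equation (1.1) can be evaluated directly. The short Python sketch below is our own illustration (assuming the source statistics are given as a list of probabilities summing to one); terms with zero probability are skipped, following the convention 0 · log2(0) = 0.

```python
from math import log2

def entropy(probs):
    """Entropy I(X) in bits, Equation (1.1); zero-probability terms contribute 0."""
    return -sum(p * log2(p) for p in probs if p > 0.0)

print(entropy([0.5, 0.5]))   # 1.0 bit: a fair binary source
print(entropy([0.9, 0.1]))   # ~0.469 bits: a biased source carries less information
print(entropy([0.25] * 4))   # 2.0 bits: four equiprobable symbols
```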
The average information associated with the random discrete symbol X is given by the so-called entropy, measured in the unit 'bit':

$$I(X) = -\sum_{i=1}^{M} P_X(x_i) \cdot \log_2\bigl(P_X(x_i)\bigr).$$

For a binary information source that emits the binary symbols X = 0 and X = 1 with probabilities Pr{X = 0} = p_0 and Pr{X = 1} = 1 − Pr{X = 0} = 1 − p_0, the entropy is given by the so-called Shannon function or binary entropy function

$$I(X) = -p_0 \log_2(p_0) - (1 - p_0) \log_2(1 - p_0).$$

1.2.2 Channel Capacity

With the help of the entropy concept we can model a channel according to Berger's channel diagram shown in Figure 1.3 (Neubauer, 2006a). Here, X refers to the input symbol and R denotes the output symbol or received symbol. We now assume that M input symbol values x_1, x_2, ..., x_M and N output symbol values r_1, r_2, ..., r_N are possible. With the help of the conditional probabilities

$$P_{X|R}(x_i|r_j) = \Pr\{X = x_i \mid R = r_j\}$$

and

$$P_{R|X}(r_j|x_i) = \Pr\{R = r_j \mid X = x_i\}$$

the conditional entropies are given by

$$I(X|R) = -\sum_{i=1}^{M} \sum_{j=1}^{N} P_{X,R}(x_i, r_j) \cdot \log_2\bigl(P_{X|R}(x_i|r_j)\bigr)$$

and

$$I(R|X) = -\sum_{i=1}^{M} \sum_{j=1}^{N} P_{X,R}(x_i, r_j) \cdot \log_2\bigl(P_{R|X}(r_j|x_i)\bigr).$$

With these conditional probabilities the mutual information

$$I(X;R) = I(X) - I(X|R) = I(R) - I(R|X)$$

can be derived, which measures the amount of information that is transmitted across the channel from the input to the output for a given information source.

The so-called channel capacity C is obtained by maximising the mutual information I(X;R) with respect to the statistical properties of the input X, i.e. by appropriately choosing the probabilities {P_X(x_i)} for 1 ≤ i ≤ M. This leads to

$$C = \max_{\{P_X(x_i)\}_{1 \le i \le M}} I(X;R).$$

If the input entropy I(X) is smaller than the channel capacity C,

$$I(X) \stackrel{!}{<} C,$$

then information can be transmitted across the noisy channel with arbitrarily small error probability. Thus, the channel capacity C in fact quantifies the information transmission capacity of the channel.
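As a numerical illustration of these definitions (our own sketch, not the book's), the following Python code computes the mutual information I(X;R) = I(R) − I(R|X) of a discrete memoryless channel from a given input distribution and the transition probabilities P_R|X(r_j|x_i), and approximates the channel capacity of a binary-input channel by a simple grid search over P_X(0). For symmetric channels the maximum is attained at the uniform input distribution.

```python
from math import log2

def mutual_information(p_x, p_r_given_x):
    """I(X;R) = I(R) - I(R|X) in bits.
    p_x[i] = P_X(x_i); p_r_given_x[i][j] = P_{R|X}(r_j | x_i)."""
    m, n = len(p_x), len(p_r_given_x[0])
    # Output distribution P_R(r_j) = sum_i P_X(x_i) * P_{R|X}(r_j | x_i).
    p_r = [sum(p_x[i] * p_r_given_x[i][j] for i in range(m)) for j in range(n)]
    i_r = -sum(p * log2(p) for p in p_r if p > 0)            # entropy I(R)
    i_r_x = -sum(p_x[i] * p * log2(p)                        # conditional entropy I(R|X)
                 for i in range(m) for p in p_r_given_x[i] if p > 0)
    return i_r - i_r_x

# Approximate the capacity of a binary symmetric channel with eps = 0.1
# by searching over the input distribution {P_X(0), 1 - P_X(0)}.
eps = 0.1
bsc = [[1 - eps, eps], [eps, 1 - eps]]
C = max(mutual_information([p0, 1 - p0], bsc)
        for p0 in (k / 1000 for k in range(1, 1000)))
print(round(C, 4))  # ~0.531 bits; symmetry places the maximum at P_X(0) = 0.5
```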
Figure 1.3: Berger's channel diagram

[Figure: diagram relating the entropies I(X) and I(R), the conditional entropies I(X|R) and I(R|X), and the mutual information I(X;R)]

■ Mutual information:
$$I(X;R) = I(X) - I(X|R) = I(R) - I(R|X) \qquad (1.2)$$

■ Channel capacity:
$$C = \max_{\{P_X(x_i)\}_{1 \le i \le M}} I(X;R) \qquad (1.3)$$

1.2.3 Binary Symmetric Channel

As an important example of a memoryless channel we turn to the binary symmetric channel or BSC. Figure 1.4 shows the channel diagram of the binary symmetric channel with bit error probability ε. This channel transmits the binary symbol X = 0 or X = 1 correctly with probability 1 − ε, whereas the incorrect binary symbol R = 1 or R = 0 is emitted with probability ε. By maximising the mutual information I(X;R), the channel capacity of a binary symmetric channel is obtained according to

$$C = 1 + \varepsilon \log_2(\varepsilon) + (1 - \varepsilon) \log_2(1 - \varepsilon).$$

This channel capacity is equal to 1 if ε = 0 or ε = 1; for ε = 1/2 the channel capacity is 0. In contrast to the binary symmetric channel, which has discrete input and output symbols taken from binary alphabets, the so-called AWGN channel is defined on the basis of continuous real-valued random variables.¹

¹ In Chapter 5 we will also consider complex-valued random variables.
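This closed-form expression is easy to check numerically; the sketch below (again our own illustration) handles the boundary cases ε = 0 and ε = 1 separately, since log2(0) is undefined. For ε = 0.1 it reproduces the value C ≈ 0.531 found by the grid search above.

```python
from math import log2

def bsc_capacity(eps):
    """C = 1 + eps*log2(eps) + (1-eps)*log2(1-eps) for the binary symmetric channel."""
    if eps in (0.0, 1.0):
        return 1.0  # limiting value: the channel is deterministic, hence invertible
    return 1.0 + eps * log2(eps) + (1.0 - eps) * log2(1.0 - eps)

for eps in (0.0, 0.01, 0.1, 0.5):
    print(eps, round(bsc_capacity(eps), 4))
# eps = 0.5 yields C = 0: the output is statistically independent of the input.
```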