
E-book: Error Correction Coding: Mathematical Methods and Algorithms

  • Format: EPUB+DRM
  • Publication date: 15-Dec-2020
  • Publisher: John Wiley & Sons Inc
  • Language: eng
  • ISBN-13: 9781119567493
  • Price: 148.14 €*
  • * The price is final, i.e., no further discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You must also create an Adobe ID. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), you must install this free app: PocketBook Reader (iOS / Android).

    To read on a PC or Mac, you must install Adobe Digital Editions (a free application designed specifically for reading e-books; not to be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

"The book provides a comprehensive introduction to error correction, covering both classical and modern material, including both block and stream (convolutional) codes. The book presents the needed mathematical material in a "just in time" approach. That is, when a topic is presented (such as group theory), it is immediately used in applications. Topics new to this edition include shortened codes, an inversionless BCH (Bose-Chaudhuri-Hocquenghem) decoding algorithm, approximate message passing, stochastic algorithms, bit-interleaved coded modulation, and more. The book also covers classical BCH, Reed-Solomon, Golay, Reed-Muller, and Hamming codes, which are still component codes in virtually every modern communication system, and explores recently developed polar codes and fountain codes, which are gaining importance."

Providing in-depth treatment of error correction 

Error Correction Coding: Mathematical Methods and Algorithms, 2nd Edition provides a comprehensive introduction to classical and modern methods of error correction. The presentation is clear and practical, built around a lab-oriented approach: readers are encouraged to implement the encoding and decoding algorithms themselves, with explicit algorithm statements balancing the mathematics of error correction against algorithmic development on how to actually perform the encoding and decoding. Both block and stream (convolutional) codes are discussed, and the mathematics required to understand them is introduced on a "just-in-time" basis as the reader progresses through the book.
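
To give a feel for this lab-oriented approach, here is a minimal sketch of a Hamming (7,4) encoder and syndrome decoder in C++ (Hamming codes are treated in Section 1.9). It is illustrative only and is not taken from the book's program files; the particular parity matrix P and the helper names encode, syndrome, and correct are this sketch's own choices.

// A minimal Hamming (7,4) encoder and syndrome decoder in C++.
// Illustrative sketch only; not taken from the book's program files.
#include <array>
#include <cstdio>

using Bits = std::array<int, 7>;

// Parity submatrix P for the systematic generator G = [I4 | P];
// this particular P is one valid choice for a Hamming code.
const int P[4][3] = {{1,1,0},{0,1,1},{1,1,1},{1,0,1}};

// Encode 4 message bits into the 7-bit codeword [message | parity].
Bits encode(const std::array<int, 4>& m) {
    Bits c{};
    for (int i = 0; i < 4; ++i) c[i] = m[i];
    for (int j = 0; j < 3; ++j) {
        int p = 0;
        for (int i = 0; i < 4; ++i) p ^= m[i] & P[i][j];
        c[4 + j] = p;
    }
    return c;
}

// Syndrome s = H r^T with H = [P^T | I3]; s = 0 means r is a codeword.
int syndrome(const Bits& r) {
    int s = 0;
    for (int j = 0; j < 3; ++j) {
        int sj = r[4 + j];
        for (int i = 0; i < 4; ++i) sj ^= r[i] & P[i][j];
        s |= sj << j;
    }
    return s;
}

// Correct a single-bit error by matching the syndrome to a column of H.
Bits correct(Bits r) {
    int s = syndrome(r);
    if (s == 0) return r;
    for (int pos = 0; pos < 7; ++pos) {
        Bits e{};
        e[pos] = 1;
        if (syndrome(e) == s) { r[pos] ^= 1; break; }
    }
    return r;
}

int main() {
    Bits c = encode({1, 0, 1, 1});  // codeword 1011100
    c[2] ^= 1;                      // inject a single-bit channel error
    Bits fixed = correct(c);
    for (int b : fixed) std::printf("%d", b);
    std::printf("\n");              // prints 1011100 again
}

This encode, corrupt, decode, check loop is the same basic pattern the simulation labs listed in the table of contents build on.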

The second edition increases the impact and reach of the book, updating it to discuss recent important technological advances. New material includes: 

  • Extensive coverage of LDPC codes, including a variety of decoding algorithms. 
  • A comprehensive introduction to polar codes, including systematic encoding/decoding and list decoding.   
  • An introduction to fountain codes. 
  • Modern applications to systems such as HDTV, DVB-T2, and cell phones.

Error Correction Coding includes extensive program files (for example, C++ code for all LDPC decoders and polar code decoders), laboratory materials for students to implement algorithms, and an updated solutions manual, all of which help the reader understand and retain the content.
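
Since the program files include C++ code for LDPC decoders, a rough sketch of one building block, the min-sum check-node update, suggests the flavor of that material. This is a generic textbook formulation, not the book's actual code; the function name checkNodeUpdate and the degree-3 example are this sketch's assumptions.

// Sketch of the min-sum check-node update, one building block of an
// LDPC decoder. Generic textbook formulation, not the book's code.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Given incoming variable-to-check LLRs at one check node, compute the
// outgoing check-to-variable LLR on each edge: the sign is the product
// of the other signs, the magnitude is the minimum of the other |LLR|s.
std::vector<double> checkNodeUpdate(const std::vector<double>& in) {
    const int d = static_cast<int>(in.size());
    std::vector<double> out(d);
    for (int v = 0; v < d; ++v) {
        double sign = 1.0, minMag = INFINITY;
        for (int u = 0; u < d; ++u) {
            if (u == v) continue;  // extrinsic rule: exclude target edge
            sign *= (in[u] < 0.0) ? -1.0 : 1.0;
            minMag = std::min(minMag, std::fabs(in[u]));
        }
        out[v] = sign * minMag;
    }
    return out;
}

int main() {
    // Toy degree-3 check node with incoming LLRs +2.0, -1.5, +0.5.
    std::vector<double> out = checkNodeUpdate({2.0, -1.5, 0.5});
    for (double m : out) std::printf("%+.2f ", m);
    std::printf("\n");  // prints -0.50 +0.50 -1.50
}

A full decoder would iterate this update together with the variable-node sums and a syndrome check for termination; Chapters 15 and 16 of the book develop such decoders in detail.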

The book covers classical BCH, Reed-Solomon, Golay, Reed-Muller, Hamming, and convolutional codes, which are still component codes in virtually every modern communication system. There are also thorough discussions of recently developed polar codes and fountain codes that serve to educate the reader on the newest developments in error correction.

Preface xvii
List of Program Files xxiii
List of Laboratory Exercises xxix
List of Algorithms xxxi
List of Figures xxxiii
List of Tables xli
List of Boxes xliii
About the Companion Website xlv
Part I Introduction and Foundations 1(68)
1 A Context for Error Correction Coding 3(66)
1.1 Purpose of This Book 3(1)
1.2 Introduction: Where Are Codes? 3(1)
1.3 The Communications System 4(6)
1.4 Basic Digital Communications 10(4)
1.4.1 Binary Phase-Shift Keying 10(1)
1.4.2 More General Digital Modulation 11(3)
1.5 Signal Detection 14(10)
1.5.1 The Gaussian Channel 14(2)
1.5.2 MAP and ML Detection 16(2)
1.5.3 Special Case: Binary Detection 18(1)
1.5.4 Probability of Error for Binary Detection 19(2)
1.5.5 Bounds on Performance: The Union Bound 21(2)
1.5.6 The Binary Symmetric Channel 23(1)
1.5.7 The BSC and the Gaussian Channel Model 24(1)
1.6 Memoryless Channels 24(1)
1.7 Simulation and Energy Considerations for Coded Signals 25(2)
1.8 Some Important Definitions and a Trivial Code: Repetition Coding 27(5)
1.8.1 Detection of Repetition Codes Over a BSC 27(4)
1.8.2 Soft-Decision Decoding of Repetition Codes Over the AWGN 31(1)
1.8.3 Simulation of Results 32(1)
1.8.4 Summary 32(1)
1.9 Hamming Codes 32(6)
1.9.1 Hard-Input Decoding Hamming Codes 34(2)
1.9.2 Other Representations of the Hamming Code 36(2)
1.10 The Basic Questions 38(1)
1.11 Historical Milestones of Coding Theory 39(1)
1.12 A Bit of Information Theory 39(21)
1.12.1 Information-Theoretic Definitions for Discrete Random Variables 39(4)
1.12.2 Data Processing Inequality 43(1)
1.12.3 Channels 44(2)
1.12.4 Channel Capacity 46(1)
1.12.5 Information-Theoretic Definitions for Continuous Random Variables 47(1)
1.12.6 The Channel Coding Theorem 48(1)
1.12.7 Proof of the Channel Coding Theorem 49(4)
1.12.8 Capacity for the Continuous-Time AWGN Channel 53(1)
1.12.9 Transmission at Capacity with Errors 54(1)
1.12.10 The Implication of the Channel Coding Theorem 55(1)
1.12.11 Non-Asymptotic Information Theory 56(4)
Lab 1 Simulating a Communications Channel 60(4)
Objective 60(1)
Background 60(1)
Use of Coding in Conjunction with the BSC 61(1)
Assignment 61(1)
Programming Part 61(1)
Resources and Implementation Suggestions 62(2)
1.13 Exercises 64(4)
1.14 References 68(1)
Part II Block Codes 69(384)
2 Groups and Vector Spaces 71(22)
2.1 Introduction 71(1)
2.2 Groups 71(11)
2.2.1 Subgroups 74(1)
2.2.2 Cyclic Groups and the Order of an Element 75(1)
2.2.3 Cosets 76(1)
2.2.4 Lagrange's Theorem 77(1)
2.2.5 Induced Operations; Isomorphism 78(3)
2.2.6 Homomorphism 81(1)
2.3 Fields: A Prelude 82(2)
2.4 Review of Linear Algebra 84(5)
2.5 Exercises 89(2)
2.6 References 91(2)
3 Linear Block Codes 93(30)
3.1 Basic Definitions 93(1)
3.2 The Generator Matrix Description of Linear Block Codes 94(2)
3.2.1 Rudimentary Implementation 96(1)
3.3 The Parity Check Matrix and Dual Codes 96(3)
3.3.1 Some Simple Bounds on Block Codes 98(1)
3.4 Error Detection and Correction Over Hard-Output Channels 99(5)
3.4.1 Error Detection 100(1)
3.4.2 Error Correction: The Standard Array 100(4)
3.5 Weight Distributions of Codes and Their Duals 104(2)
3.6 Binary Hamming Codes and Their Duals 106(2)
3.7 Performance of Linear Codes 108(5)
3.7.1 Error Detection Performance 108(1)
3.7.2 Error Correction Performance 109(3)
3.7.3 Performance for Soft-Decision Decoding 112(1)
3.8 Erasure Decoding 113(1)
3.8.1 Binary Erasure Decoding 114(1)
3.9 Modifications to Linear Codes 114(2)
3.10 Best-Known Linear Block Codes 116(1)
3.11 Exercises 116(5)
3.12 References 121(2)
4 Cyclic Codes, Rings, and Polynomials 123(56)
4.1 Introduction 123(1)
4.2 Basic Definitions 123(1)
4.3 Rings 124(2)
4.3.1 Rings of Polynomials 125(1)
4.4 Quotient Rings 126(2)
4.5 Ideals in Rings 128(2)
4.6 Algebraic Description of Cyclic Codes 130(1)
4.7 Nonsystematic Encoding and Parity Check 131(3)
4.8 Systematic Encoding 134(2)
4.9 Some Hardware Background 136(7)
4.9.1 Computational Building Blocks 136(1)
4.9.2 Sequences and Power Series 137(1)
4.9.3 Polynomial Multiplication 137(1)
4.9.4 Polynomial Division 138(3)
4.9.5 Simultaneous Polynomial Division and Multiplication 141(2)
4.10 Cyclic Encoding 143(4)
4.11 Syndrome Decoding 147(6)
4.12 Shortened Cyclic Codes 153(2)
4.12.1 Method 1: Simulating the Extra Clock Shifts 154(1)
4.12.2 Method 2: Changing the Error Pattern Detection Circuit 155(1)
4.13 Binary CRC Codes 155(6)
4.13.1 Byte-Oriented Encoding and Decoding Algorithms 157(4)
4.13.2 CRC Protecting Data Files or Data Packets 161(1)
Appendix 4.A Linear Feedback Shift Registers 161(7)
Appendix 4.A.1 Basic Concepts 162(3)
Appendix 4.A.2 Connection with Polynomial Division 165(2)
Appendix 4.A.3 Some Algebraic Properties of Shift Sequences 167(1)
Lab 2 Polynomial Division and Linear Feedback Shift Registers 168(2)
Objective 168(1)
Preliminary Exercises 168(1)
Programming Part: BinLFSR 168(1)
Resources and Implementation Suggestions 169(1)
Programming Part: BinPolyDiv 169(1)
Follow-On Ideas and Problems 170(1)
Lab 3 CRC Encoding and Decoding 170(2)
Objective 170(1)
Preliminary 170(1)
Programming Part 170(1)
Resources and Implementation Suggestions 171(1)
4.14 Exercises 172(6)
4.15 References 178(1)
5 Rudiments of Number Theory and Algebra 179(62)
5.1 Motivation 179(3)
5.2 Number Theoretic Preliminaries 182(14)
5.2.1 Divisibility 182(2)
5.2.2 The Euclidean Algorithm and Euclidean Domains 184(5)
5.2.3 An Application of the Euclidean Algorithm: The Sugiyama Algorithm 189(3)
5.2.4 Congruence 192(1)
5.2.5 The φ Function 193(1)
5.2.6 Some Cryptographic Payoff 193(3)
5.3 The Chinese Remainder Theorem 196(4)
5.3.1 The CRT and Interpolation 198(2)
5.4 Fields 200(7)
5.4.1 An Examination of R and C 201(3)
5.4.2 Galois Field Construction: An Example 204(3)
5.4.3 Connection with Linear Feedback Shift Registers 207(1)
5.5 Galois Fields: Mathematical Facts 207(4)
5.6 Implementing Galois Field Arithmetic 211(2)
5.6.1 Zech Logarithms 211(1)
5.6.2 Hardware Implementations 212(1)
5.7 Subfields of Galois Fields 213(1)
5.8 Irreducible and Primitive Polynomials 214(2)
5.9 Conjugate Elements and Minimal Polynomials 216(6)
5.9.1 Minimal Polynomials 218(4)
5.10 Factoring x^n - 1 222(1)
5.11 Cyclotomic Cosets 223(1)
Appendix 5.A How Many Irreducible Polynomials Are There? 224(4)
Appendix 5.A.1 Solving for I_m Explicitly: The Moebius Function 227(1)
Lab 4 Programming the Euclidean Algorithm 228(1)
Objective 228(1)
Preliminary Exercises 228(1)
Background 228(1)
Programming Part 229(1)
Lab 5 Programming Galois Field Arithmetic 229(2)
Objective 229(1)
Preliminary Exercises 229(1)
Programming Part 230(1)
5.12 Exercises 231(8)
5.13 References 239(2)
6 BCH and Reed-Solomon Codes: Designer Cyclic Codes 241(58)
6.1 BCH Codes 241(8)
6.1.1 Designing BCH Codes 241(3)
6.1.2 The BCH Bound 244(2)
6.1.3 Weight Distributions for Some Binary BCH Codes 246(2)
6.1.4 Asymptotic Results for BCH Codes 248(1)
6.2 Reed-Solomon Codes 249(4)
6.2.1 Reed-Solomon Construction 1 249(1)
6.2.2 Reed-Solomon Construction 2 250(1)
6.2.3 Encoding Reed-Solomon Codes 251(1)
6.2.4 MDS Codes and Weight Distributions for RS Codes 252(1)
6.3 Decoding BCH and RS Codes: The General Outline 253(3)
6.3.1 Computation of the Syndrome 254(1)
6.3.2 The Error Locator Polynomial 255(1)
6.3.3 Chien Search 255(1)
6.4 Finding the Error Locator Polynomial 256(12)
6.4.1 Simplifications for Narrow-Sense Binary Codes; Peterson's Algorithm 258(3)
6.4.2 Berlekamp-Massey Algorithm 261(1)
6.4.3 Characterization of LFSR Length in Massey's Algorithm 262(4)
6.4.4 Simplifications for Binary Codes 266(2)
6.5 Nonbinary BCH and RS Decoding 268(4)
6.5.1 Forney's Algorithm 268(4)
6.6 Euclidean Algorithm for the Error Locator Polynomial 272(2)
6.7 Erasure Decoding for Nonbinary BCH or RS Codes 274(3)
6.8 Galois Field Fourier Transform Methods 277(6)
6.8.1 Equivalence of the Two Reed-Solomon Code Constructions 281(1)
6.8.2 Frequency-Domain Decoding 282(1)
6.9 Variations and Extensions of Reed-Solomon Codes 283(4)
6.9.1 Simple Modifications 283(1)
6.9.2 Generalized Reed-Solomon Codes and Alternant Codes 283(2)
6.9.3 Goppa Codes 285(1)
6.9.4 Decoding Alternant Codes 286(1)
6.9.5 Cryptographic Connections: The McEliece Public Key Cryptosystem 286(1)
Lab 6 Programming the Berlekamp-Massey Algorithm 287(2)
Background 287(1)
Assignment 287(1)
Preliminary Exercises 287(1)
Programming Part 288(1)
Resources and Implementation Suggestions 288(1)
Lab 7 Programming the BCH Decoder 289(1)
Objective 289(1)
Preliminary Exercises 289(1)
Programming Part 289(1)
Resources and Implementation Suggestions 290(1)
Follow-On Ideas and Problems 290(1)
Lab 8 Reed-Solomon Encoding and Decoding 290(1)
Objective 290(1)
Background 290(1)
Programming Part 290(1)
Appendix 6.A Proof of Newton's Identities 291(2)
6.11 Exercises 293(4)
6.12 References 297(2)
7 Alternate Decoding Algorithms for Reed-Solomon Codes 299(72)
7.1 Introduction: Workload for Reed-Solomon Decoding 299(1)
7.2 Derivations of the Welch-Berlekamp Key Equation 299(7)
7.2.1 The Welch-Berlekamp Derivation of the WB Key Equation 300(4)
7.2.2 Derivation from the Conventional Key Equation 304(2)
7.3 Finding the Error Values 306(1)
7.4 Methods of Solving the WB Key Equation 307(19)
7.4.1 Background: Modules 308(1)
7.4.2 The Welch-Berlekamp Algorithm 309(6)
7.4.3 A Modular Approach to the Solution of the WB Key Equation 315(11)
7.5 Erasure Decoding with the WB Key Equation 326(1)
7.6 The Guruswami-Sudan Decoding Algorithm and Soft RS Decoding 326(41)
7.6.1 Bounded Distance, ML, and List Decoding 326(1)
7.6.2 Error Correction by Interpolation 327(1)
7.6.3 Polynomials in Two Variables 328(5)
7.6.4 The GS Decoder: The Main Theorems 333(7)
7.6.5 Algorithms for Computing the Interpolation Step 340(11)
7.6.6 A Special Case: M = 1 and L = 1 351(2)
7.6.7 An Algorithm for the Factorization Step: The Roth-Ruckenstein Algorithm 353(5)
7.6.8 Soft-Decision Decoding of Reed-Solomon Codes 358(9)
7.7 Exercises 367(3)
7.8 References 370(1)
8 Other Important Block Codes 371(36)
8.1 Introduction 371(1)
8.2 Hadamard Matrices, Codes, and Transforms 371(5)
8.2.1 Introduction to Hadamard Matrices 371(2)
8.2.2 The Paley Construction of Hadamard Matrices 373(3)
8.2.3 Hadamard Codes 376(1)
8.3 Reed-Muller Codes 376(17)
8.3.1 Boolean Functions 377(1)
8.3.2 Definition of the Reed-Muller Codes 378(2)
8.3.3 Encoding and Decoding Algorithms for First-Order RM Codes 380(5)
8.3.4 The Reed Decoding Algorithm for RM(r, m) Codes, r ≥ 1 385(7)
8.3.5 Other Constructions of Reed-Muller Codes 392(1)
8.4 Building Long Codes from Short Codes: The Squaring Construction 393(4)
8.5 Quadratic Residue Codes 397(2)
8.6 Golay Codes 399(4)
8.6.1 Decoding the Golay Code 400(3)
8.7 Exercises 403(2)
8.8 References 405(2)
9 Bounds on Codes 407(18)
9.1 The Gilbert-Varshamov Bound 410(1)
9.2 The Plotkin Bound 411(1)
9.3 The Griesmer Bound 412(1)
9.4 The Linear Programming and Related Bounds 413(6)
9.4.1 Krawtchouk Polynomials 415(1)
9.4.2 Character 415(2)
9.4.3 Krawtchouk Polynomials and Characters 417(2)
9.5 The McEliece-Rodemich-Rumsey-Welch Bound 419(1)
9.6 Exercises 420(4)
9.7 References 424(1)
10 Bursty Channels, Interleavers, and Concatenation 425(14)
10.1 Introduction to Bursty Channels 425(1)
10.2 Interleavers 425(3)
10.3 An Application of Interleaved RS Codes: Compact Discs 428(1)
10.4 Product Codes 429(2)
10.5 Reed-Solomon Codes 431(1)
10.6 Concatenated Codes 431(1)
10.7 Fire Codes 432(4)
10.7.1 Fire Code Definition 432(2)
10.7.2 Decoding Fire Codes: Error Trapping Decoding 434(2)
10.8 Exercises 436(1)
10.9 References 437(2)
11 Soft-Decision Decoding Algorithms 439(14)
11.1 Introduction and General Notation 439(2)
11.2 Generalized Minimum Distance Decoding 441(3)
11.2.1 Distance Measures and Properties 441(3)
11.3 The Chase Decoding Algorithms 444(1)
11.4 Halting the Search: An Optimality Condition 445(2)
11.5 Ordered Statistic Decoding 447(2)
11.6 Soft Decoding Using the Dual Code: The Hartmann-Rudolph Algorithm 449(2)
11.7 Exercises 451(1)
11.8 References 452(1)
Part III Codes on Graphs 453(136)
12 Convolutional Codes 455(90)
12.1 Introduction and Basic Notation 455(5)
12.1.1 The State 459(1)
12.2 Definition of Codes and Equivalent Codes 460(11)
12.2.1 Catastrophic Encoders 464(2)
12.2.2 Polynomial and Rational Encoders 466(1)
12.2.3 Constraint Length and Minimal Encoders 467(3)
12.2.4 Systematic Encoders 470(1)
12.3 Decoding Convolutional Codes 471(17)
12.3.1 Introduction and Notation 471(3)
12.3.2 The Viterbi Algorithm 474(9)
12.3.3 Some Implementation Issues 483(5)
12.4 Some Performance Results 488(3)
12.5 Error Analysis for Convolutional Codes 491(14)
12.5.1 Enumerating Paths Through the Trellis 494(5)
12.5.2 Characterizing Node Error Probability Pe and Bit Error Rate Pb 499(2)
12.5.3 A Bound on Pd for Discrete Channels 501(2)
12.5.4 A Bound on Pd for BPSK Signaling Over the AWGN Channel 503(2)
12.5.5 Asymptotic Coding Gain 505(1)
12.6 Tables of Good Codes 505(1)
12.7 Puncturing 506(3)
12.7.1 Puncturing to Achieve Variable Rate 508(1)
12.8 Suboptimal Decoding Algorithms for Convolutional Codes 509(15)
12.8.1 Tree Representations 510(1)
12.8.2 The Fano Metric 510(4)
12.8.3 The Stack Algorithm 514(2)
12.8.4 The Fano Algorithm 516(7)
12.8.5 Other Issues for Sequential Decoding 523(1)
12.8.6 A Variation on the Viterbi Algorithm: The M Algorithm 524(1)
12.9 Convolutional Codes as Block Codes and Tailbiting Codes 524(1)
12.10 A Modified Expression for the Path Metric 525(3)
12.11 Soft Output Viterbi Algorithm (SOVA) 528(5)
12.12 Trellis Representations of Block and Cyclic Codes 533(2)
12.12.1 Block Codes 533(1)
12.12.2 Cyclic Codes 533(1)
12.12.3 Trellis Decoding of Block Codes 534(1)
Lab 9 Programming Convolutional Encoders 535(2)
Objective 535(1)
Background 535(1)
Programming Part 536(1)
Lab 10 Convolutional Decoders: The Viterbi Algorithm 537(1)
Objective 537(1)
Background 537(1)
Programming Part 538(1)
12.13 Exercises 538(5)
12.14 References 543(2)
13 Trellis-Coded Modulation 545(44)
13.1 Adding Redundancy by Adding Signals 545(1)
13.2 Background on Signal Constellations 545(2)
13.3 TCM Example 547(8)
13.3.1 The General Ungerboeck Coding Framework 553(1)
13.3.2 The Set Partitioning Idea 553(2)
13.4 Some Error Analysis for TCM Codes 555(7)
13.4.1 General Considerations 555(1)
13.4.2 A Description of the Error Events 556(4)
13.4.3 Known Good TCM Codes 560(2)
13.5 Decoding TCM Codes 562(1)
13.6 Rotational Invariance 562(6)
13.6.1 Differential Encoding 566(1)
13.6.2 Constellation Labels and Partitions 567(1)
13.7 Multidimensional TCM 568(10)
13.7.1 Some Advantages of Multidimensional TCM 569(1)
13.7.2 Lattices and Sublattices 570(8)
13.8 Multidimensional TCM Example: The V.34 Modem Standard 578(7)
13.9 Exercises 585(1)
Lab 11 Trellis-Coded Modulation Encoding and Decoding 586(1)
Objective 586(1)
Background 586(1)
Programming Part 586(1)
13.10 References 586(3)
Part IV Iteratively Decoded Codes 589(188)
14 Turbo Codes 591(46)
14.1 Introduction 591(2)
14.2 Encoding Parallel Concatenated Codes 593(2)
14.3 Turbo Decoding Algorithms 595(21)
14.3.1 The MAP Decoding Algorithm 596(1)
14.3.2 Notation 596(2)
14.3.3 Posterior Probability 598(2)
14.3.4 Computing αi and βi 600(1)
14.3.5 Computing γi 601(1)
14.3.6 Normalization 602(1)
14.3.7 Summary of the BCJR Algorithm 603(2)
14.3.8 A Matrix/Vector Formulation 605(1)
14.3.9 Comparison of the Viterbi Algorithm and the BCJR Algorithm 605(1)
14.3.10 The BCJR Algorithm for Systematic Codes 606(1)
14.3.11 Turbo Decoding Using the BCJR Algorithm 607(2)
14.3.12 Likelihood Ratio Decoding 609(3)
14.3.13 Statement of the Turbo Decoding Algorithm 612(1)
14.3.14 Turbo Decoding Stopping Criteria 612(2)
14.3.15 Modifications of the MAP Algorithm 614(2)
14.3.16 Corrections to the Max-Log-MAP Algorithm 616(1)
14.3.17 The Soft-Output Viterbi Algorithm 616(1)
14.4 On the Error Floor and Weight Distributions 616(6)
14.4.1 The Error Floor 616(2)
14.4.2 Spectral Thinning and Random Permuters 618(3)
14.4.3 On Permuters 621(1)
14.5 EXIT Chart Analysis 622(4)
14.5.1 The EXIT Chart 624(2)
14.6 Block Turbo Coding 626(2)
14.7 Turbo Equalization 628(3)
14.7.1 Introduction to Turbo Equalization 628(1)
14.7.2 The Framework for Turbo Equalization 629(2)
Lab 12 Turbo Code Decoding 631(1)
Objective 631(1)
Background 631(1)
Programming Part 631(1)
14.8 Exercises 631(3)
14.9 References 634(3)
15 Low-Density Parity-Check Codes: Introduction, Decoding, and Analysis 637(80)
15.1 Introduction 637(1)
15.2 LDPC Codes: Construction and Notation 637(4)
15.3 Tanner Graphs 641(1)
15.4 Decoding LDPC Codes 642(52)
15.4.1 Decoding Using Log-Likelihood Ratios 642(10)
15.4.2 Decoding Using Probabilities 652(7)
15.4.3 Variations on Decoding Algorithms: The Min-Sum Decoder 659(3)
15.4.4 Variations on Decoding Algorithms: Min-Sum with Correction 662(7)
15.4.5 Hard-Decision Decoding 669(5)
15.4.6 Divide and Concur Decoding 674(7)
15.4.7 Difference Map Belief Propagation Decoding 681(1)
15.4.8 Linear Programming Decoding 682(9)
15.4.9 Decoding on the Binary Erasure Channel 691(1)
15.4.10 BEC Channels and Stopping Sets 692(2)
15.5 Why Low-Density Parity-Check Codes? 694(1)
15.6 The Iterative Decoder on General Block Codes 695(1)
15.7 Density Evolution 696(3)
15.8 EXIT Charts for LDPC Codes 699(3)
15.9 Irregular LDPC Codes 702(6)
15.9.1 Degree Distribution Pairs 703(1)
15.9.2 Density Evolution for Irregular Codes 704(2)
15.9.3 Computation and Optimization of Density Evolution 706(2)
15.9.4 Using Irregular Codes 708(1)
15.10 More on LDPC Code Construction 708(1)
15.11 Encoding LDPC Codes 708(2)
15.12 A Variation: Low-Density Generator Matrix Codes 710(1)
Lab 13 Programming an LDPC Decoder 710(2)
Objective 710(1)
Background 710(1)
Assignment 711(1)
Numerical Considerations 711(1)
15.13 Exercises 712(3)
15.14 References 715(2)
16 Low-Density Parity-Check Codes: Designs and Variations 717(60)
16.1 Introduction 717(1)
16.2 Repeat-Accumulate Codes 717(6)
16.2.1 Decoding RA Codes 720(1)
16.2.2 Irregular RA Codes 721(1)
16.2.3 RA Codes with Multiple Accumulators 722(1)
16.3 LDPC Convolutional Codes 723(2)
16.4 Quasi-Cyclic Codes 725(5)
16.4.1 QC Generator Matrices 725(2)
16.4.2 Constructing QC Generator Matrices from QC Parity Check Matrices 727(3)
16.5 Construction of LDPC Codes Based on Finite Fields 730(6)
16.5.1 I. Construction of QC-LDPC Codes Based on the Minimum-Weight Codewords of a Reed-Solomon Code with Two Information Symbols 731(2)
16.5.2 II. Construction of QC-LDPC Codes Based on a Special Subclass of RS Codes 733(1)
16.5.3 III. Construction of QC-LDPC Codes Based on Subgroups of a Finite Field 734(1)
16.5.4 IV. Construction of QC-LDPC Codes Based on Subgroups of the Multiplicative Group of a Finite Field 735(1)
16.5.5 Construction of QC-LDPC Codes Based on Primitive Elements of a Field 736(1)
16.6 Code Design Based on Finite Geometries 736(6)
16.6.1 Rudiments of Euclidean Geometry 736(4)
16.6.2 A Family of Cyclic EG-LDPC Codes 740(1)
16.6.3 Construction of LDPC Codes Based on Parallel Bundles of Lines 741(1)
16.6.4 Constructions Based on Other Combinatoric Objects 742(1)
16.7 Ensembles of LDPC Codes 742(4)
16.7.1 Regular Ensembles 743(1)
16.7.2 Irregular Ensembles 743(2)
16.7.3 Multi-Edge-Type Ensembles 745(1)
16.8 Constructing LDPC Codes by Progressive Edge Growth (PEG) 746(3)
16.9 Protograph and Multi-Edge-Type LDPC Codes 749(1)
16.10 Error Floors and Trapping Sets 750(1)
16.11 Importance Sampling 751(20)
16.11.1 Importance Sampling: General Principles 752(2)
16.11.2 Importance Sampling for Estimating Error Probability 754(1)
Conventional Sampling (MC) 755(1)
Importance Sampling (IS) 756(9)
16.11.3 Importance Sampling for Tanner Trees 765(6)
16.12 Fountain Codes 771(3)
16.12.1 Conventional Erasure Correction Codes 772(1)
16.12.2 Tornado Codes 772(2)
16.12.3 Luby Transform Codes 774(1)
16.12.4 Raptor Codes 774(1)
16.13 References 774(3)
Part V Polar Codes 777(108)
17 Polar Codes 779(106)
17.1 Introduction and Preview 779(2)
17.2 Notation and Channels 781(1)
17.3 Channel Polarization, N = 2 Channel 782(11)
17.3.1 Encoding 782(1)
17.3.2 Synthetic Channels and Mutual Information 783(3)
17.3.3 Synthetic Channel Transition Probabilities 786(1)
17.3.4 An Example with N = 2 Using the Binary Erasure Channel 786(1)
17.3.5 Successive Cancellation Decoding 787(5)
17.3.6 Tree Representation 792(1)
17.3.7 The Polar Coding Idea 793(1)
17.4 Channel Polarization, N > 2 Channels 793(33)
17.4.1 Channel Combining: Extension from N/2 to N Channels 793(4)
17.4.2 Pseudocode for Encoder for Arbitrary N 797(1)
17.4.3 Transition Probabilities and Channel Splitting 798(4)
17.4.4 Channel Polarization Demonstrated: An Example Using the Binary Erasure Channel for N > 2 802(3)
17.4.5 Polar Coding 805(2)
17.4.6 Tree Representation 807(1)
17.4.7 Successive Cancellation Decoding 807(3)
17.4.8 SC Decoding from a Message Passing Point of View on the Tree 810(1)
17.4.9 A Decoding Example with N = 4 811(2)
17.4.10 A Decoding Example with N = 8 813(10)
17.4.11 Pseudo-Code Description of Successive Cancellation Algorithm 823(3)
17.5 Some Theorems of Polar Coding Theory 826(11)
17.5.1 I(W) and Z(W) for General B-DMCs 826(1)
17.5.2 Channel Polarization 827(3)
17.5.3 The Polarization Theorem 830(1)
17.5.4 A Few Facts About Martingales 830(1)
17.5.5 Proof of the Polarization Theorem 831(1)
17.5.6 Another Polarization Theorem 832(3)
17.5.7 Rate of Polarization 835(1)
17.5.8 Probability of Error Performance 835(2)
17.6 Designing Polar Codes 837(2)
17.6.1 Code Design by Bhattacharyya Bound 837(1)
17.6.2 Monte Carlo Estimation of Z(W_N^(i)) 838(1)
17.7 Perspective: The Channel Coding Theorem 839(1)
17.8 Systematic Encoding of Polar Codes 839(11)
17.8.1 Polar Systematic Encoding via the Encoder Graph 840(2)
17.8.2 Polar Systematic Encoding via Arıkan's Method 842(1)
17.8.3 Systematic Encoding: The Bit Reverse Permutation 843(1)
17.8.4 Decoding Systematically Encoded Codes 843(1)
17.8.5 Flexible Systematic Encoding 843(3)
17.8.6 Involutions and Domination Contiguity 846(1)
17.8.7 Polar Codes and Domination Contiguity 847(3)
17.8.8 Modifications for Polar Codes with Bit-Reverse Permutation 850(1)
17.9 List Decoding of Polar Codes 850(16)
17.9.1 The Likelihood Data Structure P 851(2)
17.9.2 Normalization 853(1)
17.9.3 Code to Compute P 854(1)
17.9.4 The Bits Data Structure C 855(1)
17.9.5 Code to Compute C 856(1)
17.9.6 Supporting Data Structures 857(1)
17.9.7 Code for Polar List Decoding 857(5)
17.9.8 An Example of List Decoding 862(2)
17.9.9 Computational Complexity 864(1)
17.9.10 Modified Polar Codes 865(1)
17.10 LLR-Based Successive Cancellation List Decoding 866(1)
17.10.1 Implementation Considerations 867(1)
17.11 Simplified Successive Cancellation Decoding 867(4)
17.11.1 Modified SC Decoding 870(1)
17.12 Relations with Reed-Muller Codes 871(1)
17.13 Hardware and Throughput Performance Results 871(1)
17.14 Generalizations, Extensions, and Variations 871(1)
Appendix 17.A B_N is a Bit-Reverse Permutation 872(1)
Appendix 17.B The Relationship of the Bhattacharyya Parameter to Channel Capacity 873(7)
Appendix 17.B.1 Error Probability for Two Codewords 873(1)
Appendix 17.B.2 Proof of Inequality (17.59) 874(5)
Appendix 17.B.3 Proof of Inequality (17.60) [16] 879(1)
Appendix 17.C Proof of Theorem 17.12 880(2)
17.15 Exercises 882(1)
17.16 References 883(2)
Part VI Applications 885(14)
18 Some Applications of Error Correction in Modern Communication Systems 887(12)
18.1 Introduction 887(1)
18.2 Digital Video Broadcast T2 (DVB-T2) 887(5)
18.2.1 BCH Outer Encoding 887(1)
18.2.2 LDPC Inner Encoding 888(4)
18.3 Digital Cable Television 892(3)
18.4 E-UTRA and Long-Term Evolution 895(3)
18.4.1 LTE Rate 1/3 Convolutional Encoder 897(1)
18.4.2 LTE Turbo Code 897(1)
18.5 References 898(1)
Part VII Space-Time Coding 899(26)
19 Fading Channels and Space-Time Codes 901(24)
19.1 Introduction 901(1)
19.2 Fading Channels 901(4)
19.2.1 Rayleigh Fading 904(1)
19.3 Diversity Transmission and Reception: The MIMO Channel 905(4)
19.3.1 The Narrowband MIMO Channel 907(1)
19.3.2 Diversity Performance with Maximal-Ratio Combining 908(1)
19.4 Space-Time Block Codes 909(8)
19.4.1 The Alamouti Code 909(2)
19.4.2 A More General Formulation 911(1)
19.4.3 Performance Calculation 911(5)
19.4.4 Complex Orthogonal Designs 916(1)
19.5 Space-Time Trellis Codes 917(2)
19.5.1 Concatenation 919(1)
19.6 How Many Antennas? 919(2)
19.7 Estimating Channel Information 921(1)
19.8 Exercises 922(1)
19.9 References 923(2)
References 925(14)
Index 939
TODD K. MOON is a Professor in the Electrical and Computer Engineering Department at Utah State University. His research interests include information systems (communications, signal processing, and controls) and the application of principles of mathematics to problems involving the transmission, extraction, modeling, compression, or analysis of signals. He is the author of numerous journal and conference articles and graduate-level texts on signal processing and error correction coding.