Preface    xiii
List of Acronyms    xv
Notations    xix
Introduction    xxiii
Chapter 1 Introduction to Information Theory    1 | (56)
1.1 Introduction    1 | (1)
1.2 Review of probabilities    2 | (5)
1.2.1 Discrete random variables    2 | (2)
1.2.2 Continuous random variables    4 | (1)
1.2.3 Jensen's inequality    5 | (1)
1.2.4    5 | (2)
1.3 Entropy and mutual information    7 | (13)
1.3.1 Comments on the concept of information    7 | (1)
1.3.2 A logarithmic measure of information    7 | (3)
1.3.3    10 | (1)
1.3.4 Entropy and average mutual information    11 | (4)
1.3.5 Kullback-Leibler divergence    15 | (1)
1.3.6 Differential entropy    15 | (2)
1.3.7 Typical and jointly typical sequences    17 | (3)
1.4 Lossless source coding theorems    20 | (10)
1.4.1    20 | (1)
1.4.2 Entropy and source redundancy    20 | (2)
1.4.3 Fundamental theorem of source coding    22 | (1)
1.4.4 Lossless source coding    23 | (7)
1.5 Lossy source coding theorem    30 | (4)
1.5.1    30 | (1)
1.5.2    30 | (1)
1.5.3 Lossy source coding theorem    31 | (3)
1.6 Transmission channel models    34 | (6)
1.6.1 Binary symmetric channel    34 | (2)
1.6.2 Discrete memoryless channels    36 | (2)
1.6.3 Binary erasure channel    38 | (1)
1.6.4 Additive white Gaussian noise channel    38 | (2)
1.7 Capacity of a transmission channel    40 | (14)
1.7.1    40 | (1)
1.7.2 Capacity of a transmission channel    40 | (2)
1.7.3 Fundamental theorem of channel coding    42 | (1)
1.7.4 Capacity of the binary symmetric channel    43 | (2)
1.7.5 Capacity of the binary erasure channel    45 | (1)
1.7.6 Capacity of the additive white Gaussian noise channel    46 | (3)
1.7.7 Graphical representation    49 | (5)
1.8 Exercises    54 | (3)
1.8.1 Exercise 1: calculation of the entropy for a discrete channel    54 | (1)
1.8.2 Exercise 2: computation of the mutual information    55 | (1)
1.8.3 Exercise 3: capacity of the additive white Gaussian noise channel    55 | (1)
1.8.4 Exercise 4: binary symmetric channel    55 | (1)
1.8.5 Exercise 5: Z channel    56 | (1)
1.8.6 Exercise 6: discrete channel    56 | (1)
Chapter 2 Source Coding    57 | (64)
2.1 Introduction    57 | (1)
2.2 Algorithms for lossless source coding    58 | (11)
2.2.1    58 | (1)
2.2.2 Huffman's algorithm    58 | (2)
2.2.3 Evaluation of the entropy of a written text    60 | (3)
2.2.4    63 | (2)
2.2.5    65 | (2)
2.2.6 Lempel-Ziv-Welch (LZW) algorithm    67 | (2)
2.3 Sampling and quantization    69 | (18)
2.3.1 Review of the sampling theorem    69 | (1)
2.3.2 Scalar quantization    70 | (8)
2.3.3 Optimal scalar quantization    78 | (3)
2.3.4 Vector quantization    81 | (5)
2.3.5 Optimal vector quantization    86 | (1)
2.4 Coding techniques for analog sources with memory    87 | (14)
2.4.1    87 | (1)
2.4.2    88 | (3)
2.4.3 Predictive scalar quantization    91 | (5)
2.4.4    96 | (3)
2.4.5    99 | (2)
2.5 Application to image and sound compression    101 | (15)
2.5.1 Application to still image compression    101 | (1)
2.5.2 Application to speech coding    102 | (7)
2.5.3 Application to audio compression    109 | (7)
2.6 Exercises    116 | (5)
2.6.1 Exercise 1: entropy and Huffman coding    116 | (1)
2.6.2 Exercise 2: entropy and Huffman coding for a source with memory    116 | (1)
2.6.3 Exercise 3: LZ78 encoding and decoding    116 | (1)
2.6.4 Exercise 4: scalar quantization and Huffman coding    117 | (1)
2.6.5 Exercise 5: scalar quantization and distortion    117 | (1)
2.6.6 Exercise 6: DPCM coding and decoding    118 | (3)
Chapter 3 Linear Block Codes    121 | (108)
3.1 Introduction    121 | (1)
3.2 Finite fields    122 | (5)
3.2.1    122 | (1)
3.2.2    123 | (1)
3.2.3 Irreducible and primitive polynomials    123 | (2)
3.2.4 Finite field with 2^m elements    125 | (2)
3.3 Binary linear block codes    127 | (25)
3.3.1    127 | (3)
3.3.2 Minimum distance of a code    130 | (1)
3.3.3 Parity check matrix    131 | (1)
3.3.4 Weight enumerator functions    132 | (2)
3.3.5 Error correction and error detection capabilities of linear block codes    134 | (2)
3.3.6 Bounds on the minimum distance of linear block codes    136 | (2)
3.3.7 Bounds on the rate in the non-asymptotic regime    138 | (4)
3.3.8    142 | (4)
3.3.9 Graphical representations of binary linear block codes    146 | (6)
3.4 Decoding of binary linear block codes    152 | (20)
3.4.1    152 | (2)
3.4.2    154 | (3)
3.4.3 Hard input decoding of binary linear block codes    157 | (6)
3.4.4 Soft input decoding of binary linear block codes    163 | (3)
3.4.5 Erasure decoding of binary linear block codes    166 | (6)
3.5 Performance of linear block codes    172 | (10)
3.5.1 Performance of linear block codes with hard input decoding    172 | (1)
3.5.2    173 | (2)
3.5.3 Performance of linear block codes with soft input decoding    175 | (4)
3.5.4    179 | (3)
3.5.5 Performance comparison of hard and soft input decoders    182 | (1)
3.6 Cyclic codes    182 | (32)
3.6.1 Definition and properties    182 | (3)
3.6.2 Properties of cyclic codes    185 | (4)
3.6.3 Error detection using CRC and automatic repeat request    189 | (3)
3.6.4 Encoding of cyclic codes    192 | (6)
3.6.5 Decoding of cyclic codes    198 | (4)
3.6.6    202 | (11)
3.6.7    213 | (1)
3.7    214 | (2)
3.8 Exercises    216 | (13)
3.8.1 Exercise 1: repetition code    216 | (1)
3.8.2 Exercise 2: error probability on a binary symmetric channel    217 | (1)
3.8.3 Exercise 3: re-emission strategy    217 | (1)
3.8.4 Exercise 4: simplex code (7,3)    218 | (1)
3.8.5 Exercise 5: linear block code (6,3)    218 | (1)
3.8.6 Exercise 6: Hamming code (7,4)    219 | (1)
3.8.7 Exercise 7: Reed-Muller code (8,4)    219 | (1)
3.8.8 Exercise 8: hard trellis-based decoding of block code    220 | (1)
3.8.9 Exercise 9: soft trellis-based decoding of block code    221 | (1)
3.8.10 Exercise 10: soft input decoding    222 | (1)
3.8.11 Exercise 11: comparison of hard and soft decoding    223 | (1)
3.8.12 Exercise 12: Hamming code    224 | (1)
3.8.13 Exercise 13: cyclic code    224 | (1)
3.8.14 Exercise 14: Hamming code    225 | (1)
3.8.15 Exercise 15: performance of Reed-Solomon code (255,291)    226 | (1)
3.8.16    226 | (1)
3.8.17 Exercise 17: Reed-Solomon code    227 | (2)
Chapter 4 Convolutional Codes    229 | (24)
4.1 Introduction    229 | (1)
4.2 Mathematical representations and hardware structures    229 | (9)
4.2.1    229 | (6)
4.2.2 Matrix representation of convolutional codes    235 | (3)
4.3 Graphical representation of convolutional codes    238 | (4)
4.3.1 State transition diagram    238 | (1)
4.3.2    239 | (2)
4.3.3    241 | (1)
4.4 Free distance and transfer function of convolutional codes    242 | (4)
4.5 Viterbi's algorithm for the decoding of convolutional codes    246 | (3)
4.5.1 Complexity of the Viterbi decoder    248 | (1)
4.6 Punctured convolutional codes    249 | (1)
4.7    249 | (1)
4.8 Exercises    250 | (3)
4.8.1 Exercise 1: systematic recursive convolutional code    250 | (1)
4.8.2 Exercise 2: non-recursive code and Viterbi decoding    251 | (2)
Chapter 5 Concatenated Codes and Iterative Decoding    253 | (86)
5.1 Introduction    253 | (1)
5.2 Soft input soft output decoding    254 | (34)
5.2.1    254 | (6)
5.2.2 Sum-product algorithm    260 | (9)
5.2.3 Forward-backward algorithm    269 | (19)
5.3 Low-density parity-check (LDPC) codes    288 | (18)
5.3.1    288 | (5)
5.3.2    293 | (2)
5.3.3 Iterative decoding of LDPC codes    295 | (11)
5.3.4    306 | (1)
5.4 Parallel concatenated convolutional codes or turbo codes    306 | (24)
5.4.1    306 | (1)
5.4.2 Structure of the encoder    307 | (4)
5.4.3 Performance study of turbo codes    311 | (4)
5.4.4 Iterative decoding of turbo codes    315 | (4)
5.4.5    319 | (4)
5.4.6 Criteria and interleaver design    323 | (5)
5.4.7    328 | (2)
5.5 Other classes of concatenated codes    330 | (5)
5.5.1 Parallel concatenated block codes    330 | (3)
5.5.2 Serial concatenated convolutional codes    333 | (1)
5.5.3 Repeat-accumulate codes    333 | (1)
5.5.4    333 | (2)
5.6 Exercises    335 | (4)
5.6.1 Exercise 1: soft input soft output decoding of a two-state convolutional code    335 | (1)
5.6.2 Exercise 2: soft input soft output decoding of a (4,3) linear block code    336 | (3)
Appendices    339 | (6)
Appendix A    341 | (2)
Appendix B    343 | (2)
Bibliography    345 | (8)
Index    353 | (2)
Summary of Volume 2    355