
E-book: Digital Communications 1: Source and Channel Coding [Wiley Online]

  • Format: 400 pages
  • Publication date: 06-Oct-2015
  • Publisher: ISTE Ltd and John Wiley & Sons Inc
  • ISBN-10: 1119232422
  • ISBN-13: 9781119232421
  • Wiley Online
  • Price: 174,45 €*
  • * price granting access for an unlimited number of simultaneous users for an unlimited period

The communication chain consists of a source and a recipient separated by a transmission channel, which may be a length of cable, an optical fiber, a radio channel or a satellite link. Whatever the channel, the processing blocks implemented in the communication chain rest on the same foundations. This book sets out to describe these blocks in detail.
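To make the chain concrete, here is a minimal Python sketch (ours, not an excerpt from the book) that passes random source bits through the simplest channel model studied in Chapter 1, the binary symmetric channel; the function name bsc and the crossover probability p = 0.05 are illustrative choices only.

```python
import random

rng = random.Random(0)

def bsc(bits, p):
    """Binary symmetric channel: flip each bit independently with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

# Source -> channel -> recipient, with no coding: the recipient sees
# roughly a fraction p of the bits in error.
source = [rng.getrandbits(1) for _ in range(10_000)]
received = bsc(source, p=0.05)
error_rate = sum(s != r for s, r in zip(source, received)) / len(source)
print(f"empirical bit error rate: {error_rate:.3f}")  # close to 0.05
```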

In this first volume, after presenting the basics of information theory, we study source coding techniques, both lossless and lossy. We then analyze the error-correcting block codes, convolutional codes and concatenated codes used in current systems.
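As a brief taste of the Chapter 1 material, the following sketch (again ours, under the same caveat) evaluates the binary entropy function H(p) and the resulting capacity C = 1 - H(p) of the binary symmetric channel, in bits per channel use; the sample values of p are arbitrary.

```python
from math import log2

def h2(p):
    """Binary entropy function H(p), in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# Capacity of the binary symmetric channel: C = 1 - H(p) bits per channel use.
for p in (0.0, 0.05, 0.11, 0.5):
    print(f"p = {p:.2f}  ->  C = {1 - h2(p):.3f} bit/channel use")
```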

Preface xiii
List of Acronyms xv
Notations xix
Introduction xxiii
Chapter 1 Introduction to Information Theory 1(56)
1.1 Introduction 1(1)
1.2 Review of probabilities 2(5)
1.2.1 Discrete random variables 2(2)
1.2.2 Continuous random variables 4(1)
1.2.3 Jensen's inequality 5(1)
1.2.4 Random signals 5(2)
1.3 Entropy and mutual information 7(13)
1.3.1 Comments on the concept of information 7(1)
1.3.2 A logarithmic measure of information 7(3)
1.3.3 Mutual information 10(1)
1.3.4 Entropy and average mutual information 11(4)
1.3.5 Kullback-Leibler divergence 15(1)
1.3.6 Differential entropy 15(2)
1.3.7 Typical and jointly typical sequences 17(3)
1.4 Lossless source coding theorems 20(10)
1.4.1 Introduction 20(1)
1.4.2 Entropy and source redundancy 20(2)
1.4.3 Fundamental theorem of source coding 22(1)
1.4.4 Lossless source coding 23(7)
1.5 Theorem for lossy source coding 30(4)
1.5.1 Introduction 30(1)
1.5.2 Definitions 30(1)
1.5.3 Lossy source coding theorem 31(3)
1.6 Transmission channel models 34(6)
1.6.1 Binary symmetric channel 34(2)
1.6.2 Discrete channels without memory 36(2)
1.6.3 Binary erasure channel 38(1)
1.6.4 Additive white Gaussian noise channel 38(2)
1.7 Capacity of a transmission channel 40(14)
1.7.1 Introduction 40(1)
1.7.2 Capacity of a transmission channel 40(2)
1.7.3 Fundamental theorem of channel coding 42(1)
1.7.4 Capacity of the binary symmetric channel 43(2)
1.7.5 Capacity of erasure channel 45(1)
1.7.6 Capacity of additive white Gaussian noise channel 46(3)
1.7.7 Graphical representation 49(5)
1.8 Exercises 54(3)
1.8.1 Exercise 1: calculation of the entropy for a discrete channel 54(1)
1.8.2 Exercise 2: computation of the mutual information 55(1)
1.8.3 Exercise 3: capacity of the additive white Gaussian noise channel 55(1)
1.8.4 Exercise 4: binary symmetric channel 55(1)
1.8.5 Exercise 5: Z channel 56(1)
1.8.6 Exercise 6: discrete channel 56(1)
Chapter 2 Source Coding 57(64)
2.1 Introduction 57(1)
2.2 Algorithms for lossless source coding 58(11)
2.2.1 Run length coding 58(1)
2.2.2 Huffman's algorithm 58(2)
2.2.3 Evaluation of the entropy of a written text 60(3)
2.2.4 Arithmetic coding 63(2)
2.2.5 Algorithm LZ78 65(2)
2.2.6 Lempel-Ziv-Welch (LZW) algorithm 67(2)
2.3 Sampling and quantization 69(18)
2.3.1 Review of the sampling theorem 69(1)
2.3.2 Scalar quantization 70(8)
2.3.3 Optimal scalar quantization 78(3)
2.3.4 Vector quantization 81(5)
2.3.5 Optimal vector quantization 86(1)
2.4 Coding techniques for analog sources with memory 87(14)
2.4.1 Introduction 87(1)
2.4.2 Linear prediction 88(3)
2.4.3 Predictive scalar quantization 91(5)
2.4.4 Transform coding 96(3)
2.4.5 Subband coding 99(2)
2.5 Application to the image and sound compression 101(15)
2.5.1 Application to the still image compression 101(1)
2.5.2 Application to speech coding 102(7)
2.5.3 Application to audio compression 109(7)
2.6 Exercises 116(5)
2.6.1 Exercise 1: entropy and Huffman's coding 116(1)
2.6.2 Exercise 2: entropy and Huffman's coding for a source with memory 116(1)
2.6.3 Exercise 3: LZ78 encoding and decoding 116(1)
2.6.4 Exercise 4: scalar quantization and Huffman's coding 117(1)
2.6.5 Exercise 5: scalar quantization and distortion 117(1)
2.6.6 Exercise 6: DPCM coding and decoding 118(3)
Chapter 3 Linear Block Codes 121(108)
3.1 Introduction 121(1)
3.2 Finite fields 122(5)
3.2.1 Fields 122(1)
3.2.2 Finite fields 123(1)
3.2.3 Irreducible and primitive polynomials 123(2)
3.2.4 Finite field with 2^m elements 125(2)
3.3 Linear block codes 127(25)
3.3.1 Introduction 127(3)
3.3.2 Minimum distance of a code 130(1)
3.3.3 Parity check matrix 131(1)
3.3.4 Weight enumerator functions 132(2)
3.3.5 Error correction and error detection capabilities of linear block codes 134(2)
3.3.6 Bounds on the minimum distance of the linear block codes 136(2)
3.3.7 Bounds on the rate in the non-asymptotic regime 138(4)
3.3.8 Main block codes 142(4)
3.3.9 Graphical representations of binary linear block codes 146(6)
3.4 Decoding of binary linear block codes 152(20)
3.4.1 Introduction 152(2)
3.4.2 Optimal decoding 154(3)
3.4.3 Hard input decoding of binary linear block codes 157(6)
3.4.4 Soft input decoding of binary linear block codes 163(3)
3.4.5 Erasure decoding of binary linear block codes 166(6)
3.5 Performances of linear block codes 172(10)
3.5.1 Performances of linear block codes with hard input decoding 172(1)
3.5.2 Union bound 173(2)
3.5.3 Performances of linear block codes with soft input decoding 175(4)
3.5.4 Coding gain 179(3)
3.5.5 Performance comparison of hard and soft input decoders 182(1)
3.6 Cyclic codes 182(32)
3.6.1 Definition and properties 182(3)
3.6.2 Properties of the cyclic codes 185(4)
3.6.3 Error detection using CRC and automatic repeat request 189(3)
3.6.4 Encoding of cyclic codes 192(6)
3.6.5 Decoding of cyclic codes 198(4)
3.6.6 BCH codes 202(11)
3.6.7 Reed-Solomon codes 213(1)
3.7 Applications 214(2)
3.8 Exercises 216(13)
3.8.1 Exercise 1: repetition code 216(1)
3.8.2 Exercise 2: error probability on a binary symmetric channel 217(1)
3.8.3 Exercise 3: re-emission strategy 217(1)
3.8.4 Exercise 4: simplex code (7,3) 218(1)
3.8.5 Exercise 5: linear block code (6,3) 218(1)
3.8.6 Exercise 6: Hamming code (7,4) 219(1)
3.8.7 Exercise 7: Reed-Muller code (8,4) 219(1)
3.8.8 Exercise 8: hard trellis based decoding of block code 220(1)
3.8.9 Exercise 9: soft trellis based decoding of block code 221(1)
3.8.10 Exercise 10: soft input decoding 222(1)
3.8.11 Exercise 11: comparison of hard and soft decoding 223(1)
3.8.12 Exercise 12: Hamming code 224(1)
3.8.13 Exercise 13: cyclic code 224(1)
3.8.14 Exercise 14: Hamming code 225(1)
3.8.15 Exercise 15: performance of Reed-Solomon code (255,291) 226(1)
3.8.16 Exercise 16: CRC 226(1)
3.8.17 Exercise 17: Reed-Solomon code 227(2)
Chapter 4 Convolutional Codes 229(24)
4.1 Introduction 229(1)
4.2 Mathematical representations and hardware structures 229(9)
4.2.1 Definitions 229(6)
4.2.2 Matrix representation of convolutional codes 235(3)
4.3 Graphical representation of the convolutional codes 238(4)
4.3.1 State transition diagram 238(1)
4.3.2 Trellis diagram 239(2)
4.3.3 TWL graphs 241(1)
4.4 Free distance and transfer function of convolutional codes 242(4)
4.5 Viterbi's algorithm for the decoding of convolutional codes 246(3)
4.5.1 Complexity of the Viterbi decoder 248(1)
4.6 Punctured convolutional codes 249(1)
4.7 Applications 249(1)
4.8 Exercises 250(3)
4.8.1 Exercise 1: systematic recursive convolutional code 250(1)
4.8.2 Exercise 2: non-recursive code and Viterbi decoding 251(2)
Chapter 5 Concatenated Codes and Iterative Decoding 253(86)
5.1 Introduction 253(1)
5.2 Soft input soft output decoding 254(34)
5.2.1 Introduction 254(6)
5.2.2 Sum-product algorithm 260(9)
5.2.3 Forward-backward algorithm 269(19)
5.3 LDPC codes 288(18)
5.3.1 Definition 288(5)
5.3.2 LDPC encoding 293(2)
5.3.3 Iterative decoding of the LDPC codes 295(11)
5.3.4 Applications 306(1)
5.4 Parallel concatenated convolutional codes or turbo codes 306(24)
5.4.1 Introduction 306(1)
5.4.2 Structure of the encoder 307(4)
5.4.3 Performance study of turbo codes 311(4)
5.4.4 Iterative decoding of turbo codes 315(4)
5.4.5 EXIT charts 319(4)
5.4.6 Criteria and interleaver design 323(5)
5.4.7 Applications 328(2)
5.5 Other classes of concatenated codes 330(5)
5.5.1 Parallel concatenated block codes 330(3)
5.5.2 Serial concatenated convolutional codes 333(1)
5.5.3 Repeat-accumulate codes 333(1)
5.5.4 Product codes 333(2)
5.6 Exercises 335(4)
5.6.1 Exercise 1: soft input soft output decoding of a two-state convolutional code 335(1)
5.6.2 Exercise 2: soft input soft output decoding of a (4,3) linear block code 336(3)
Appendices 339(6)
Appendix A 341(2)
Appendix B 343(2)
Bibliography 345(8)
Index 353(2)
Summary of Volume 2 355
Didier Le Ruyet received his Eng. degree and his Ph.D. degree from the Conservatoire National des Arts et Métiers (CNAM) in 1994 and 2001, respectively. In 2009, he received the Habilitation à diriger des recherches from Paris XIII University. He has been a full professor at CNAM since 2010 and is deputy director of the Center for Research in Computer Science and Telecommunications (CEDRIC). He has published about 100 papers in refereed journals and conference proceedings. His main research and teaching activities lie in the areas of digital communication, wireless communication and signal processing, including channel coding and multi-antenna transmission.

Mylène Pischella received her engineering degree and her Ph.D. in electronics and telecommunications from Télécom ParisTech. She is an associate professor at the Conservatoire National des Arts et Métiers (CNAM), where she is responsible for courses on digital communications, wireless communications and information theory.