Transmitting and Gaining Data: Rudolf Ahlswede's Lectures on Information Theory 2. Softcover reprint of the original 1st ed. 2015 [Paperback]

  • Format: Paperback / softback, 461 pages, height x width: 235x155 mm, weight: 7197 g, 4 black-and-white illustrations; XVII, 461 p.
  • Series: Foundations in Signal Processing, Communications and Networking 11
  • Publication date: 22-Sep-2016
  • Publisher: Springer International Publishing AG
  • ISBN-10: 3319356569
  • ISBN-13: 9783319356563

The calculation of channel capacities was one of Rudolf Ahlswede's specialties and is the main topic of this second volume of his Lectures on Information Theory. Here we find a detailed account of some very classical material from the early days of Information Theory, including developments from the USA, Russia, Hungary and (which Ahlswede was probably in a unique position to describe) the German school centered around his supervisor Konrad Jacobs. These lectures also present an approach to a rigorous justification of the foundations of Information Theory. This is the second of several volumes documenting Rudolf Ahlswede's lectures on Information Theory. Each volume includes comments from an invited well-known expert. In the supplement to the present volume, Gerhard Kramer contributes his insights.

Classical information processing concerns the main tasks of gaining knowledge and the storage, transmission and hiding of data. The first task is the prime goal of Statistics. For the transmission and hiding of data, Shannon developed an impressive mathematical theory, called Information Theory, which he based on probabilistic models. The theory largely involves the concept of codes with small error probabilities that succeed in spite of the noise in the transmission, which is modeled by channels. The lectures presented in this work are suitable for graduate students in Mathematics, and also for those working in Theoretical Computer Science, Physics, and Electrical Engineering with a background in basic Mathematics. They can be used as the basis for courses or to supplement courses in many ways. Ph.D. students will also find research problems, often with conjectures, that offer potential subjects for a thesis. More advanced researchers may find questions which form the basis of entire research programs.
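Since the computation of channel capacities is the book's central theme (Chapter 2 is devoted to the Arimoto and Blahut algorithms), a small worked example may help orient readers. The Python sketch below is not taken from the book and assumes only NumPy; the function names are illustrative. It estimates the capacity of a discrete memoryless channel with the classical Blahut-Arimoto iteration and checks the result against the closed-form capacity C = 1 - h(eps) of a binary symmetric channel.

    import numpy as np

    def bsc_capacity(eps):
        # Closed-form capacity of a binary symmetric channel: C = 1 - h(eps) bits.
        if eps in (0.0, 1.0):
            return 1.0
        h = -eps * np.log2(eps) - (1 - eps) * np.log2(1 - eps)
        return 1.0 - h

    def blahut_arimoto(W, iterations=500, tol=1e-12):
        # W[x, y] = Pr(y | x): row-stochastic matrix of a discrete memoryless channel.
        # Returns (capacity estimate in bits per channel use, optimizing input distribution).
        nx = W.shape[0]
        p = np.full(nx, 1.0 / nx)                      # start from the uniform input
        for _ in range(iterations):
            q = p[:, None] * W                         # p(x) W(y|x)
            q = q / q.sum(axis=0, keepdims=True)       # backward channel q(x|y)
            with np.errstate(divide="ignore"):
                log_q = np.where(q > 0, np.log(q), 0.0)   # convention 0 log 0 = 0
            r = np.exp((W * log_q).sum(axis=1))
            p_new = r / r.sum()                        # re-estimated input distribution
            if np.max(np.abs(p_new - p)) < tol:
                p = p_new
                break
            p = p_new
        p_y = p @ W                                    # induced output distribution
        with np.errstate(divide="ignore", invalid="ignore"):
            ratio = np.where((W > 0) & (p_y > 0), W / p_y, 1.0)
        capacity = float((p[:, None] * W * np.log2(ratio)).sum())
        return capacity, p

    eps = 0.1
    W_bsc = np.array([[1 - eps, eps],
                      [eps, 1 - eps]])
    print(bsc_capacity(eps))             # 0.5310... (closed form)
    print(blahut_arimoto(W_bsc)[0])      # the iteration agrees to many decimals

For the binary symmetric channel with crossover probability 0.1 both routines give approximately 0.531 bits per channel use; the same iteration applies unchanged to any finite input and output alphabet, which is the setting of the discrete memoryless channels recurring throughout Part I.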

Part I Transmitting Data
1 Special Channels
3(166)
1.1 Lecture on the Weak Capacity of Averaged Channels
3(15)
1.1.1 Introduction
3(1)
1.1.2 Definitions
4(1)
1.1.3 A Channel Without Strong Capacity
5(4)
1.1.4 The Weak Capacity of an Averaged Discrete Channel
9(2)
1.1.5 The Weak Capacity of an Averaged Semi-continuous Channel
11(1)
1.1.6 Nonstationary Averaged Channels
12(2)
1.1.7 Averages of Channels with Respect to General Probability Distributions
14(4)
1.2 Lecture on Further Results for Averaged Channels Including C(λ)-Capacity, Side Information, Effect of Memory, and a Related, Not Familiar Optimistic Channel
18(19)
1.2.1 Averaged Channels Where Either the Sender or the Receiver Knows the Individual Channel Which Governs the Transmission
19(15)
1.2.2 Another Channel: The Optimistic Channel
34(3)
1.3 Lecture on The Structure of Capacity Functions for Compound Channels
37(24)
1.3.1 Definitions and Introduction of the Capacity Functions C(λ), C̄(λR), C(λR)
37(3)
1.3.2 Auxiliary Results
40(3)
1.3.3 The Structure of C(λ)
43(8)
1.3.4 The Relationship of C̄(λR), C(λR), and C(λ)
51(4)
1.3.5 Evaluation of C(λ) in Several Examples
55(6)
1.4 Lecture on Algebraic Compositions (Rings and Lattices) of Channels
61(13)
1.4.1 A Ring of Channels
62(3)
1.4.2 Min Channels Known as Compound Channels CC
65(1)
1.4.3 Compound Channels Treated with Maximal Coding
66(3)
1.4.4 For Comparison: The Method of Random Codes
69(2)
1.4.5 The New Channel AC
71(2)
1.4.6 The AC with Partial Knowledge of the Sender
73(1)
1.5 Lecture on Discrete Finite State Channels
74(26)
1.5.1 Definitions and Examples
74(2)
1.5.2 Two Performance Criteria: Lower and Upper Maximal Information Rates for the FSC
76(1)
1.5.3 Two Further Instructive Examples of Channels
77(1)
1.5.4 Independently Selected States
78(4)
1.5.5 The Finite State Channel with State Calculable by both Sender and Receiver
82(1)
1.5.6 Stochastically Varying Channels
83(2)
1.5.7 Random Codes and Weakly Varying Channels
85(9)
1.5.8 Side Information
94(1)
1.5.9 Sources with Arbitrarily Varying Letter Probabilities
95(3)
1.5.10 Channels with Varying Transmission Probabilities
98(2)
1.6 Lecture on Gallager's Converse for General Channels Including DFMC and FSC
100(18)
1.6.1 Lower Bounding Error Probabilities of Digits
100(4)
1.6.2 Application to FSC
104(2)
1.6.3 Indecomposable Channels
106(4)
1.6.4 A Lower Bound on the Probability of Decoding Error for the FSC
110(8)
1.7 Lecture on Information and Control: Matching Channels
118(51)
1.7.1 New Concepts and Results
118(5)
1.7.2 Definitions, Known Facts, and Abbreviations
123(2)
1.7.3 The Deterministic Matching Channel and Matching in Products of Bipartite Graphs
125(3)
1.7.4 Main Results on Matching in Products of Bipartite Graphs
128(4)
1.7.5 Matching in Products of Non-identical Bipartite Graphs
132(2)
1.7.6 An Exact Formula for the Matching Number of Powers of "Stared" Bipartite Graphs
134(2)
1.7.7 Two Examples Illustrating the Significance of Theorems 45 and 46
136(1)
1.7.8 Multi-way Deterministic Matching Channels
137(8)
1.7.9 The Controller Falls Asleep---on Matching Zero-Error Detection Codes
145(2)
1.7.10 The Matching Zero-Error Detection Capacity Cmde in a Genuine Example
147(1)
1.7.11 Feedback and also Randomization Increase the Capacity of the Matching Channel
148(1)
1.7.12 The Capacity for Matching Zero-Error Detection Codes with Feedback (MDCF) for W0
149(2)
1.7.13 Identification for Matching Channels
151(5)
1.7.14 Zero-Error Detection
156(4)
1.7.15 A Digression: Three Further Code Concepts
160(5)
References
165(4)
2 Algorithms for Computing Channel Capacities and Rate-Distortion Functions
169(20)
2.1 Lecture on Arimoto's Algorithm for Computing the Capacity of a DMC
169(9)
2.1.1 Mutual Information and Equivocation
170(1)
2.1.2 The Algorithm and Its Convergence
171(2)
2.1.3 The Convergence of the Algorithm
173(2)
2.1.4 Speed of Convergence
175(2)
2.1.5 Upper and Lower Bounds on the Capacity
177(1)
2.2 Lecture on Blahut's Algorithm for Computing Rate-Distortion Functions
178(4)
2.2.1 Basic Definitions and the Algorithm
178(2)
2.2.2 Convergence of the Algorithm
180(2)
2.3 Lecture on a Unified Treatment of the Computation of the Δ-distortion of a DMS and the Capacity of a DMS under Input Cost Constraint
182(7)
2.3.1 Preliminaries
182(1)
2.3.2 The Computation of G(s)
183(2)
2.3.3 Capacity Computing Algorithm
185(2)
References
187(2)
3 Shannon's Model for Continuous Transmission
189(80)
3.1 Lecture on Historical Remarks
189(4)
3.2 Lecture on Fundamental Theorems of Information Theory
193(18)
3.2.1 Stationary Sources
193(3)
3.2.2 Methods to Construct Sources
196(1)
3.2.3 The Ergodic Theorem in Hilbert Space
197(4)
3.2.4 The Theorem of McMillan
201(9)
3.2.5 Ergodic Sources
210(1)
3.3 Lecture on Stationary Channels
211(6)
3.3.1 A Concept of Channels
211(3)
3.3.2 Methods for the Construction of Channels
214(1)
3.3.3 Ergodic Channels
215(2)
3.4 Lecture on "Informational Capacity" and "Error Capacity" for Ergodic Channels
217(5)
3.4.1 Definition of the "Informational Capacity"
217(3)
3.4.2 The Coding Theorem
220(2)
3.5 Lecture on Another Definition of Capacity
222(7)
3.6 Lecture on Still Another Type of Operational Capacities---Risk as Performance Criterium
229(2)
3.6.1 Introduction
229(1)
3.6.2 Nedoma's Fundamental Paper Has Three Basic Ideas
230(1)
3.6.3 Report on the Results
230(1)
3.6.4 Discussion
231(1)
3.7 Lecture on the Discrete Finite-Memory Channel According to Feinstein and Wolfowitz
231(4)
3.7.1 Discussion
233(2)
3.8 Lecture on the Extension of the Shannon/McMillan Theorem by Jacobs
235(2)
3.9 Lecture on Achieving Channel Capacity in DFMC
237(2)
3.9.1 Introduction
237(1)
3.9.2 A Topology on the Linear Space L = L_P(Ω, B) of σ-Additive Finite Real-Valued Functions h: B → R
237(1)
3.9.3 R(P) Is an Upper Semi-continuous Functional (U.S.C.) for DFMC
238(1)
3.10 Lecture on the Structure of the Entropy Rate of a Discrete, Stationary Source with Finite Alphabet (DSS)
239(4)
3.11 Lecture on the Transmission of Bernoulli Sources Over Stationary Channels
243(5)
3.11.1 Sources
243(1)
3.11.2 Channels
244(1)
3.11.3 The Source Channel Hook-up
244(4)
3.12 Lecture on Block Coding for Weakly Continuous Channels
248(4)
3.12.1 Introduction
248(1)
3.12.2 Coding Theorems
249(2)
3.12.3 An Example
251(1)
3.12.4 Every Channel Is Almost Weakly Continuous
252(1)
3.13 Lecture on Zero-Error Transmission Over a DMC Requires Infinite Expected Coding Time
252(5)
3.13.1 Introduction
252(1)
3.13.2 A Class of Practical Codes
252(1)
3.13.3 Zero-Error Transmission of a Source Over a Channel
253(1)
3.13.4 Structure of Codes in Zero-Error Hook-up
254(1)
3.13.5 An Application to the Theory of Isomorphism of Stationary Processes
255(2)
3.14 Lecture on Zero-Error Stationary Coding Over Stationary Channels
257(4)
3.14.1 Capacities of Gray, Ornstein, and Dobrushin and of Kieffer: Introduction
257(1)
3.14.2 Principal Results
258(1)
3.14.3 Synchronization Words
259(2)
3.15 Lecture on Blackwell's Infinite Codes for the DMC
261(8)
3.15.1 Introduction and Result
261(1)
3.15.2 A More Mathematical Formulation
261(1)
3.15.3 A Technical Auxiliary Result
262(2)
References
264(5)
4 On Sliding-Block Codes
269(30)
4.1 Lecture on Sliding-Block Joint Source/Noisy Channel Coding Theorems
269(5)
4.1.1 Introduction
269(1)
4.1.2 Sliding-Block Coding for a DMC
270(2)
4.1.3 Sketch of the Proof of the Direct Part---the Basic Ideas
272(2)
4.2 Lecture on Block Synchronization Coding and Applications
274(13)
4.2.1 Introduction
274(3)
4.2.2 Sources, Channels, and Codes
277(2)
4.2.3 Statement and Discussion of Results
279(8)
4.3 Lecture on Sliding-Block Coding
287(12)
4.3.1 Introduction
287(1)
4.3.2 Block Coding
288(3)
4.3.3 Some Remarks on Periodic Sources
291(3)
4.3.4 Sliding-Block Coding for Weakly Continuous Channels
294(2)
References
296(3)
5 On λ-Capacities and Information Stability
299(24)
5.1 Lecture on a Report on Work of Parthasarathy and of Kieffer on λ-Capacities
299(2)
5.1.1 Historical Remarks and Preliminaries
299(2)
5.2 Asymptotic Properties of the Function N(n, λ)
301(1)
5.3 Channels with Additive Noise
301(2)
5.4 A General Formula for the Capacity of Stationary Nonanticipatory Channels
303(8)
5.4.1 The Shannon--McMillan Theorem
305(6)
5.5 Lecture on Information Density or Per Letter Information
311(3)
5.6 Lecture on Information Stability
314(1)
5.7 Lecture on Transmission of Messages over Channels
315(2)
5.7.1 Basic Concepts
315(2)
5.8 The Results
317(6)
References
320(3)
6 Channels with Infinite Alphabets
323(42)
6.1 Lecture on Introduction
323(1)
6.2 Lecture on Basic Concepts and Observations
324(3)
6.2.1 Channels and Codes
324(1)
6.2.2 Finite Length Codes and Compactness
325(2)
6.2.3 Convexity and Continuity Properties of N(K, λ)
327(1)
6.3 Infinite λ-codes for Product Channels
327(3)
6.4 Bounds on N(P, λ) in Terms of Stochastic Inequalities
330(5)
6.5 Lecture on Coding Theorem, Weak and Strong Converses for Stationary Channels
335(4)
6.6 Sharper Estimates for Channels with Moment Conditions
339(6)
6.7 A Necessary and Sufficient Condition for the Strong Converse in the Stationary Case
345(6)
6.8 Lecture on Memoryless Stationary Channels with Constrained Inputs
351(14)
6.8.1 Channels with Additive Noise and Additive Gaussian Noise
353(2)
6.8.2 Varying Channels in the Semi-Continuous Case
355(2)
6.8.3 Average Error Versus Maximal Error---Two Theories of Coding
357(1)
6.8.4 The Contraction Channel K
358(1)
6.8.5 An Upper Bound for the Maximal Error Capacity Region of K
359(2)
References
361(4)
Part II Gaining Data
7 Selected Topics of Information Theory and Mathematical Statistics
365(28)
7.1 Lecture on Hypotheses Testing, Bayes Estimates, and Fisher Information
365(7)
7.1.1 Hypotheses Testing
365(1)
7.1.2 Bayes Estimates
366(3)
7.1.3 Construction of Almost Surely Reliable Decision Functions
369(1)
7.1.4 Fisher Information
369(3)
7.2 Lecture on Information-Theoretical Approach to Expert Systems
372(21)
7.2.1 Basic Properties of the Relative Entropy
372(6)
7.2.2 Minimizing the Relative Entropy Under Linear Constraints
378(5)
7.2.3 Axiomatic Results on Inference for Inverse Problems
383(7)
References
390(3)
8 β-Biased Estimators in Data Compression
393(56)
8.1 Lecture on Introduction
393(2)
8.1.1 Notation
395(1)
8.2 Lecture on Models of Data Compression
395(4)
8.2.1 Source Coding
395(1)
8.2.2 Data Compression
396(3)
8.3 Lecture on Estimators
399(3)
8.3.1 Estimators and Data Compression
399(1)
8.3.2 Criteria for Estimators
400(2)
8.4 Lecture on a Suitable Loss Function and the Class of β-Biased Estimators
402(6)
8.4.1 Derivation of the Loss Function
402(1)
8.4.2 Discussion of an Alternative Loss Function
403(1)
8.4.3 Class of β-Biased Estimators
404(1)
8.4.4 Application of β-Biased Estimators to Different Criteria
404(4)
8.5 Lecture on Derivation of β-Biased Estimators
408(9)
8.5.1 Laplace's Estimator
408(1)
8.5.2 Bayes' Rule
409(1)
8.5.3 Generalization of Bayes' Rule to Continuous Parameters and n Observations
410(1)
8.5.4 Bayesian Estimators
411(1)
8.5.5 Application of a Bayesian Estimator to a Binary Source
412(1)
8.5.6 Admissibility of β-Biased Estimators
413(4)
8.6 Lecture on Approximations for the Redundancy
417(3)
8.7 Lecture on Outlook
420(2)
8.7.1 Preview of the Main Theorems
420(1)
8.7.2 Sketch of the Proofs
421(1)
8.8 Lecture on the Mathematical Tools
422(16)
8.8.1 Gilbert's Corollary
436(2)
8.9 Lecture on Main Results
438(5)
8.10 Lecture on Distributions
443(6)
8.10.1 The Poisson Distribution
443(1)
8.10.2 The Beta Distribution
443(1)
8.10.3 The Dirichlet Distribution
444(2)
8.10.4 The Central Moments
446(1)
8.10.5 Minimum of σP
447(1)
References
448(1)
About the Author 449(4)
Comments 453(2)
Gerhard Kramer
Notations 455(2)
Author Index 457(2)
Subject Index 459
Rudolf Ahlswede (1938-2010) studied Mathematics in Göttingen and held postdoctoral positions in Erlangen, Germany, and in Ohio, USA. From 1977 on he was a full Professor of Applied Mathematics at the University of Bielefeld. His work represents an essential contribution to information theory and networking. He developed and contributed to a number of central areas, including network coding and the theory of identification, while also advancing the fields of combinatorics and number theory. These efforts culminated in his research program Development of a General Theory of Information Transfer. In recognition of his work, Rudolf Ahlswede received several Best Paper awards as well as the distinguished Shannon Award.