
E-book: Statistical Inference for Engineers and Data Scientists

Pierre Moulin (University of Illinois, Urbana-Champaign), Venugopal V. Veeravalli (University of Illinois, Urbana-Champaign)
  • Format: EPUB+DRM
  • Publication date: 22-Nov-2018
  • Publisher: Cambridge University Press
  • Language: English
  • ISBN-13: 9781316946688
  • Price: 76,56 €*
  • * The price is final, i.e., no further discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you need to install special software to read it. You also need to create an Adobe ID (more information here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), you need to install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, you need to install Adobe Digital Editions (this is a free application designed specifically for reading e-books; not to be confused with Adobe Reader, which is most likely already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

An up-to-date and mathematically accessible introduction to the tools needed to address modern inference problems in engineering and data science. Richly illustrated with examples and exercises connecting the theory with practice, it is the 'go-to' guide for students studying the topic, and an excellent reference for researchers and practitioners.

This book is a mathematically accessible and up-to-date introduction to the tools needed to address modern inference problems in engineering and data science, ideal for graduate students taking courses on statistical inference and detection and estimation, and an invaluable reference for researchers and professionals. With a wealth of illustrations and examples to explain the key features of the theory and to connect with real-world applications, additional material to explore more advanced concepts, and numerous end-of-chapter problems to test the reader's knowledge, this textbook is the 'go-to' guide for learning about the core principles of statistical inference and its application in engineering and data science. The password-protected solutions manual and the image gallery from the book are available online.

Reviews

'This book presents a rigorous and comprehensive coverage of the concepts underlying modern statistical inference, and provides a lucid exposition of the fundamental concepts. A distinguishing feature of the book is the large number of thoughtfully constructed examples, which go a long way towards aiding the reader in understanding and assimilating the concepts. As no particular domain expertise is assumed other than probability theory, the book should be widely accessible to a broad readership.' Kannan Ramchandran, University of California, Berkeley

'A wide-ranging, rigorous, yet accessible account of hypothesis testing and estimation, the pillars of statistical signal processing, communications, and data science at large.' Tsachy Weissman, STMicroelectronics Chair, Founding Director of the Stanford Compression Forum, Stanford University, California

Other information

A mathematically accessible textbook introducing all the tools needed to address modern inference problems in engineering and data science.
Table of Contents

Preface
List of Acronyms
1 Introduction
1.1 Background
1.2 Notation
1.2.1 Probability Distributions
1.2.2 Conditional Probability Distributions
1.2.3 Expectations and Conditional Expectations
1.2.4 Unified Notation
1.2.5 General Random Variables
1.3 Statistical Inference
1.3.1 Statistical Model
1.3.2 Some Generic Estimation Problems
1.3.3 Some Generic Detection Problems
1.4 Performance Analysis
1.5 Statistical Decision Theory
1.5.1 Conditional Risk and Optimal Decision Rules
1.5.2 Bayesian Approach
1.5.3 Minimax Approach
1.5.4 Other Non-Bayesian Rules
1.6 Derivation of Bayes Rule
1.7 Link Between Minimax and Bayesian Decision Theory
1.7.1 Dual Concept
1.7.2 Game Theory
1.7.3 Saddlepoint
1.7.4 Randomized Decision Rules
Exercises
References
Part I Hypothesis Testing
2 Binary Hypothesis Testing
2.1 General Framework
2.2 Bayesian Binary Hypothesis Testing
2.2.1 Likelihood Ratio Test
2.2.2 Uniform Costs
2.2.3 Examples
2.3 Binary Minimax Hypothesis Testing
2.3.1 Equalizer Rules
2.3.2 Bayes Risk Line and Minimum Risk Curve
2.3.3 Differentiable V(π0)
2.3.4 Nondifferentiable V(π0)
2.3.5 Randomized LRTs
2.3.6 Examples
2.4 Neyman-Pearson Hypothesis Testing
2.4.1 Solution to the NP Optimization Problem
2.4.2 NP Rule
2.4.3 Receiver Operating Characteristic
2.4.4 Examples
2.4.5 Convex Optimization
Exercises
3 Multiple Hypothesis Testing
3.1 General Framework
3.2 Bayesian Hypothesis Testing
3.2.1 Optimal Decision Regions
3.2.2 Gaussian Ternary Hypothesis Testing
3.3 Minimax Hypothesis Testing
3.4 Generalized Neyman-Pearson Detection
3.5 Multiple Binary Tests
3.5.1 Bonferroni Correction
3.5.2 False Discovery Rate
3.5.3 Benjamini-Hochberg Procedure
3.5.4 Connection to Bayesian Decision Theory
Exercises
References
4 Composite Hypothesis Testing
4.1 Introduction
4.2 Random Parameter Θ
4.2.1 Uniform Costs Over Each Hypothesis
4.2.2 Nonuniform Costs Over Hypotheses
4.3 Uniformly Most Powerful Test
4.3.1 Examples
4.3.2 Monotone Likelihood Ratio Theorem
4.3.3 Both Composite Hypotheses
4.4 Locally Most Powerful Test
4.5 Generalized Likelihood Ratio Test
4.5.1 GLRT for Gaussian Hypothesis Testing
4.5.2 GLRT for Cauchy Hypothesis Testing
4.6 Random versus Nonrandom Θ
4.7 Non-Dominated Tests
4.8 Composite m-ary Hypothesis Testing
4.8.1 Random Parameter Θ
4.8.2 Non-Dominated Tests
4.8.3 m-GLRT
4.9 Robust Hypothesis Testing
4.9.1 Robust Detection with Conditionally Independent Observations
4.9.2 Epsilon-Contamination Class
Exercises
References
5 Signal Detection
5.1 Introduction
5.2 Problem Formulation
5.3 Detection of Known Signal in Independent Noise
5.3.1 Signal in i.i.d. Gaussian Noise
5.3.2 Signal in i.i.d. Laplacian Noise
5.3.3 Signal in i.i.d. Cauchy Noise
5.3.4 Approximate NP Test
5.4 Detection of Known Signal in Correlated Gaussian Noise
5.4.1 Reduction to i.i.d. Noise Case
5.4.2 Performance Analysis
5.5 m-ary Signal Detection
5.5.1 Bayes Classification Rule
5.5.2 Performance Analysis
5.6 Signal Selection
5.6.1 i.i.d. Noise
5.6.2 Correlated Noise
5.7 Detection of Gaussian Signals in Gaussian Noise
5.7.1 Detection of a Gaussian Signal in White Gaussian Noise
5.7.2 Detection of i.i.d. Zero-Mean Gaussian Signal
5.7.3 Diagonalization of Signal Covariance
5.7.4 Performance Analysis
5.7.5 Gaussian Signals with Nonzero Mean
5.8 Detection of Weak Signals
5.9 Detection of Signal with Unknown Parameters in White Gaussian Noise
5.9.1 General Approach
5.9.2 Linear Gaussian Model
5.9.3 Nonlinear Gaussian Model
5.9.4 Discrete Parameter Set
5.10 Deflection-Based Detection of Non-Gaussian Signal in Gaussian Noise
Exercises
References
6 Convex Statistical Distances
6.1 Kullback-Leibler Divergence
6.2 Entropy and Mutual Information
6.3 Chernoff Divergence, Chernoff Information, and Bhattacharyya Distance
6.4 Ali-Silvey Distances
6.5 Some Useful Inequalities
Exercises
References
7 Performance Bounds for Hypothesis Testing
7.1 Simple Lower Bounds on Conditional Error Probabilities
7.2 Simple Lower Bounds on Error Probability
7.3 Chernoff Bound
7.3.1 Moment-Generating and Cumulant-Generating Functions
7.3.2 Chernoff Bound
7.4 Application of Chernoff Bound to Binary Hypothesis Testing
7.4.1 Exponential Upper Bounds on PF and PM
7.4.2 Bayesian Error Probability
7.4.3 Lower Bound on ROC
7.4.4 Example
7.5 Bounds on Classification Error Probability
7.5.1 Upper and Lower Bounds in Terms of Pairwise Error Probabilities
7.5.2 Bonferroni's Inequalities
7.5.3 Generalized Fano's Inequality
7.6 Appendix: Proof of Theorem 7.4
Exercises
References
8 Large Deviations and Error Exponents for Hypothesis Testing
8.1 Introduction
8.2 Chernoff Bound for Sum of i.i.d. Random Variables
8.2.1 Cramér's Theorem
8.2.2 Why Is the Central Limit Theorem Inapplicable Here?
8.3 Hypothesis Testing with i.i.d. Observations
8.3.1 Bayesian Hypothesis Testing with i.i.d. Observations
8.3.2 Neyman-Pearson Hypothesis Testing with i.i.d. Observations
8.3.3 Hoeffding Problem
8.3.4 Example
8.4 Refined Large Deviations
8.4.1 The Method of Exponential Tilting
8.4.2 Sum of i.i.d. Random Variables
8.4.3 Lower Bounds on Large-Deviations Probabilities
8.4.4 Refined Asymptotics for Binary Hypothesis Testing
8.4.5 Non-i.i.d. Components
8.5 Appendix: Proof of Lemma 8.1
Exercises
References
9 Sequential and Quickest Change Detection
9.1 Sequential Detection
9.1.1 Problem Formulation
9.1.2 Stopping Times and Decision Rules
9.1.3 Two Formulations of the Sequential Hypothesis Testing Problem
9.1.4 Sequential Probability Ratio Test
9.1.5 SPRT Performance Evaluation
9.2 Quickest Change Detection
9.2.1 Minimax Quickest Change Detection
9.2.2 Bayesian Quickest Change Detection
Exercises
References
10 Detection of Random Processes
10.1 Discrete-Time Random Processes
10.1.1 Periodic Stationary Gaussian Processes
10.1.2 Stationary Gaussian Processes
10.1.3 Markov Processes
10.2 Continuous-Time Processes
10.2.1 Covariance Kernel
10.2.2 Karhunen-Loève Transform
10.2.3 Detection of Known Signals in Gaussian Noise
10.2.4 Detection of Gaussian Signals in Gaussian Noise
10.3 Poisson Processes
10.4 General Processes
10.4.1 Likelihood Ratio
10.4.2 Ali-Silvey Distances
10.5 Appendix: Proof of Proposition 10.1
Exercises
References
Part II Estimation
11 Bayesian Parameter Estimation
11.1 Introduction
11.2 Bayesian Parameter Estimation
11.3 MMSE Estimation
11.4 MMAE Estimation
11.5 MAP Estimation
11.6 Parameter Estimation for Linear Gaussian Models
11.7 Estimation of Vector Parameters
11.7.1 Vector MMSE Estimation
11.7.2 Vector MMAE Estimation
11.7.3 Vector MAP Estimation
11.7.4 Linear MMSE Estimation
11.7.5 Vector Parameter Estimation in Linear Gaussian Models
11.7.6 Other Cost Functions for Bayesian Estimation
11.8 Exponential Families
11.8.1 Basic Properties
11.8.2 Conjugate Priors
Exercises
References
12 Minimum Variance Unbiased Estimation
12.1 Nonrandom Parameter Estimation
12.2 Sufficient Statistics
12.3 Factorization Theorem
12.4 Rao-Blackwell Theorem
12.5 Complete Families of Distributions
12.5.1 Link Between Completeness and Sufficiency
12.5.2 Link Between Completeness and MVUE
12.5.3 Link Between Completeness and Exponential Families
12.6 Discussion
12.7 Examples: Gaussian Families
Exercises
References
13 Information Inequality and Cramér-Rao Lower Bound
13.1 Fisher Information and the Information Inequality
13.2 Cramér-Rao Lower Bound
13.3 Properties of Fisher Information
13.4 Conditions for Equality in Information Inequality
13.5 Vector Parameters
13.6 Information Inequality for Random Parameters
13.7 Biased Estimators
13.8 Appendix: Derivation of (13.16)
Exercises
References
14 Maximum Likelihood Estimation
14.1 Introduction
14.2 Computation of ML Estimates
14.3 Invariance to Reparameterization
14.4 MLE in Exponential Families
14.4.1 Mean-Value Parameterization
14.4.2 Relation to MVUEs
14.4.3 Asymptotics
14.5 Estimation of Parameters on Boundary
14.6 Asymptotic Properties for General Families
14.6.1 Consistency
14.6.2 Asymptotic Efficiency and Normality
14.7 Nonregular ML Estimation Problems
14.8 Nonexistence of MLE
14.9 Non-i.i.d. Observations
14.10 M-Estimators and Least-Squares Estimators
14.11 Expectation-Maximization (EM) Algorithm
14.11.1 General Structure of the EM Algorithm
14.11.2 Convergence of EM Algorithm
14.11.3 Examples
14.12 Recursive Estimation
14.12.1 Recursive MLE
14.12.2 Recursive Approximations to Least-Squares Solution
14.13 Appendix: Proof of Theorem 14.2
14.14 Appendix: Proof of Theorem 14.4
Exercises
References
15 Signal Estimation
15.1 Linear Innovations
15.2 Discrete-Time Kalman Filter
15.2.1 Time-Invariant Case
15.3 Extended Kalman Filter
15.4 Nonlinear Filtering for General Hidden Markov Models
15.5 Estimation in Finite Alphabet Hidden Markov Models
15.5.1 Viterbi Algorithm
15.5.2 Forward-Backward Algorithm
15.5.3 Baum-Welch Algorithm for HMM Learning
Exercises
References
Appendix A Matrix Analysis
Appendix B Random Vectors and Covariance Matrices
Appendix C Probability Distributions
Appendix D Convergence of Random Sequences
Index
Pierre Moulin is a professor in the ECE Department at the University of Illinois, Urbana-Champaign. His research interests include statistical inference, machine learning, detection and estimation theory, information theory, statistical signal, image, and video processing, and information security. Moulin is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and served as a Distinguished Lecturer for the IEEE Signal Processing Society. He has received two best paper awards from the IEEE Signal Processing Society and the US National Science Foundation CAREER Award. He was the founding Editor-in-Chief of the IEEE Transactions on Information Forensics and Security.

Venugopal V. Veeravalli is the Henry Magnuski Professor in the ECE Department at the University of Illinois, Urbana-Champaign. His research interests include statistical inference and machine learning, detection and estimation theory, and information theory, with applications to data science, wireless communications, and sensor networks. Veeravalli is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and served as a Distinguished Lecturer for the IEEE Signal Processing Society. Among the awards he has received are the IEEE Browder J. Thompson Best Paper Award, the National Science Foundation CAREER Award, the Presidential Early Career Award for Scientists and Engineers (PECASE), and the Wald Prize in Sequential Analysis.