Preface  xiii
Authors  xvii
|
1 Prelude: Preliminary Tools and Foundations  1
1.1  1
1.2  1
1.3  2
1.4 Probability Distributions  3
1.5 Commonly Used Parametric Distribution Functions  5
1.6 Expectation and Integration  6
1.7 Basic Modes of Convergence of a Sequence of Random Variables  8
1.7.1 Convergence in Probability  9
1.7.2 Almost Sure Convergence  9
1.7.3 Convergence in Distribution  9
1.7.4 Convergence in rth Mean  10
1.7.5 O(·) and o(·) Revised under Stochastic Regimes  10
1.7.6 Basic Associations between the Modes of Convergence  10
1.8 Indicator Functions and Their Bounds as Applied to Simple Proofs of Propositions  10
1.9  12
1.10  14
1.11 Statistical Software: R and SAS  16
1.11.1  16
1.11.2  22
|
2 Characteristic Function-Based Inference  31
2.1  31
2.2 Elementary Properties of Characteristic Functions  32
2.3  34
2.3.1 Proof of the Inversion Theorem  35
2.4  40
2.4.1 Expected Lengths of Random Stopping Times and Renewal Functions in Light of Tauberian Theorems  40
2.4.2 Risk-Efficient Estimation  46
2.4.3 Khinchin's (or Hinchin's) Form of the Law of Large Numbers  48
2.4.4 Analytical Forms of Distribution Functions  49
2.4.5 Central Limit Theorem  50
2.4.5.1 Principles of Monte Carlo Simulations  51
2.4.5.2 That Is the Question: Do Convergence Rates Matter?  54
2.4.6 Problems of Reconstructing the General Distribution Based on the Distribution of Some Statistics  54
2.4.7 Extensions and Estimations of Families of Distribution Functions  56
|
|
3  59
3.1  59
3.2 Why Likelihood? An Intuitive Point of View  61
3.3 Maximum Likelihood Estimation  63
3.4  69
3.5 The Intrinsic Relationship between the Likelihood Ratio Test Statistic and the Likelihood Ratio of Test Statistics: One More Reason to Use Likelihood  73
3.6 Maximum Likelihood Ratio  76
3.7 An Example of Correct Model-Based Likelihood Formation  80
|
4 Martingale Type Statistics and Their Applications  83
4.1  83
4.2  84
4.3 The Optional Stopping Theorem and Its Corollaries: Wald's Lemma and Doob's Inequality  90
4.4  93
4.4.1 The Martingale Principle for Testing Statistical Hypotheses  93
4.4.1.1 Maximum Likelihood Ratio in Light of the Martingale Concept  96
4.4.1.2 Likelihood Ratios Based on Representative Values  97
4.4.2 Guaranteed Type I Error Rate Control of the Likelihood Ratio Tests  100
4.4.3 Retrospective Change Point Detection Policies  100
4.4.3.1 The Cumulative Sum (CUSUM) Technique  101
4.4.3.2 The Shiryayev-Roberts (SR) Statistic-Based Techniques  102
4.4.4 Adaptive (Nonanticipating) Maximum Likelihood Estimation  105
4.4.5 Sequential Change Point Detection Policies  109
4.5 Transformation of Change Point Detection Methods into a Shiryayev-Roberts Form  113
4.5.1  114
4.5.2  115
4.5.3 CUSUM versus Shiryayev-Roberts  120
4.5.4 A Nonparametric Example  123
4.5.5 Monte Carlo Simulation Study  124
|
|
5  129
5.1  129
5.2 Integrated Most Powerful Tests  132
5.3  136
5.3.1 The Computation of Bayes Factors  137
5.3.1.1 Asymptotic Approximations  142
5.3.1.2 Simple Monte Carlo, Importance Sampling, and Gaussian Quadrature  147
5.3.1.3 Generating Samples from the Posterior  147
5.3.1.4 Combining Simulation and Asymptotic Approximations  148
5.3.2 The Choice of Prior Probability Distributions  149
5.3.3 Decision-Making Rules Based on the Bayes Factor  153
5.3.4 A Data Example: Application of the Bayes Factor  158
5.4  161
|
6 A Brief Review of Sequential Methods  165
6.1  165
6.2  167
6.3 Sequential Probability Ratio Test  169
6.4 The Central Limit Theorem for a Stopping Time  177
6.5 Group Sequential Tests  179
6.6 Adaptive Sequential Designs  181
6.7  182
6.8 Post-Sequential Analysis  182
|
7 A Brief Review of Receiver Operating Characteristic Curve Analyses  185
7.1  185
7.2  186
7.3 Area under the ROC Curve  189
7.3.1 Parametric Approach  190
7.3.2 Nonparametric Approach  192
7.4 ROC Curve Analysis and Logistic Regression: Comparison and Overestimation  193
7.4.1 Retrospective and Prospective ROC  195
7.4.2 Expected Bias of the ROC Curve and Overestimation of the AUC  196
7.4.3  198
7.5 Best Combinations Based on Values of Multiple Biomarkers  201
7.5.1  202
7.5.2 Nonparametric Method  202
7.6  202
|
8 The Ville and Wald Inequality: Extensions and Applications  205
8.1  205
8.2 The Ville and Wald Inequality  208
8.3 The Ville and Wald Inequality Modified in Terms of Sums of iid Observations  209
8.4 Applications to Statistical Procedures  213
8.4.1 Confidence Sequences and Tests with Uniformly Small Error Probability for the Mean of a Normal Distribution with Known Variance  213
8.4.2 Confidence Sequences for the Median  215
8.4.3 Test with Power One  217
|
9 Brief Comments on Confidence Intervals and p-Values  219
9.1  219
9.2  223
9.2.1 The EPV in the Context of an ROC Curve Analysis  226
9.2.2 Student's t-Test versus Welch's t-Test  227
9.2.3 The Connection between EPV and Power  229
9.2.4 t-Tests versus the Wilcoxon Rank-Sum Test  231
9.3  236
|
|
10  239
10.1  239
10.2 Empirical Likelihood Methods  239
10.3 Techniques for Analyzing Empirical Likelihoods  242
10.3.1 Practical Theoretical Tools for Analyzing ELs  249
10.4 Combining Likelihoods to Construct Composite Tests and to Incorporate the Maximum Data-Driven Information  253
10.5 Bayesians and Empirical Likelihood  254
10.6 Three Key Arguments That Support the Empirical Likelihood Methodology as a Practical Statistical Analysis Tool  255
10.7  256
|
11 Jackknife and Bootstrap Methods  259
11.1  259
11.2 Jackknife Bias Estimation  261
11.3 Jackknife Variance Estimation  263
11.4 Confidence Interval Definition  263
11.5 Approximate Confidence Intervals  264
11.6 Variance Stabilization  265
11.7  266
11.8 Nonparametric Simulation  267
11.9 Resampling Algorithms with SAS and R  270
11.10 Bootstrap Confidence Intervals  279
11.11 Use of the Edgeworth Expansion to Illustrate the Accuracy of Bootstrap Intervals  284
11.12 Bootstrap-t Percentile Intervals  288
11.13 Further Refinement of Bootstrap Confidence Intervals  291
11.14  303
|
12 Examples of Homework Questions  305
12.1  305
12.2  306
12.3  306
12.4  307
12.5  308
12.6  311
|
|
13  315
13.1  315
13.2  315
13.3  316
13.4  317
13.5  318
13.6  319
13.7  320
13.8  322
13.9  324
13.10  325
13.11  326
13.12  326
13.13  328
13.14  329
13.15  331
13.16  332
13.17  333
13.18  335
13.19  336
13.20  339
13.21  340
13.22  340
13.23  342
13.24  345
13.25  348
13.26  350
13.27  354
13.28  357
13.29  359
|
14 Examples of Course Projects  363
14.1 Change Point Problems in the Model of Logistic Regression Subject to Measurement Errors  363
14.2 Bayesian Inference for Best Combinations Based on Values of Multiple Biomarkers  363
14.3 Empirical Bayesian Inference for Best Combinations Based on Values of Multiple Biomarkers  364
14.4 Best Combinations Based on Log-Normally Distributed Values of Multiple Biomarkers  364
14.5 Empirical Likelihood Ratio Tests for Parameters of Linear Regressions  365
14.6 Penalized Empirical Likelihood Estimation  366
14.7 An Improvement of the AUC-Based Inference  366
14.8 Composite Estimation of the Mean Based on Log-Normally Distributed Observations  366

References  369
Author Index  381
Subject Index  387