
Statistics in the Health Sciences: Theory, Applications, and Computing [Hardback]

(Roswell Park Cancer Institute, Buffalo, New York, USA),
  • Format: Hardback, 416 pages, height x width: 234x156 mm, weight: 740 g, 10 Tables, black and white; 29 Line drawings, black and white; 1 Halftones, black and white; 30 Illustrations, black and white
  • Publication date: 07-Feb-2018
  • Publisher: CRC Press
  • ISBN-10: 1138196894
  • ISBN-13: 9781138196896

"This very informative book introduces classical and novel statistical methods that can be used by theoretical and applied biostatisticians to develop efficient solutions for real-world problems encountered in clinical trials and epidemiological studies. The authors provide a detailed discussion of methodological and applied issues in parametric, semi-parametric and nonparametric approaches, including computationally extensive data-driven techniques, such as empirical likelihood, sequential procedures, and bootstrap methods. Many of these techniques are implemented using popular software such as R and SAS."— Vlad Dragalin, Professor, Johnson and Johnson, Spring House, PA

"It is always a pleasure to come across a new book that covers nearly all facets of a branch of science one thought was so broad, so diverse, and so dynamic that no single book could possibly hope to capture all of the fundamentals as well as directions of the field. The topics within the book’s purview—fundamentals of measure-theoretic probability; parametric and non-parametric statistical inference; central limit theorems; basics of martingale theory; Monte Carlo methods; sequential analysis; sequential change-point detection—are all covered with inspiring clarity and precision. The authors are also very thorough and avail themselves of the most recent scholarship. They provide a detailed account of the state of the art, and bring together results that were previously scattered across disparate disciplines. This makes the book more than just a textbook: it is a panoramic companion to the field of Biostatistics. The book is self-contained, and the concise but careful exposition of material makes it accessible to a wide audience. This is appealing to graduate students interested in getting into the field, and also to professors looking to design a course on the subject." — Aleksey S. Polunchenko, Department of Mathematical Sciences, State University of New York at Binghamton

The book is appropriate for use both as a text and as a reference. It delivers a well-structured, "ready-to-go" product for developing advanced courses, in which readers will find classical and new theoretical methods, open problems, and new procedures.

The book presents biostatistical results that are novel relative to the current set of books on the market, and some that are new even with respect to the modern scientific literature. Several of these results can be found only in this book.

Reviews

"This very informative book introduces classical and novel statistical methods that can be used by theoretical and applied biostatisticians to develop efficient solutions for real-world problems encountered in clinical trials and epidemiological studies. The authors provide a detailed discussion of methodological and applied issues in parametric, semi-parametric and nonparametric approaches, including computationally extensive data-driven techniques, such as empirical likelihood, sequential procedures, and bootstrap methods. Many of these techniques are implemented using popular software such as R and SAS. " Vlad Dragalin, Vice President and Scientific Fellow, Quantitative Sciences, Johnson and Johnson

"It is always a pleasure to come across a new book that covers nearly all facets of a branch of science one thought was so broad, so diverse, and so dynamic that no single book could possibly hope to capture all of the fundamentals as well as directions of the field. Biostatistics is just such a branch of science and Statistics in the Health Sciences: Theory, Applications, and Computing is just such a book. Written by "lions" of the field, the book is an excellent piece of work that establishes an important bridge between Biostatistics and its numerous interfaces. The topics within the books purviewfundamentals of measure-theoretic probability; parametric and non-parametric statistical inference; central limit theorems; basics of martingale theory; Monte Carlo methods; sequential analysis; sequential change-point detectionare all covered with inspiring clarity and precision. The authors are also very thorough and avail themselves of the most recent scholarship. They provide a detailed account of the state of the art, and bring together results that were previously scattered across disparate disciplines. This makes the book more than just a textbook: it is a panoramic companion to the field of Biostatistics. The book is self-contained, and the concise but careful exposition of material makes it accessible to a wide audience. The book also offers numerous problems and computer projects, including data processing exercises, which range in complexity and degree of sophistication from introductory to fairly advanced. This is appealing to graduate students interested in getting into the field, and also to professors looking to design a course on the subject. To sum up, the book is a valuable addition to the literature, and certainly deserves a spot in the library. Perhaps the best way to express ones gratitude to the authors would be to read the book." Aleksey S. Polunchenko, Ph.D., Department of Mathematical Sciences, State University of New York at Binghamton

Preface xiii
Authors xvii
1 Prelude: Preliminary Tools and Foundations
1(30)
1.1 Introduction
1(1)
1.2 Limits
1(1)
1.3 Random Variables
2(1)
1.4 Probability Distributions
3(2)
1.5 Commonly Used Parametric Distribution Functions
5(1)
1.6 Expectation and Integration
6(2)
1.7 Basic Modes of Convergence of a Sequence of Random Variables
8(2)
1.7.1 Convergence in Probability
9(1)
1.7.2 Almost Sure Convergence
9(1)
1.7.3 Convergence in Distribution
9(1)
1.7.4 Convergence in rth Mean
10(1)
1.7.5 O(·) and o(·) Revised under Stochastic Regimes
10(1)
1.7.6 Basic Associations between the Modes of Convergence
10(1)
1.8 Indicator Functions and Their Bounds as Applied to Simple Proofs of Propositions
10(2)
1.9 Taylor's Theorem
12(2)
1.10 Complex Variables
14(2)
1.11 Statistical Software: R and SAS
16(15)
1.11.1 R Software
16(6)
1.11.2 SAS Software
22(9)
2 Characteristic Function-Based Inference
31(28)
2.1 Introduction
31(1)
2.2 Elementary Properties of Characteristic Functions
32(2)
2.3 One-to-One Mapping
34(6)
2.3.1 Proof of the Inversion Theorem
35(5)
2.4 Applications
40(19)
2.4.1 Expected Lengths of Random Stopping Times and Renewal Functions in Light of Tauberian Theorems
40(6)
2.4.2 Risk-Efficient Estimation
46(2)
2.4.3 Khinchin's (or Hinchin's) Form of the Law of Large Numbers
48(1)
2.4.4 Analytical Forms of Distribution Functions
49(1)
2.4.5 Central Limit Theorem
50(1)
2.4.5.1 Principles of Monte Carlo Simulations
51(3)
2.4.5.2 That Is the Question: Do Convergence Rates Matter?
54(1)
2.4.6 Problems of Reconstructing the General Distribution Based on the Distribution of Some Statistics
54(2)
2.4.7 Extensions and Estimations of Families of Distribution Functions
56(3)
3 Likelihood Tenet
59(24)
3.1 Introduction
59(2)
3.2 Why Likelihood? An Intuitive Point of View
61(2)
3.3 Maximum Likelihood Estimation
63(6)
3.4 The Likelihood Ratio
69(4)
3.5 The Intrinsic Relationship between the Likelihood Ratio Test Statistic and the Likelihood Ratio of Test Statistics: One More Reason to Use Likelihood
73(3)
3.6 Maximum Likelihood Ratio
76(4)
3.7 An Example of Correct Model-Based Likelihood Formation
80(3)
4 Martingale Type Statistics and Their Applications
83(46)
4.1 Introduction
83(1)
4.2 Terminology
84(6)
4.3 The Optional Stopping Theorem and Its Corollaries: Wald's Lemma and Doob's Inequality
90(3)
4.4 Applications
93(20)
4.4.1 The Martingale Principle for Testing Statistical Hypotheses
93(3)
4.4.1.1 Maximum Likelihood Ratio in Light of the Martingale Concept
96(1)
4.4.1.2 Likelihood Ratios Based on Representative Values
97(3)
4.4.2 Guaranteed Type I Error Rate Control of the Likelihood Ratio Tests
100(1)
4.4.3 Retrospective Change Point Detection Policies
100(1)
4.4.3.1 The Cumulative Sum (CUSUM) Technique
101(1)
4.4.3.2 The Shiryayev-Roberts (SR) Statistic-Based Techniques
102(3)
4.4.4 Adaptive (Nonanticipating) Maximum Likelihood Estimation
105(4)
4.4.5 Sequential Change Point Detection Policies
109(4)
4.5 Transformation of Change Point Detection Methods into a Shiryayev-Roberts Form
113(16)
4.5.1 Motivation
114(1)
4.5.2 The Method
115(5)
4.5.3 CUSUM versus Shiryayev-Roberts
120(3)
4.5.4 A Nonparametric Example
123(1)
4.5.5 Monte Carlo Simulation Study
124(5)
5 Bayes Factor
129(36)
5.1 Introduction
129(3)
5.2 Integrated Most Powerful Tests
132(4)
5.3 Bayes Factor
136(25)
5.3.1 The Computation of Bayes Factors
137(5)
5.3.1.1 Asymptotic Approximations
142(5)
5.3.1.2 Simple Monte Carlo, Importance Sampling, and Gaussian Quadrature
147(1)
5.3.1.3 Generating Samples from the Posterior
147(1)
5.3.1.4 Combining Simulation and Asymptotic Approximations
148(1)
5.3.2 The Choice of Prior Probability Distributions
149(4)
5.3.3 Decision-Making Rules Based on the Bayes Factor
153(5)
5.3.4 A Data Example: Application of the Bayes Factor
158(3)
5.4 Remarks
161(4)
6 A Brief Review of Sequential Methods
165(20)
6.1 Introduction
165(2)
6.2 Two-Stage Designs
167(2)
6.3 Sequential Probability Ratio Test
169(8)
6.4 The Central Limit Theorem for a Stopping Time
177(2)
6.5 Group Sequential Tests
179(2)
6.6 Adaptive Sequential Designs
181(1)
6.7 Futility Analysis
182(1)
6.8 Post-Sequential Analysis
182(3)
7 A Brief Review of Receiver Operating Characteristic Curve Analyses
185(20)
7.1 Introduction
185(1)
7.2 ROC Curve Inference
186(3)
7.3 Area under the ROC Curve
189(4)
7.3.1 Parametric Approach
190(2)
7.3.2 Nonparametric Approach
192(1)
7.4 ROC Curve Analysis and Logistic Regression: Comparison and Overestimation
193(8)
7.4.1 Retrospective and Prospective ROC
195(1)
7.4.2 Expected Bias of the ROC Curve and Overestimation of the AUC
196(2)
7.4.3 Example
198(3)
7.5 Best Combinations Based on Values of Multiple Biomarkers
201(1)
7.5.1 Parametric Method
202(1)
7.5.2 Nonparametric Method
202(1)
7.6 Remarks
202(3)
8 The Ville and Wald Inequality: Extensions and Applications
205(14)
8.1 Introduction
205(3)
8.2 The Ville and Wald inequality
208(1)
8.3 The Ville and Wald Inequality Modified in Terms of Sums of iid Observations
209(4)
8.4 Applications to Statistical Procedures
213(6)
8.4.1 Confidence Sequences and Tests with Uniformly Small Error Probability for the Mean of a Normal Distribution with Known Variance
213(2)
8.4.2 Confidence Sequences for the Median
215(2)
8.4.3 Test with Power One
217(2)
9 Brief Comments on Confidence Intervals and p-Values
219(20)
9.1 Confidence Intervals
219(4)
9.2 p-Values
223(16)
9.2.1 The EPV in the Context of an ROC Curve Analysis
226(1)
9.2.2 Student's t-Test versus Welch's t-Test
227(2)
9.2.3 The Connection between EPV and Power
229(2)
9.2.4 t-Tests versus the Wilcoxon Rank-Sum Test
231(5)
9.2.5 Discussion
236(3)
10 Empirical Likelihood
239(20)
10.1 Introduction
239(1)
10.2 Empirical Likelihood Methods
239(3)
10.3 Techniques for Analyzing Empirical Likelihoods
242(11)
10.3.1 Practical Theoretical Tools for Analyzing ELs
249(4)
10.4 Combining Likelihoods to Construct Composite Tests and to Incorporate the Maximum Data-Driven Information
253(1)
10.5 Bayesians and Empirical Likelihood
254(1)
10.6 Three Key Arguments That Support the Empirical Likelihood Methodology as a Practical Statistical Analysis Tool
255(1)
Appendix
256(3)
11 Jackknife and Bootstrap Methods
259(46)
11.1 Introduction
259(2)
11.2 Jackknife Bias Estimation
261(2)
11.3 Jackknife Variance Estimation
263(1)
11.4 Confidence Interval Definition
263(1)
11.5 Approximate Confidence Intervals
264(1)
11.6 Variance Stabilization
265(1)
11.7 Bootstrap Methods
266(1)
11.8 Nonparametric Simulation
267(3)
11.9 Resampling Algorithms with SAS and R
270(9)
11.10 Bootstrap Confidence Intervals
279(5)
11.11 Use of the Edgeworth Expansion to Illustrate the Accuracy of Bootstrap Intervals
284(4)
11.12 Bootstrap-t Percentile Intervals
288(15)
11.13 Further Refinement of Bootstrap Confidence Intervals
291(12)
11.14 Bootstrap Tilting
303(2)
12 Examples of Homework Questions
305(10)
12.1 Homework 1
305(1)
12.2 Homework 2
306(1)
12.3 Homework 3
306(1)
12.4 Homework 4
307(1)
12.5 Homework 5
308(3)
12.6 Homeworks 6 and 7
311(4)
13 Examples of Exams
315(48)
13.1 Midterm Exams
315(1)
Example 1
315(1)
Example 2
316(1)
Example 3
317(1)
Example 4
318(1)
Example 5
319(1)
Example 6
320(2)
Example 7
322(2)
Example 8
324(1)
Example 9
325(1)
13.2 Final Exams
326(1)
Example 1
326(2)
Example 2
328(1)
Example 3
329(2)
Example 4
331(1)
Example 5
332(1)
Example 6
333(2)
Example 7
335(1)
Example 8
336(3)
Example 9
339(1)
13.3 Qualifying Exams
340(1)
Example 1
340(2)
Example 2
342(3)
Example 3
345(3)
Example 4
348(2)
Example 5
350(4)
Example 6
354(3)
Example 7
357(2)
Example 8
359(4)
14 Examples of Course Projects
363(6)
14.1 Change Point Problems in the Model of Logistic Regression Subject to Measurement Errors
363(1)
14.2 Bayesian Inference for Best Combinations Based on Values of Multiple Biomarkers
363(1)
14.3 Empirical Bayesian Inference for Best Combinations Based on Values of Multiple Biomarkers
364(1)
14.4 Best Combinations Based on Log Normally Distributed Values of Multiple Biomarkers
364(1)
14.5 Empirical Likelihood Ratio Tests for Parameters of Linear Regressions
365(1)
14.6 Penalized Empirical Likelihood Estimation
366(1)
14.7 An Improvement of the AUC-Based Inference
366(1)
14.8 Composite Estimation of the Mean Based on Log-Normally Distributed Observations
366(3)
References 369(12)
Author Index 381(6)
Subject Index 387
Albert Vexler is a tenured professor in the Department of Biostatistics at the State University of New York (SUNY) at Buffalo. Dr. Vexler is an associate editor of Biometrics and BMC Medical Research Methodology. He is the author or coauthor of various publications that contribute to the theoretical and applied aspects of statistics in medical research. Many of his papers and statistical software developments have appeared in statistical and biostatistical journals that have top-rated impact factors and are historically recognized as leading scientific journals. Dr. Vexler was awarded a National Institutes of Health grant to develop novel nonparametric data analysis and statistical methodology. His research interests include receiver operating characteristic curve analysis, measurement error, optimal designs, regression models, censored data, change point problems, sequential analysis, statistical epidemiology, Bayesian decision-making mechanisms, asymptotic methods of statistics, forecasting, sampling, optimal testing, nonparametric tests, empirical likelihoods, renewal theory, Tauberian theorems, time series, categorical analysis, multivariate analysis, multivariate testing of complex hypotheses, factor and principal component analysis, statistical biomarker evaluations, and best combinations of biomarkers.



Alan D. Hutson is the chair of biostatistics and bioinformatics at Roswell Park Cancer Institute. He is also the biostatistical, epidemiological, and research design director for SUNY's National Institutes of Health-funded Clinical and Translational Research Award. Dr. Hutson is a fellow of the American Statistical Association, an associate editor of Communications in Statistics and the Sri Lankan Journal of Applied Statistics, and a New York State NYSTAR Distinguished Professor. He has written more than 200 peer-reviewed publications. Dr. Hutson's methodological work focuses on nonparametric methods for biostatistical applications as they pertain to statistical functionals. He also has several years of experience in the design and analysis of clinical trials.