
E-book: Linear Mixed Models: A Practical Guide Using Statistical Software

4.47/5 (18 ratings by Goodreads)
Brady T. West (University of Michigan, Ann Arbor, USA), Kathy Welch (University of Michigan, Ann Arbor, USA), Andrzej Galecki (University of Michigan, Ann Arbor, USA)
  • Format: 490 pages
  • Pub. Date: 24-Jun-2022
  • Publisher: Chapman & Hall/CRC
  • Language: English
  • ISBN-13: 9781000598278
  • Format: EPUB + DRM
  • Price: 119,59 €*
  • * The price is final, i.e., no additional discount will apply.
  • This ebook is for personal use only. E-Books are non-refundable.

DRM restrictions

  • Copying (copy/paste): not allowed
  • Printing: not allowed
  • Usage: Digital Rights Management (DRM)

    The publisher has supplied this book in encrypted form, which means that you need to install free software in order to unlock and read it. To read this e-book, you have to create an Adobe ID. The ebook can be read and downloaded on up to 6 devices (single user with the same Adobe ID).

    Required software
    To read this ebook on a mobile device (phone or tablet) you'll need to install this free app: PocketBook Reader (iOS / Android)

    To download and read this eBook on a PC or Mac, you need Adobe Digital Editions (a free app developed specifically for eBooks; it is not the same as Adobe Reader, which you probably already have on your computer).

    You can't read this ebook on an Amazon Kindle.

Highly recommended by JASA, Technometrics, and other leading statistical journals, the first two editions of this bestseller showed how to easily perform complex linear mixed model (LMM) analyses via a variety of software programs. Linear Mixed Models: A Practical Guide Using Statistical Software, Third Edition continues to lead readers step-by-step through the process of fitting LMMs.

The third edition provides a comprehensive update of the available tools for fitting linear mixed-effects models in the newest versions of SAS, SPSS, R, Stata, and HLM. All examples have been updated, with a focus on new tools for visualization of results and interpretation. New conceptual and theoretical developments in mixed-effects modeling have been included, and there is a new chapter on power analysis for mixed-effects models.

Features:
  • Dedicates an entire chapter to the key theories underlying LMMs for clustered, longitudinal, and repeated-measures data
  • Provides descriptions, explanations, and examples of software code necessary to fit LMMs in SAS, SPSS, R, Stata, and HLM (an illustrative R sketch follows this list)
  • Contains detailed tables of estimates and results, allowing for easy comparisons across software procedures
  • Presents step-by-step analyses of real-world data sets that arise from a variety of research settings and study designs, including hypothesis testing, interpretation of results, and model diagnostics
  • Integrates software code in each chapter to compare the relative advantages and disadvantages of each package
  • Supplemented by a website with software code, datasets, additional documents, and updates
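
To give a flavor of the models the book fits, here is a minimal sketch of a two-level random-intercept LMM in R using the lmer() function from the lme4 package. The sketch is not taken from the book: the data are simulated, and the names dat, y, x, and cluster are hypothetical stand-ins for a clustered data set such as the Rat Pup data analyzed in Chapter 3.

    ## Sketch only: simulate a small clustered data set (hypothetical names)
    library(lme4)

    set.seed(123)
    n_clusters <- 30                                   # e.g. litters or classrooms
    n_per      <- 10                                   # observations per cluster
    cluster    <- factor(rep(1:n_clusters, each = n_per))
    x          <- rnorm(n_clusters * n_per)            # a level-1 covariate
    u          <- rnorm(n_clusters, sd = 0.8)          # random cluster intercepts
    y          <- 2 + 0.5 * x + u[cluster] + rnorm(n_clusters * n_per, sd = 1)
    dat        <- data.frame(y, x, cluster)

    ## Random intercept for each cluster; REML estimation is the lmer() default
    fit <- lmer(y ~ x + (1 | cluster), data = dat)
    summary(fit)     # fixed-effect estimates and variance components
    VarCorr(fit)     # covariance parameter estimates

The same model could also be fit with the nlme package, e.g. lme(y ~ x, random = ~ 1 | cluster, data = dat); the book compares both R functions alongside the corresponding SAS, SPSS, Stata, and HLM procedures.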

Ideal for anyone who uses software for statistical modeling, this book eliminates the need to read multiple software-specific texts by covering the most popular software programs for fitting LMMs in one handy guide. The authors illustrate the models and methods through real-world examples that enable comparisons of model-fitting options and results across the software procedures.

Reviews

". . . this book is perfect for readers who are looking for a quick reference to all kinds of situations in which LMMs are to be used. In the opinion of the reviewer, either this book or the Gaecki and Burzykowski (2013) book are a must for the practical statistician working with R. And the reviewer finds it helpful to have both on the shelf. For readers with a great need to incorporate novel data visualization approaches in their analyses and the need to improve result interpretation, the third edition is clearly superior to the second edition." ~ Andreas Ziegler, Biometrics Journal

"This book will definitively help researchers and statisticians reach a deeper understanding of linear mixed models and provides them with the resources to perform a proper analysis whatever their statistical software of choice." ~ Célia Touraine, ISCB Book Reviews

Preface to the Third Edition xv
Preface to the Second Edition xvii
Preface xix
The Authors xxi
Acknowledgments xxiii
List of Tables xxv
List of Figures xxvii
1 Introduction
1(8)
1.1 What are Linear Mixed Models (LMMs)?
1(4)
1.1.1 Models with Random Effects for Clustered Data
2(1)
1.1.2 Models for Longitudinal or Repeated-Measures Data
2(1)
1.1.3 The Purpose of This Book
3(1)
1.1.4 Outline of Book Contents
4(1)
1.2 A Brief History of LMMs
5(4)
1.2.1 Key Theoretical Developments
5(2)
1.2.2 Key Software Developments
7(2)
2 Linear Mixed Models: An Overview
9(50)
2.1 Introduction
9(6)
2.1.1 Types and Structures of Data Sets
9(1)
2.1.1.1 Clustered Data vs. Repeated-Measures and Longitudinal Data
9(1)
2.1.1.2 Levels of Data
10(2)
2.1.2 Types of Factors and Their Related Effects in an LMM
12(1)
2.1.2.1 Fixed Factors
12(1)
2.1.2.2 Random Factors
12(1)
2.1.2.3 Fixed Factors vs. Random Factors
13(1)
2.1.2.4 Fixed Effects vs. Random Effects
13(1)
2.1.2.5 Nested vs. Crossed Factors and Their Corresponding Effects
14(1)
2.2 Specification of LMMs
15(7)
2.2.1 General Specification for an Individual Observation
16(1)
2.2.2 General Matrix Specification
17(2)
2.2.2.1 Covariance Structures for the D Matrix
19(1)
2.2.2.2 Covariance Structures for the Ri Matrix
20(1)
2.2.2.3 Group-Specific Covariance Parameter Values for the D and Ri Matrices
21(1)
2.2.3 Alternative Matrix Specification for All Subjects
22(1)
2.2.4 Hierarchical Linear Model (HLM) Specification of the LMM
22(1)
2.3 The Marginal Linear Model
22(3)
2.3.1 Specification of the Marginal Model
23(1)
2.3.2 The Marginal Model Implied by an LMM
23(2)
2.4 Estimation in LMMs
25(4)
2.4.1 Maximum Likelihood (ML) Estimation
25(1)
2.4.1.1 Special Case: Assume θ Is Known
26(1)
2.4.1.2 General Case: Assume θ Is Unknown
27(1)
2.4.2 REML Estimation
28(1)
2.4.3 REML vs. ML Estimation
28(1)
2.5 Computational Issues
29(5)
2.5.1 Algorithms for Likelihood Function Optimization
29(2)
2.5.2 Computational Problems with Estimation of Covariance Parameters
31(3)
2.6 Tools for Model Selection
34(5)
2.6.1 Basic Concepts in Model Selection
34(1)
2.6.1.1 Nested Models
34(1)
2.6.1.2 Hypotheses: Specification and Testing
34(1)
2.6.2 Likelihood Ratio Tests (LRTs)
35(1)
2.6.2.1 Likelihood Ratio Tests for Fixed-Effect Parameters
35(1)
2.6.2.2 Likelihood Ratio Tests for Covariance Parameters
35(2)
2.6.3 Alternative Tests
37(1)
2.6.3.1 Alternative Tests for Fixed-Effect Parameters
37(1)
2.6.3.2 Alternative Tests for Covariance Parameters
38(1)
2.6.4 Information Criteria
38(1)
2.7 Model-Building Strategies
39(2)
2.7.1 The Top-Down Strategy
39(1)
2.7.2 The Step-Up Strategy
40(1)
2.8 Checking Model Assumptions (Diagnostics)
41(5)
2.8.1 Residual Diagnostics
41(1)
2.8.1.1 Raw Residuals
41(1)
2.8.1.2 Standardized and Studentized Residuals
42(1)
2.8.2 Influence Diagnostics
42(1)
2.8.3 Diagnostics for Random Effects
43(3)
2.9 Other Aspects of LMMs
46(10)
2.9.1 Predicting Random Effects: Best Linear Unbiased Predictors
46(1)
2.9.2 Intraclass Correlation Coefficients (ICCs)
47(1)
2.9.3 Problems with Model Specification (Aliasing)
47(2)
2.9.4 Missing Data
49(1)
2.9.5 Centering Covariates
50(1)
2.9.6 Fitting Linear Mixed Models to Complex Sample Survey Data
50(1)
2.9.6.1 Purely Model-Based Approaches
51(2)
2.9.6.2 Hybrid Design- and Model-Based Approaches
53(2)
2.9.7 Bayesian Analysis of Linear Mixed Models
55(1)
2.10 Chapter Summary
56(3)
3 Two-Level Models for Clustered Data: The Rat Pup Example
59(82)
3.1 Introduction
59(1)
3.2 The Rat Pup Study
59(6)
3.2.1 Study Description
59(3)
3.2.2 Data Summary
62(3)
3.3 Overview of the Rat Pup Data Analysis
65(8)
3.3.1 Analysis Steps
65(2)
3.3.2 Model Specification
67(1)
3.3.2.1 General Model Specification
67(3)
3.3.2.2 Hierarchical Model Specification
70(1)
3.3.3 Hypothesis Tests
70(3)
3.4 Analysis Steps in the Software Procedures
73(36)
3.4.1 SAS
73(12)
3.4.2 SPSS
85(6)
3.4.3 R
91(1)
3.4.3.1 Analysis Using the lme() Function
92(4)
3.4.3.2 Analysis Using the lmer() Function
96(3)
3.4.4 Stata
99(5)
3.4.5 HLM
104(1)
3.4.5.1 Data Set Preparation
104(1)
3.4.5.2 Preparing the Multivariate Data Matrix (MDM) File
105(4)
3.5 Results of Hypothesis Tests
109(3)
3.5.1 Likelihood Ratio Tests for Random Effects
109(1)
3.5.2 Likelihood Ratio Tests for Residual Error Variance
110(1)
3.5.3 F-Tests and Likelihood Ratio Tests for Fixed Effects
111(1)
3.6 Comparing Results across the Software Procedures
112(3)
3.6.1 Comparing Model 3.1 Results
112(3)
3.6.2 Comparing Model 3.2B Results
115(1)
3.6.3 Comparing Model 3.3 Results
115(1)
3.7 Interpreting Parameter Estimates in the Final Model
115(6)
3.7.1 Fixed-Effect Parameter Estimates
115(5)
3.7.2 Covariance Parameter Estimates
120(1)
3.8 Estimating the Intraclass Correlation Coefficients (ICCs)
121(3)
3.9 Calculating Predicted Values
124(1)
3.9.1 Litter-Specific (Conditional) Predicted Values
124(1)
3.9.2 Population-Averaged (Unconditional) Predicted Values
125(1)
3.10 Diagnostics for the Final Model
125(9)
3.10.1 Residual Diagnostics
125(1)
3.10.1.1 Conditional Residuals
125(2)
3.10.1.2 Conditional Studentized Residuals
127(2)
3.10.2 Distribution of BLUPs
129(1)
3.10.3 Influence Diagnostics
130(3)
3.10.3.1 Influence on Covariance Parameters
133(1)
3.10.3.2 Influence on Fixed Effects
134(1)
3.11 Software Notes and Recommendations
134(7)
3.11.1 Data Structure
134(1)
3.11.2 Syntax vs. Menus
135(1)
3.11.3 Heterogeneous Residual Error Variances for Level 2 Groups
135(1)
3.11.4 Display of the Marginal Covariance and Correlation Matrices
135(1)
3.11.5 Differences in Model Fit Criteria
135(1)
3.11.6 Differences in Tests for Fixed Effects
136(2)
3.11.7 Post-Hoc Comparisons of LS Means (Estimated Marginal Means)
138(1)
3.11.8 Calculation of Studentized Residuals and Influence Statistics
138(1)
3.11.9 Calculation of EBLUPs
138(1)
3.11.10 Tests for Covariance Parameters
138(1)
3.11.11 Reference Categories for Fixed Factors
139(2)
4 Three-Level Models for Clustered Data: The Classroom Example
141(68)
4.1 Introduction
141(2)
4.2 The Classroom Study
143(6)
4.2.1 Study Description
143(2)
4.2.2 Data Summary
145(1)
4.2.2.1 Data Set Preparation
145(1)
4.2.2.2 Preparing the Multivariate Data Matrix (MDM) File
145(4)
4.3 Overview of the Classroom Data Analysis
149(8)
4.3.1 Analysis Steps
149(2)
4.3.2 Model Specification
151(1)
4.3.2.1 General Model Specification
151(1)
4.3.2.2 Hierarchical Model Specification
152(2)
4.3.3 Hypothesis Tests
154(3)
4.4 Analysis Steps in the Software Procedures
157(29)
4.4.1 SAS
157(7)
4.4.2 SPSS
164(5)
4.4.3 R
169(1)
4.4.3.1 Analysis Using the lme() Function
169(3)
4.4.3.2 Analysis Using the lmer() Function
172(5)
4.4.4 Stata
177(3)
4.4.5 HLM
180(6)
4.5 Results of Hypothesis Tests
186(2)
4.5.1 Likelihood Ratio Tests for Random Effects
186(1)
4.5.2 Likelihood Ratio Tests and t-Tests for Fixed Effects
187(1)
4.6 Comparing Results Across the Software Procedures
188(7)
4.6.1 Comparing Model 4.1 Results
188(2)
4.6.2 Comparing Model 4.2 Results
190(1)
4.6.3 Comparing Model 4.3 Results
190(1)
4.6.4 Comparing Model 4.4 Results
190(5)
4.7 Interpreting Parameter Estimates in the Final Model
195(2)
4.7.1 Fixed-Effect Parameter Estimates
195(1)
4.7.2 Covariance Parameter Estimates
196(1)
4.8 Estimating the Intraclass Correlation Coefficients (ICCs)
197(2)
4.9 Calculating Predicted Values
199(2)
4.9.1 Conditional and Marginal Predicted Values
199(1)
4.9.2 Plotting Predicted Values Using HLM
200(1)
4.10 Diagnostics for the Final Model
201(4)
4.10.1 Plots of the EBLUPs
201(1)
4.10.2 Residual Diagnostics
202(3)
4.11 Software Notes
205(2)
4.11.1 REML vs. ML Estimation
205(1)
4.11.2 Setting up Three-Level Models in HLM
205(1)
4.11.3 Calculation of Degrees of Freedom for t-Tests in HLM
205(1)
4.11.4 Analyzing Cases with Complete Data
206(1)
4.11.5 Miscellaneous Differences
207(1)
4.12 Recommendations
207(2)
5 Models for Repeated-Measures Data: The Rat Brain Example
209(54)
5.1 Introduction
209(1)
5.2 The Rat Brain Study
209(5)
5.2.1 Study Description
209(3)
5.2.2 Data Summary
212(2)
5.3 Overview of the Rat Brain Data Analysis
214(8)
5.3.1 Analysis Steps
214(1)
5.3.2 Model Specification
215(1)
5.3.2.1 General Model Specification
215(4)
5.3.2.2 Hierarchical Model Specification
219(1)
5.3.3 Hypothesis Tests
220(2)
5.4 Analysis Steps in the Software Procedures
222(22)
5.4.1 SAS
222(5)
5.4.2 SPSS
227(2)
5.4.3 R
229(1)
5.4.3.1 Analysis Using the lme() Function
230(2)
5.4.3.2 Analysis Using the lmer() Function
232(4)
5.4.4 Stata
236(3)
5.4.5 HLM
239(1)
5.4.5.1 Data Set Preparation
239(1)
5.4.5.2 Preparing the MDM File
240(4)
5.5 Results of Hypothesis Tests
244(1)
5.5.1 Likelihood Ratio Tests for Random Effects
244(1)
5.5.2 Likelihood Ratio Tests for Residual Error Variance
244(1)
5.5.3 F-Tests for Fixed Effects
245(1)
5.6 Comparing Results across the Software Procedures
245(1)
5.6.1 Comparing Model 5.1 Results
246(1)
5.6.2 Comparing Model 5.2 Results
246(1)
5.7 Interpreting Parameter Estimates in the Final Model
246(7)
5.7.1 Fixed-Effect Parameter Estimates
246(6)
5.7.2 Covariance Parameter Estimates
252(1)
5.8 The Implied Marginal Covariance Matrix for the Final Model
253(1)
5.9 Diagnostics for the Final Model
254(4)
5.10 Software Notes
258(1)
5.10.1 Heterogeneous Residual Error Variances for Level 1 Groups
258(1)
5.10.2 EBLUPs for Multiple Random Effects
258(1)
5.11 Other Analytic Approaches
258(4)
5.11.1 Kronecker Product for More Flexible Residual Error Covariance Structures
258(2)
5.11.2 Fitting the Marginal Model
260(1)
5.11.3 Repeated-Measures ANOVA
261(1)
5.12 Recommendations
262(1)
6 Random Coefficient Models for Longitudinal Data: The Autism Example
263(60)
6.1 Introduction
263(1)
6.2 The Autism Study
263(6)
6.2.1 Study Description
263(2)
6.2.2 Data Summary
265(4)
6.3 Overview of the Autism Data Analysis
269(7)
6.3.1 Analysis Steps
269(2)
6.3.2 Model Specification
271(1)
6.3.2.1 General Model Specification
271(3)
6.3.2.2 Hierarchical Model Specification
274(1)
6.3.3 Hypothesis Tests
275(1)
6.4 Analysis Steps in the Software Procedures
276(24)
6.4.1 SAS
276(6)
6.4.2 SPSS
282(3)
6.4.3 R
285(1)
6.4.3.1 Analysis Using the lme() Function
286(2)
6.4.3.2 Analysis Using the lmer() Function
288(4)
6.4.4 Stata
292(3)
6.4.5 HLM
295(1)
6.4.5.1 Data Set Preparation
295(1)
6.4.5.2 Preparing the MDM File
296(4)
6.5 Results of Hypothesis Tests
300(2)
6.5.1 Likelihood Ratio Test for Random Effects
300(1)
6.5.2 Likelihood Ratio Tests for Fixed Effects
301(1)
6.6 Comparing Results across the Software Procedures
302(4)
6.6.1 Comparing Model 6.1 Results
302(1)
6.6.2 Comparing Model 6.2 Results
302(4)
6.6.3 Comparing Model 6.3 Results
306(1)
6.7 Interpreting Parameter Estimates in the Final Model
306(3)
6.7.1 Fixed-Effect Parameter Estimates
306(2)
6.7.2 Covariance Parameter Estimates
308(1)
6.8 Calculating Predicted Values
309(4)
6.8.1 Marginal Predicted Values
309(2)
6.8.2 Conditional Predicted Values
311(2)
6.9 Diagnostics for the Final Model
313(5)
6.9.1 Residual Diagnostics
313(3)
6.9.2 Diagnostics for the Random Effects
316(1)
6.9.3 Observed and Predicted Values
317(1)
6.10 Software Note: Computational Problems with the D Matrix
318(1)
6.10.1 Recommendations
319(1)
6.11 An Alternative Approach: Fitting the Marginal Model with an Unstructured Covariance Matrix
319(4)
6.11.1 Recommendations
322(1)
7 Models for Clustered Longitudinal Data: The Dental Veneer Example
323(66)
7.1 Introduction
323(2)
7.2 The Dental Veneer Study
325(3)
7.2.1 Study Description
325(1)
7.2.2 Data Summary
326(2)
7.3 Overview of the Dental Veneer Data Analysis
328(10)
7.3.1 Analysis Steps
328(3)
7.3.2 Model Specification
331(1)
7.3.2.1 General Model Specification
331(2)
7.3.2.2 Hierarchical Model Specification
333(3)
7.3.3 Hypothesis Tests
336(2)
7.4 Analysis Steps in the Software Procedures
338(27)
7.4.1 SAS
338(7)
7.4.2 SPSS
345(4)
7.4.3 R
349(1)
7.4.3.1 Analysis Using the lme() Function
349(4)
7.4.3.2 Analysis Using the lmer() Function
353(3)
7.4.4 Stata
356(4)
7.4.5 HLM
360(1)
7.4.5.1 Data Set Preparation
360(1)
7.4.5.2 Preparing the Multivariate Data Matrix (MDM) File
361(4)
7.5 Results of Hypothesis Tests
365(1)
7.5.1 Likelihood Ratio Tests for Random Effects
365(1)
7.5.2 Likelihood Ratio Tests for Residual Error Variance
366(1)
7.5.3 Likelihood Ratio Tests for Fixed Effects
366(1)
7.6 Comparing Results across the Software Procedures
366(6)
7.6.1 Comparing Model 7.1 Results
366(3)
7.6.2 Comparing Results for Models 7.2A, 7.2B, and 7.2C
369(3)
7.6.3 Comparing Model 7.3 Results
372(1)
7.7 Interpreting Parameter Estimates in the Final Model
372(3)
7.7.1 Fixed-Effect Parameter Estimates
372(2)
7.7.2 Covariance Parameter Estimates
374(1)
7.8 The Implied Marginal Covariance Matrix for the Final Model
375(2)
7.9 Diagnostics for the Final Model
377(5)
7.9.1 Residual Diagnostics
378(1)
7.9.2 Diagnostics for the Random Effects
379(3)
7.10 Software Notes and Recommendations
382(3)
7.10.1 ML vs. REML Estimation
382(1)
7.10.2 The Ability to Remove Random Effects from a Model
382(1)
7.10.3 Considering Alternative Residual Error Covariance Structures
382(1)
7.10.4 Aliasing of Covariance Parameters
383(1)
7.10.5 Displaying the Marginal Covariance and Correlation Matrices
384(1)
7.10.6 Miscellaneous Software Notes
384(1)
7.11 Other Analytic Approaches
385(4)
7.11.1 Modeling the Covariance Structure
385(1)
7.11.2 The Step-Up vs. Step-Down Approach to Model Building
386(1)
7.11.3 Alternative Uses of Baseline Values for the Dependent Variable
386(3)
8 Models for Data with Crossed Random Factors: The SAT Score Example
389(30)
8.1 Introduction
389(1)
8.2 The SAT Score Study
389(5)
8.2.1 Study Description
389(2)
8.2.2 Data Summary
391(3)
8.3 Overview of the SAT Score Data Analysis
394(2)
8.3.1 Model Specification
394(1)
8.3.1.1 General Model Specification
394(1)
8.3.1.2 Hierarchical Model Specification
395(1)
8.3.2 Hypothesis Tests
395(1)
8.4 Analysis Steps in the Software Procedures
396(14)
8.4.1 SAS
396(5)
8.4.2 SPSS
401(2)
8.4.3 R
403(3)
8.4.4 Stata
406(1)
8.4.5 HLM
407(1)
8.4.5.1 Data Set Preparation
407(1)
8.4.5.2 Preparing the MDM File
408(1)
8.4.5.3 Model Fitting
409(1)
8.5 Results of Hypothesis Tests
410(1)
8.5.1 Likelihood Ratio Tests for Random Effects
410(1)
8.5.2 Testing the Fixed Year Effect
411(1)
8.6 Comparing Results across the Software Procedures
411(1)
8.7 Interpreting Parameter Estimates in the Final Model
411(4)
8.7.1 Fixed-Effect Parameter Estimates
412(1)
8.7.2 Covariance Parameter Estimates
412(3)
8.8 The Implied Marginal Covariance Matrix for the Final Model
415(1)
8.9 Recommended Diagnostics for the Final Model
416(1)
8.10 Software Notes and Additional Recommendations
417(2)
9 Power Analysis and Sample Size Calculations for Linear Mixed Models
419(18)
9.1 Introduction
419(1)
9.2 Direct Power Computations
419(10)
9.2.1 Software for Direct Power Computations
420(1)
9.2.2 Examples of Direct Power Computations
420(9)
9.3 Examining Power via Simulation
429(8)
9.3.1 Examples of Simulation-Based Approaches
430(7)
A Statistical Software Resources
437(8)
A.1 Descriptions/Availability of Software Packages
437(4)
A.1.1 SAS
437(1)
A.1.2 IBM SPSS Statistics
437(1)
A.1.3 R
437(1)
A.1.4 Stata
438(1)
A.1.5 HLM
438(1)
A.2 Useful Internet Links
438(3)
B Calculation of the Marginal Covariance Matrix
441(2)
C Acronyms / Abbreviations
443(2)
Bibliography 445(8)
Index 453
Brady T. West is a research professor in the Survey Methodology Program, located within the Survey Research Center at the Institute for Social Research (ISR) on the University of Michigan-Ann Arbor (U-M) campus. He earned his PhD from the Michigan Program in Survey and Data Science (formerly the Michigan Program in Survey Methodology) in 2011. Before that, he received an MA in Applied Statistics from the U-M Statistics Department in 2002, where he was recognized as an Outstanding First-Year Applied Masters student, and a BS in Statistics with Highest Honors and Highest Distinction from the U-M Statistics Department in 2001. His current research interests include total survey error / total data quality, responsive and adaptive survey design, interviewer effects, survey paradata, the analysis of complex sample survey data, and multilevel regression models for clustered and longitudinal data. He has developed short courses on statistical analysis using SAS, SPSS, R, Stata, and HLM, and regularly consults on the use of procedures in these software packages for the analysis of longitudinal and clustered data. The author or co-author of more than 180 peer-reviewed publications and three edited volumes on survey methodology, he is also a co-author of the book Applied Survey Data Analysis (with Steven Heeringa and Patricia Berglund), the second edition of which was published by Chapman & Hall/CRC in 2017. He lives in Dexter, Michigan, with his wife Laura, his son Carter, and his daughter Everleigh.

Kathy Welch is a retired senior statistician and statistical software consultant at CSCAR (Consulting for Statistics, Computing & Analytics Research) at the University of Michigan, Ann Arbor. She received a B.A. in sociology (1969), an M.P.H. in epidemiology and health education (1975), and an M.S. in biostatistics (1984) from the University of Michigan (U-M). During her career, she regularly consulted on the use of SAS, SPSS, Stata, and HLM for the analysis of clustered and longitudinal data, taught a course on statistical software packages in the University of Michigan Department of Biostatistics, and taught short courses on SAS software. She also co-developed and co-taught a course on the analysis of data from clustered and longitudinal studies at the University of Michigan School of Public Health.

Andrzej Galecki is a research professor in the Division of Geriatric Medicine, Department of Internal Medicine, and Institute of Gerontology at the University of Michigan Medical School, and in the Department of Biostatistics at the University of Michigan School of Public Health. He received an M.Sc. in applied mathematics (1977) from the Technical University of Warsaw, Poland, and an M.D. (1981) from the Medical Academy of Warsaw. In 1985 he earned a Ph.D. in epidemiology from the Institute of Mother and Child Care in Warsaw, Poland. Since 1990, Dr. Galecki has collaborated with researchers in gerontology and geriatrics. His research interests lie in the development and application of statistical methods for analyzing correlated and over-dispersed data. He developed the SAS macro NLMEM for nonlinear mixed-effects models specified as a solution of ordinary differential equations. His research (Galecki, 1994) on a general class of covariance structures for two or more within-subject factors is considered one of the very first approaches to the joint modeling of multiple outcomes. Examples of these structures have been implemented in SAS PROC MIXED and the MIXED command in SPSS. In 2015 he was selected as a Fellow of the American Statistical Association. He is also a co-author of more than 120 publications.

Brenda Gillespie is the associate director of CSCAR (Consulting for Statistics, Computing & Analytics Research) and a research associate professor of Biostatistics at the University of Michigan, Ann Arbor. She received an A.B. in mathematics (1972) from Earlham College in Richmond, Indiana, an M.S. in statistics (1975) from The Ohio State University, and earned a Ph.D. in statistics (1989) from Temple University in Philadelphia, Pennsylvania. Dr. Gillespie has collaborated extensively with researchers in health-related fields, and has worked with mixed models as the primary statistician on the Collaborative Initial Glaucoma Treatment Study (CIGTS), the Dialysis Outcomes Practice Pattern Study (DOPPS), the Scientific Registry of Transplant Recipients (SRTR), the University of Michigan Dioxin Study, and at the Complementary and Alternative Medicine Research Center at the University of Michigan.