Longitudinal Data Analysis for the Behavioral Sciences Using R [Hardcover]

  • Format: Hardback, 568 pages, height x width: 254x177 mm, weight: 1230 g
  • Publication date: 13-Dec-2011
  • Publisher: SAGE Publications Inc
  • ISBN-10: 1412982685
  • ISBN-13: 9781412982689
This volume on research technologies in the behavioral sciences provides graduate-level students and researchers with detailed information on the development of robust longitudinal analysis tools using the statistical modeling application R. The work provides both theoretical and practical explanations of analysis processes and covers such topics as data structures for longitudinal analysis, graphing data, linear mixed effects regression, maximum likelihood estimation, multimodel inference, likelihood ratio testing, time predictors, and modeling nonlinear change. Chapters include detailed R examples, illustrations, and tables, and access to additional online resources, including data sets used in the text and R code snippets, is provided. Long is a professor of psychiatry at the University of Iowa. Annotation ©2011 Book News, Inc., Portland, OR (booknews.com)

This book is unique in its focus on showing students in the behavioral sciences how to analyze longitudinal data using R software. The book focuses on application, making it practical and accessible to students in psychology, education, and related fields who have a basic foundation in statistics. It provides explicit instructions in R computer programming throughout, showing students exactly how a specific analysis is carried out and how output is interpreted.

About the Author xv
Preface xvii
1 Introduction 1(32)
1.1 Statistical Computing 3(1)
1.2 Preliminary Issues 3(5)
1.2.1 Means Versus Correlations 3(4)
1.2.2 Measurement Issues 7(1)
1.2.3 Response Variable Assumptions 8(1)
1.3 Conceptual Overview of Linear Mixed Effects Regression 8(11)
1.3.1 Goals of Inference 9(4)
1.3.2 Random Effects 13(5)
1.3.3 How Important Are Random Effects? 18(1)
1.4 Traditional Approaches 19(2)
1.5 MPLS Data Set 21(2)
1.6 Statistical Strategy 23(4)
1.7 LMER and Multimodel Inference 27(5)
1.7.1 Statistical Hypotheses 27(5)
1.8 Overview of the Remainder of the Book 32(1)
2 Brief Introduction to R 33(30)
2.1 Obtaining and Installing R 34(1)
2.2 Functions and Packages 35(1)
2.3 Essential Syntax 36(5)
2.3.1 Prompt Versus Script Files 36(1)
2.3.2 Input and Output Appearance in This Book 36(1)
2.3.3 Quitting R 37(1)
2.3.4 Terminating a Process 37(1)
2.3.5 Basic Calculations 38(1)
2.3.6 Objects 38(1)
2.3.7 Concatenation 39(1)
2.3.8 Statistical Functions 40(1)
2.4 Data Types 41(3)
2.4.1 Missing Values 43(1)
2.5 Matrices, Data Frames, and Lists 44(5)
2.5.1 Vector 44(1)
2.5.2 Matrix 45(1)
2.5.3 Data Frame 45(1)
2.5.4 List 46(3)
2.6 Indexing 49(5)
2.6.1 Matrix and Data Frame 49(1)
2.6.2 Vector 50(1)
2.6.3 List 51(1)
2.6.4 Sorting 52(1)
2.6.5 Recoding 52(1)
2.6.6 Saving Objects 53(1)
2.6.7 Loading and Listing Objects 53(1)
2.7 User-Defined Functions 54(1)
2.8 Repetitive Operations 55(4)
2.8.1 rdply() 55(2)
2.8.2 for() Loop 57(2)
2.9 Linear Regression 59(2)
2.10 Getting Help 61(1)
2.11 Summary of Functions 61(2)
3 Data Structures and Longitudinal Analysis 63(42)
3.1 Longitudinal Data Structures 63(2)
3.1.1 Wide Format 64(1)
3.1.2 Long Format 64(1)
3.2 Reading an External File 65(7)
3.2.1 Reading a Text File With read.table() 65(3)
3.2.2 Displaying the Data Frame 68(2)
3.2.3 Converting and Recoding Variables 70(2)
3.3 Basic Statistics for Wide-Format Data 72(4)
3.3.1 Means, Variances, and Correlations 73(1)
3.3.2 Missing Data Statistics 74(1)
3.3.3 Conditioning on Static Predictors 75(1)
3.4 Reshaping Data 76(4)
3.4.1 Wide to Long Format 76(3)
3.4.2 Long to Wide Format 79(1)
3.5 Basic Statistics for Long-Format Data 80(4)
3.5.1 Means, Variances, and Correlations 80(2)
3.5.2 Missing Data Statistics 82(1)
3.5.3 Conditioning on Static Predictors 82(2)
3.6 Data Structures and Balance on Time 84(1)
3.7 Missing Data in LMER Analysis 85(4)
3.7.1 Retain or Omit Missing Data Rows? 88(1)
3.8 Missing Data Concepts 89(11)
3.8.1 Missing Completely at Random 90(1)
3.8.2 Missing at Random 91(1)
3.8.3 Not Missing at Random 92(1)
3.8.4 Missing Data Mechanisms and Statistical Analysis 93(1)
3.8.5 Missing Data Simulation 94(3)
3.8.6 LMER Analysis 97(3)
3.9 Extensions to More Complex Data Structures 100(5)
3.9.1 Multiple Dynamic Variables 100(2)
3.9.2 Unbalanced Data 102(3)
4 Graphing Longitudinal Data 105(42)
4.1 Graphing and Statistical Strategy 105(1)
4.2 Graphing With ggplot2 106(3)
4.2.1 Graph Components 107(1)
4.2.2 Layering 107(2)
4.3 Graphing Individual-Level Curves 109(13)
4.3.1 Superimposed Individual Curves 109(3)
4.3.2 Facet Plots of Individual Curves 112(1)
4.3.3 Selecting Subsets 113(2)
4.3.4 Graphing Fitted Curves 115(7)
4.4 Graphing Group-Level Curves 122(8)
4.4.1 Curve of the Means 123(3)
4.4.2 Graphing Fitted Curves 126(3)
4.4.3 Graphing Individual-Level and Group-Level Curves 129(1)
4.5 Conditioning on Static Predictors 130(13)
4.5.1 Categorical Static Predictors 132(7)
4.5.2 Quantitative Static Predictors 139(4)
4.6 Customizing Graphs 143(2)
4.6.1 Customizing Axes 143(1)
4.6.2 Customizing Facets 144(1)
4.6.3 Customizing the Legend 144(1)
4.7 Summary of ggplot2 Components 145(2)
5 Introduction to Linear Mixed Effects Regression 147(44)
5.1 Traditional Regression and the Linear Model 148(2)
5.2 Regression Examples 150(10)
5.2.1 Single Quantitative Predictor 150(4)
5.2.2 Analysis of Covariance 154(4)
5.2.3 Interaction Model 158(2)
5.3 Linear Mixed Effects Regression 160(10)
5.3.1 LMER as a Multilevel Model 163(4)
5.3.2 Random Effects as Errors 167(1)
5.3.3 Assumptions Regarding Random Effects and Random Error 168(1)
5.3.4 Random Effects and Correlated Observations 169(1)
5.4 Estimating the LMER Model 170(7)
5.4.1 Time as a Predictor 170(5)
5.4.2 Anchoring the Intercept 175(2)
5.5 LMER With Static Predictors 177(4)
5.5.1 Intercept Effects 177(1)
5.5.2 Slope and Intercept Effects 178(1)
5.5.3 Initial Status as a Static Predictor 179(1)
5.5.4 Extensions to More Complex Models 180(1)
5.5.5 Summary of lmer() Syntax 181(1)
5.6 Additional Details of LMER 181(10)
5.6.1 General Form of the LMER Model 182(2)
5.6.2 Variance-Covariance Matrix Among Repeated Measures 184(2)
5.6.3 Importance of Random Effects 186(1)
5.6.4 Working With Matrices in R 187(4)
6 Overview of Maximum Likelihood Estimation 191(36)
6.1 Conceptual Overview 192(2)
6.2 Maximum Likelihood and LM 194(18)
6.2.1 Several Unknown Parameters 202(2)
6.2.2 Exhaustive Search and Numerical Methods 204(3)
6.2.3 Restricted Maximum Likelihood 207(1)
6.2.4 Extracting the Log-Likelihood and the Deviance 208(1)
6.2.5 Comparing Models 208(4)
6.3 Maximum Likelihood and LMER 212(12)
6.3.1 LMER Deviance Function 214(2)
6.3.2 ML Standard Errors 216(4)
6.3.3 Additional SE Details 220(2)
6.3.4 Default lmer() Output 222(1)
6.3.5 Assumptions Regarding Missing Data 223(1)
6.4 Additional Details of ML for LMER 224(3)
7 Multimodel Inference and Akaike's Information Criterion 227(58)
7.1 Objects of Inference 228(4)
7.2 Statistical Strategy 232(3)
7.3 AIC and Predictive Accuracy 235(11)
7.3.1 Extension to LMER 243(2)
7.3.2 AIC Corrected 245(1)
7.4 AICc and Effect Size 246(8)
7.4.1 Delta 246(2)
7.4.2 Weight of Evidence 248(4)
7.4.3 Evidence Ratio 252(2)
7.5 AICc and Multimodel Inference 254(6)
7.5.1 Contrast With NHST 255(5)
7.6 Example of Multimodel Analysis 260(15)
7.6.1 Guidelines for Model Formulation 260(1)
7.6.2 Example Set of Models 261(3)
7.6.3 Bar Graphs of Results 264(1)
7.6.4 Interpretation of Global Results 265(3)
7.6.5 Details of Models 268(5)
7.6.6 Comments Regarding the Multimodel Approach 273(1)
7.6.7 Post Hoc Models 273(2)
7.7 Example Write-up 275(2)
7.8 Parametric Bootstrap of the Evidence Ratio 277(5)
7.8.1 Performing the Parametric Bootstrap 278(4)
7.8.2 Caveats Regarding the Parametric Bootstrap 282(1)
7.9 Bayesian Information Criterion 282(3)
8 Likelihood Ratio Test 285(36)
8.1 Why Use the Likelihood Ratio Test? 286(2)
8.2 Fisher and Neyman-Pearson 288(3)
8.3 Evaluation of Two Nested Models 291(10)
8.3.1 Calibrating p-Values Based on Predictive Accuracy 295(6)
8.4 Approaches to Testing Multiple Models 301(1)
8.5 Step-Up Approach 302(5)
8.5.1 Order of Testing 306(1)
8.5.2 Comments on the Step-Up Approach 307(1)
8.6 Top-Down Approach 307(3)
8.7 Comparison of Approaches 310(2)
8.8 Parametric Bootstrap 312(4)
8.8.1 Comments on the Parametric Bootstrap 315(1)
8.9 Planning a Study 316(5)
8.9.1 Comment on the Procedure 320(1)
9 Selecting Time Predictors 321(36)
9.1 Selection of Time Transformations 322(3)
9.2 Group-Level Selection of Time Transformations 325(1)
9.3 Multimodel Inference 326(7)
9.3.1 Analysis Without Static Predictors 327(2)
9.3.2 Analysis With Static Predictors 329(4)
9.4 Likelihood Ratio Test 333(2)
9.4.1 Analysis Without Static Predictors 333(2)
9.4.2 Analysis With Static Predictors 335(1)
9.5 Cautions Concerning Group-Level Selection 335(1)
9.6 Subject-Level Selection of Time Transformations 336(21)
9.6.1 Level 1 Polynomial Model 336(1)
9.6.2 Missing Data 337(1)
9.6.3 Subject-Level Fits 338(8)
9.6.4 Pooled Measures of Fit 346(4)
9.6.5 Clustering of Subject Curves 350(7)
10 Selecting Random Effects 357(48)
10.1 Automatic Selection of Random Effects 358(1)
10.2 Random Effects and Variance Components 359(12)
10.2.1 Restricted Maximum Likelihood 361(2)
10.2.2 Random Effects and Correlated Data 363(8)
10.3 Descriptive Methods 371(12)
10.3.1 OLS Estimates 372(4)
10.3.2 Examining Residuals 376(6)
10.3.3 Residuals and Normality 382(1)
10.4 Inferential Methods 383(11)
10.4.1 Likelihood Ratio Test 383(10)
10.4.2 AICc 393(1)
10.5 Variance Components and Static Predictors 394(1)
10.6 Predicted Random Effects 394(11)
10.6.1 Evaluating the Normality Assumption 397(2)
10.6.2 Predicted Values for an Individual 399(6)
11 Extending Linear Mixed Effects Regression 405(38)
11.1 Graphing Fitted Curves 405(4)
11.2 Static Predictors With Multiple Levels 409(11)
11.2.1 Evaluating Sets of Dummy Variables 415(1)
11.2.2 Evaluating Individual Dummy Variables 416(4)
11.3 Interactions Among Static Predictors 420(7)
11.3.1 Static Predictor Interactions With lmer() 422(2)
11.3.2 Interpreting Interactions 424(2)
11.3.3 Nonlinear Static Predictor Effects 426(1)
11.4 Indexes of Absolute Effect Size in LMER 427(6)
11.4.1 Alternative Indexes 429(4)
11.5 Additional Transformations 433(10)
11.5.1 Time Units and Variances 434(3)
11.5.2 Transforming for Standardized Change 437(2)
11.5.3 Standardizing and Compositing 439(4)
12 Modeling Nonlinear Change 443(46)
12.1 Data Set and Analysis Strategy 444(4)
12.2 Global Versus Local Models 448(1)
12.3 Polynomials 449(11)
12.3.1 Mean-Corrected Polynomials 454(1)
12.3.2 Orthogonal Polynomials 454(1)
12.3.3 The poly() Function 455(4)
12.3.4 Polynomial Example 459(1)
12.4 Alternatives to Polynomials 460(1)
12.5 Trigonometric Functions 461(5)
12.6 Fractional Polynomials 466(12)
12.6.1 First-Order Fractional Polynomials 467(4)
12.6.2 Second-Order Fractional Polynomials 471(3)
12.6.3 Static Predictors 474(1)
12.6.4 Caveats Regarding the Use of Fractional Polynomials 475(3)
12.7 Spline Models 478(8)
12.7.1 Linear Spline Models 479(6)
12.7.2 Higher Order Regression Splines 485(1)
12.8 Additional Details 486(3)
12.8.1 Computing Orthogonal Polynomials 486(2)
12.8.2 General Form of Fractional Polynomials 488(1)
13 Advanced Topics 489(26)
13.1 Dynamic Predictors 490(10)
13.1.1 Dynamic Predictor as a Single Effect 493(3)
13.1.2 Dynamic Predictor With a Time Variable 496(4)
13.2 Multiple Response Variables 500(7)
13.2.1 Reading and Mathematics 500(1)
13.2.2 Analyzing Two Responses With lmer() 501(6)
13.3 Additional Levels of Nesting 507(8)
13.3.1 Three-Level Model 508(5)
13.3.2 Static Predictors in Three-Level Models 513(2)
Appendix: Soft Introduction to Matrix Algebra 515(10)
A.1 Matrices 515(2)
A.2 Transpose 517(1)
A.3 Matrix Addition 518(1)
A.4 Multiplication of a Matrix by a Scalar 518(1)
A.5 Matrix Multiplication 518(2)
A.6 Determinant 520(1)
A.7 Inverse 521(2)
A.8 Matrix Algebra and R Functions 523(2)
References 525(10)
Author Index 535(4)
Subject Index 539
Jeffrey D. Long, PhD, is Professor of Psychiatry in the Carver College of Medicine at the University of Iowa. He is also the Head Statistician for Neurobiological Predictors of Huntington's Disease (PREDICT-HD), a longitudinal NIH-funded study of early detection of Huntington's Disease. His undergraduate degree is from the University of California at Los Angeles, and his doctoral degree is from the University of Southern California in the area of quantitative psychology.