
Bayesian Structural Equation Modeling [Hardcover]

Sarah Depaoli (University of California, Merced, United States)
  • Format: Hardback, 521 pages, height x width: 254x178 mm, weight: 1109 g
  • Publication date: 26-Oct-2021
  • Publisher: Guilford Press
  • ISBN-10: 1462547745
  • ISBN-13: 9781462547746
"This book is meant as a guide for implementing Bayesian methods for latent variable models. I have included thorough examples in each chapter, highlighting problems that can arise during estimation, potential solutions, and guides for how to write up findings for a journal article. This book is structured into 12 main chapters, beginning with the introductory chapters that make up Part I. Part II comprises Chapters 3-5; each of these chapters deals with various models and techniques related to measurement models within SEM. Part III contains Chapters 6-7, on extending the structural model. Part IV contains Chapters 8-10, on longitudinal and mixture models. Finally, Part V contains chapters that discuss special topics."--

This book offers researchers a systematic and accessible introduction to using a Bayesian framework in structural equation modeling (SEM). Stand-alone chapters on each SEM model clearly explain the Bayesian form of the model and walk the reader through implementation. Engaging worked-through examples from diverse social science subfields illustrate the various modeling techniques, highlighting statistical or estimation problems that are likely to arise and describing potential solutions. For each model, instructions are provided for writing up findings for publication, including annotated sample data analysis plans and results sections. Other user-friendly features in every chapter include "Major Take-Home Points," notation glossaries, annotated suggestions for further reading, and sample code in both Mplus and R. The companion website supplies datasets; annotated code for implementation in both Mplus and R, so that users can work within their preferred platform; and output for all of the book's examples.

Reviews

"The structure of each chapter is extremely well thought-out and facilitates understanding. A brief introduction to each topic is followed by an in-depth discussion, an example, and hypothetical results and discussion. The section about how to write up findings for each SEM analysis will be extremely helpful to readers; this is something that instructors are typically left to try to come up with on their own. I would absolutely consider using this book for a class on Bayesian SEM--or a lecture on the topic in a broader SEM course--as well as for my own professional use as a reference guide."--Katerina Marcoulides, PhD, Department of Psychology, University of Minnesota Twin Cities

"Depaoli has created a book that will quickly have a positive impact on researchers and students looking to expand their analytic capabilities. The text's design and writing style will engage readers with different levels of familiarity with Bayesian analysis and SEM. Instructors can flexibly change the level and amount of technical and mathematical information for different courses. I will add this text to my course to replace the hodgepodge of documents, website links, and articles needed for comprehension and usage of Bayesian SEM."--James B. Schreiber, PhD, School of Nursing, Duquesne University

"Researchers interested in applying Bayesian SEM in the social sciences will benefit from reading this book or taking a course based on it. Each chapter is well organized; the introduction sections are particularly useful. All methods are illustrated by code, which is an important step toward implementing the methods and applying them to real problems."--Peng Ding, PhD, Department of Statistics, University of California, Berkeley

"This book is a 'must read' for anyone who wants to do or review Bayesian SEM. It is structured well for the advanced graduate student and moderately versed researcher. The chapters are highly readable, and I really appreciate the annotated bibliography of select resources, which will be a great help to students and faculty."--Michael D. Toland, PhD, Executive Director, The Herb Innovation Center, Judith Herb College of Education, University of Toledo

Part I Introduction
1 Background
3(23)
1.1 Bayesian Statistical Modeling: The Frequency of Use
3(3)
1.2 The Key Impediments within Bayesian Statistics
6(3)
1.3 Benefits of Bayesian Statistics within SEM
9(3)
1.3.1 A Recap: Why Bayesian SEM?
12(1)
1.4 Mastering the SEM Basics: Precursors to Bayesian SEM
12(8)
1.4.1 The Fundamentals of SEM Diagrams and Terminology
13(4)
1.4.2 LISREL Notation
17(2)
1.4.3 Additional Comments about Notation
19(1)
1.5 Datasets Used in the Chapter Examples
20(6)
1.5.1 Cynicism Data
21(1)
1.5.2 Early Childhood Longitudinal Survey-Kindergarten Class
21(1)
1.5.3 Holzinger and Swineford (1939)
21(1)
1.5.4 IPIP 50: Big Five Questionnaire
22(1)
1.5.5 Lakaev Academic Stress Response Scale
23(1)
1.5.6 Political Democracy
23(1)
1.5.7 Program for International Student Assessment
24(1)
1.5.8 Youth Risk Behavior Survey
25(1)
2 Basic Elements of Bayesian Statistics
26(63)
2.1 A Brief Introduction to Bayesian Statistics
26(1)
2.2 Setting the Stage
27(2)
2.3 Comparing Frequentist and Bayesian Estimation
29(2)
2.4 The Bayesian Research Circle
31(1)
2.5 Bayes' Rule
32(2)
2.6 Prior Distributions
34(9)
2.6.1 The Normal Prior
35(1)
2.6.2 The Uniform Prior
35(1)
2.6.3 The Inverse Gamma Prior
35(1)
2.6.4 The Gamma Prior
36(1)
2.6.5 The Inverse Wishart Prior
36(1)
2.6.6 The Wishart Prior
36(1)
2.6.7 The Beta Prior
37(1)
2.6.8 The Dirichlet Prior
37(1)
2.6.9 Different Levels of Informativeness for Prior Distributions
38(1)
2.6.10 Prior Elicitation
39(3)
2.6.11 Prior Predictive Checking
42(1)
2.7 The Likelihood (Frequentist and Bayesian Perspectives)
43(2)
2.8 The Posterior
45(10)
2.8.1 An Introduction to Markov Chain Monte Carlo Methods
45(2)
2.8.2 Sampling Algorithms
47(5)
2.8.3 Convergence
52(1)
2.8.4 MCMC Burn-In Phase
53(1)
2.8.5 The Number of Markov Chains
53(1)
2.8.6 A Note about Starting Values
54(1)
2.8.7 Thinning a Chain
54(1)
2.9 Posterior Inference
55(7)
2.9.1 Posterior Summary Statistics
55(1)
2.9.2 Intervals
56(1)
2.9.3 Effective Sample Size
56(1)
2.9.4 Trace-Plots
57(1)
2.9.5 Autocorrelation Plots
57(1)
2.9.6 Posterior Histogram and Density Plots
57(1)
2.9.7 HDI Histogram and Density Plots
57(1)
2.9.8 Model Assessment
58(1)
2.9.9 Sensitivity Analysis
58(4)
2.10 A Simple Example
62(9)
2.11 Chapter Summary
71(5)
2.11.1 Major Take-Home Points
71(2)
2.11.2 Notation Referenced
73(2)
2.11.3 Annotated Bibliography of Select Resources
75(1)
Appendix 2.A Getting Started with R
76(13)
Part II Measurement Models and Related Issues
3 The Confirmatory Factor Analysis Model
89(49)
3.1 Introduction to Bayesian CFA
89(2)
3.2 The Model and Notation
91(5)
3.2.1 Handling Indeterminacies in CFA
93(3)
3.3 The Bayesian Form of the CFA Model
96(5)
3.3.1 Additional Information about the (Inverse) Wishart Prior
97(3)
3.3.2 Alternative Priors for Covariance Matrices
100(1)
3.3.3 Alternative Priors for Variances
100(1)
3.3.4 Alternative Priors for Factor Loadings
101(1)
3.4 Example 1: Basic CFA Model
101(19)
3.5 Example 2: Implementing Near-Zero Priors for Cross-Loadings
120(4)
3.6 How to Write Up Bayesian CFA Results
124(4)
3.6.1 Hypothetical Data Analysis Plan
125(1)
3.6.2 Hypothetical Results Section
125(2)
3.6.3 Discussion Points Relevant to the Analysis
127(1)
3.7 Chapter Summary
128(10)
3.7.1 Major Take-Home Points
128(3)
3.7.2 Notation Referenced
131(1)
3.7.3 Annotated Bibliography of Select Resources
132(1)
3.7.4 Example Code for Mplus
133(3)
3.7.5 Example Code for R
136(2)
4 Multiple-Group Models
138(31)
4.1 A Brief Introduction to Multiple-Group Models
138(1)
4.2 Introduction to the Multiple-Group CFA Model (with Mean Differences)
139(1)
4.3 The Model and Notation
140(2)
4.4 The Bayesian Form of the Multiple-Group CFA Model
142(2)
4.5 Example 1: Using a Mean-Difference, Multiple-Group CFA Model to Assess for School Differences
144(9)
4.6 Introduction to the MIMIC Model
153(1)
4.7 The Model and Notation
153(1)
4.8 The Bayesian Form of the MIMIC Model
154(2)
4.9 Example 2: Using the MIMIC Model to Assess for School Differences
156(2)
4.10 How to Write Up Bayesian Multiple-Group Model Results with Mean Differences
158(3)
4.10.1 Hypothetical Data Analysis Plan
158(1)
4.10.2 Hypothetical Results Section
159(1)
4.10.3 Discussion Points Relevant to the Analysis
160(1)
4.11 Chapter Summary
161(8)
4.11.1 Major Take-Home Points
162(1)
4.11.2 Notation Referenced
163(2)
4.11.3 Annotated Bibliography of Select Resources
165(1)
4.11.4 Example Code for Mplus
166(1)
4.11.5 Example Code for R
167(2)
5 Measurement Invariance Testing
169(30)
5.1 A Brief Introduction to MI in SEM
169(4)
5.1.1 Stages of Traditional MI Testing
170(2)
5.1.2 Challenges within Traditional MI Testing
172(1)
5.2 Bayesian Approximate MI
173(1)
5.3 The Model and Notation
174(2)
5.4 Priors within Bayesian Approximate MI
176(2)
5.5 Example: Illustrating Bayesian Approximate MI for School Differences
178(8)
5.5.1 Results for the Conventional MI Tests
181(1)
5.5.2 Results for the Bayesian Approximate MI Tests
182(2)
5.5.3 Results Comparing Latent Means across Approaches
184(2)
5.6 How to Write Up Bayesian Approximate MI Results
186(4)
5.6.1 Hypothetical Data Analysis Plan
187(1)
5.6.2 Hypothetical Analytic Procedure
188(1)
5.6.3 Hypothetical Results Section
189(1)
5.6.4 Discussion Points Relevant to the Analysis
190(1)
5.7 Chapter Summary
190(9)
5.7.1 Major Take-Home Points
190(2)
5.7.2 Notation Referenced
192(1)
5.7.3 Annotated Bibliography of Select Resources
193(1)
5.7.4 Example Code for Mplus
194(1)
5.7.5 Example Code for R
195(4)
Part III Extending the Structural Model
6 The General Structural Equation Model
199(29)
6.1 Introduction to Bayesian SEM
199(2)
6.2 The Model and Notation
201(2)
6.3 The Bayesian Form of SEM
203(1)
6.4 Example: Revisiting Bollen's (1989) Political Democracy Example
204(9)
6.4.1 Motivation for This Example
205(1)
6.4.2 The Current Example
206(7)
6.5 How to Write Up Bayesian SEM Results
213(3)
6.5.1 Hypothetical Data Analysis Plan
213(1)
6.5.2 Hypothetical Results Section
214(1)
6.5.3 Discussion Points Relevant to the Analysis
215(1)
6.6 Chapter Summary
216(8)
6.6.1 Major Take-Home Points
217(2)
6.6.2 Notation Referenced
219(2)
6.6.3 Annotated Bibliography of Select Resources
221(1)
6.6.4 Example Code for Mplus
222(1)
6.6.5 Example Code for R
223(1)
Appendix 6.A Causal Inference and Mediation Analysis
224(4)
7 Multilevel Structural Equation Modeling
228(47)
7.1 Introduction to MSEM
228(5)
7.1.1 MSEM Applications
230(2)
7.1.2 Contextual Effects
232(1)
7.2 Extending MSEM into the Bayesian Context
233(2)
7.3 The Model and Notation
235(3)
7.4 The Bayesian Form of MSEM
238(5)
7.5 Example 1: A Two-Level CFA with Continuous Items
243(4)
7.5.1 Implementation of Example 1
244(2)
7.5.2 Example 1 Results
246(1)
7.6 Example 2: A Three-Level CFA with Categorical Items
247(11)
7.6.1 Implementation of Example 2
253(1)
7.6.2 Example 2 Results
253(5)
7.7 How to Write Up Bayesian MSEM Results
258(3)
7.7.1 Hypothetical Data Analysis Plan
258(1)
7.7.2 Hypothetical Results Section
259(1)
7.7.3 Discussion Points Relevant to the Analysis
260(1)
7.8 Chapter Summary
261(14)
7.8.1 Major Take-Home Points
262(2)
7.8.2 Notation Referenced
264(3)
7.8.3 Annotated Bibliography of Select Resources
267(1)
7.8.4 Example Code for Mplus
268(1)
7.8.5 Example Code for R
268(7)
Part IV Longitudinal and Mixture Models
8 The Latent Growth Curve Model
275(33)
8.1 Introduction to Bayesian LGCM
275(1)
8.2 The Model and Notation
276(4)
8.2.1 Extensions of the LGCM
279(1)
8.3 The Bayesian Form of the LGCM
280(3)
8.3.1 Alternative Priors for the Factor Variances and Covariances
281(2)
8.4 Example 1: Bayesian Estimation of the LGCM Using ECLS-K Reading Data
283(4)
8.5 Example 2: Extending the Example to Include Separation Strategy Priors
287(4)
8.6 Example 3: Extending the Framework to Assessing MI over Time
291(6)
8.7 How to Write Up Bayesian LGCM Results
297(2)
8.7.1 Hypothetical Data Analysis Plan
297(1)
8.7.2 Hypothetical Results Section
298(1)
8.7.3 Discussion Points Relevant to the Analysis
299(1)
8.8 Chapter Summary
299(9)
8.8.1 Major Take-Home Points
300(2)
8.8.2 Notation Referenced
302(2)
8.8.3 Annotated Bibliography of Select Resources
304(1)
8.8.4 Example Code for Mplus
305(1)
8.8.5 Example Code for R
305(3)
9 The Latent Class Model
308(46)
9.1 A Brief Introduction to Mixture Models
308(1)
9.2 Introduction to Bayesian LCA
309(1)
9.3 The Model and Notation
310(3)
9.3.1 Introducing the Issue of Class Separation
312(1)
9.4 The Bayesian Form of the LCA Model
313(2)
9.4.1 Adding Flexibility to the LCA Model
314(1)
9.5 Mixture Models, Label Switching, and Possible Solutions
315(6)
9.5.1 Identifiability Constraints
319(1)
9.5.2 Relabeling Algorithms
320(1)
9.5.3 Label Invariant Loss Functions
321(1)
9.5.4 Final Thoughts on Label Switching
321(1)
9.6 Example: A Demonstration of Bayesian LCA
321(19)
9.6.1 Motivation for This Example
322(2)
9.6.2 The Current Example
324(16)
9.7 How to Write Up Bayesian LCA Results
340(4)
9.7.1 Hypothetical Data Analysis Plan
340(1)
9.7.2 Hypothetical Results Section
341(2)
9.7.3 Discussion Points Relevant to the Analysis
343(1)
9.8 Chapter Summary
344(10)
9.8.1 Major Take-Home Points
344(2)
9.8.2 Notation Referenced
346(1)
9.8.3 Annotated Bibliography of Select Resources
347(1)
9.8.4 Example Code for Mplus
348(4)
9.8.5 Example Code for R
352(2)
10 The Latent Growth Mixture Model
354(39)
10.1 Introduction to Bayesian LGMM
354(2)
10.2 The Model and Notation
356(7)
10.2.1 Concerns with Class Separation
359(4)
10.3 The Bayesian Form of the LGMM
363(3)
10.3.1 Alternative Priors for Factor Means
365(1)
10.3.2 Alternative Priors for the Measurement Error Covariance Matrix
365(1)
10.3.3 Alternative Priors for the Factor Covariance Matrix
365(1)
10.3.4 Handling Label Switching in LGMMs
365(1)
10.4 Example: Comparing Different Prior Conditions in an LGMM
366(12)
10.5 How to Write Up Bayesian LGMM Results
378(3)
10.5.1 Hypothetical Data Analysis Plan
378(1)
10.5.2 Hypothetical Results Section
379(2)
10.5.3 Discussion Points Relevant to the Analysis
381(1)
10.6 Chapter Summary
381(12)
10.6.1 Major Take-Home Points
382(2)
10.6.2 Notation Referenced
384(2)
10.6.3 Annotated Bibliography of Select Resources
386(1)
10.6.4 Example Code for Mplus
387(1)
10.6.5 Example Code for R
387(6)
Part V Special Topics
11 Model Assessment
393(41)
11.1 Model Comparison and Cross-Validation
395(9)
11.1.1 Bayes Factors
395(3)
11.1.2 The Bayesian Information Criterion
398(2)
11.1.3 The Deviance Information Criterion
400(2)
11.1.4 The Widely Applicable Information Criterion
402(1)
11.1.5 Leave-One-Out Cross-Validation
403(1)
11.2 Model Fit
404(7)
11.2.1 Posterior Predictive Model Checking
404(5)
11.2.2 Missing Data and the PPC Procedure
409(1)
11.2.3 Testing Near-Zero Parameters through the PPPP
410(1)
11.3 Bayesian Approximate Fit
411(5)
11.3.1 Bayesian Root Mean Square Error of Approximation
412(1)
11.3.2 Bayesian Tucker-Lewis Index
413(1)
11.3.3 Bayesian Normed Fit Index
414(1)
11.3.4 Bayesian Comparative Fit Index
414(1)
11.3.5 Implementation of These Indices
415(1)
11.4 Example 1: Illustrating the PPC and the PPPP for CFA
416(3)
11.5 Example 2: Illustrating Bayesian Approximate Fit for CFA
419(3)
11.6 How to Write Up Bayesian Approximate Fit Results
422(3)
11.6.1 Hypothetical Data Analysis Plan
422(1)
11.6.2 Hypothetical Results Section
423(2)
11.6.3 Discussion Points Relevant to the Analysis
425(1)
11.7 Chapter Summary
425(9)
11.7.1 Major Take-Home Points
425(2)
11.7.2 Notation Referenced
427(4)
11.7.3 Annotated Bibliography of Select Resources
431(1)
11.7.4 Example Code for Mplus
432(1)
11.7.5 Example Code for R
432(2)
12 Important Points to Consider
434(39)
12.1 Implementation and Reporting of Bayesian Results
434(2)
12.1.1 Priors Implemented
435(1)
12.1.2 Convergence
435(1)
12.1.3 Sensitivity Analysis
435(1)
12.1.4 How Should We Interpret These Findings?
436(1)
12.2 Points to Check Prior to Data Analysis
436(7)
12.2.1 Is Your Model Formulated "Correctly"?
436(4)
12.2.2 Do You Understand the Priors?
440(3)
12.3 Points to Check after Initial Data Analysis, but before Interpretation of Results
443(13)
12.3.1 Convergence
443(5)
12.3.2 Does Convergence Remain after Doubling the Number of Iterations?
448(2)
12.3.3 Is There Ample Information in the Posterior Histogram?
450(2)
12.3.4 Is There a Strong Degree of Autocorrelation in the Posterior?
452(3)
12.3.5 Does the Posterior Make Substantive Sense?
455(1)
12.4 Understanding the Influence of Priors
456(6)
12.4.1 Examining the Influence of Priors on Multivariate Parameters (e.g., Covariance Matrices)
457(3)
12.4.2 Comparing the Original Prior to Other Diffuse or Subjective Priors
460(2)
12.5 Incorporating Model Fit or Model Comparison
462(1)
12.6 Interpreting Model Results the "Bayesian Way"
463(1)
12.7 How to Write Up Bayesian Results
464(5)
12.7.1 (Hypothetical) Results for Bayesian Two-Factor CFA
465(4)
12.8 How to Review Bayesian Work
469(1)
12.9 Chapter Summary and Looking Forward
470(3)
Glossary 473(9)
References 482(17)
Author Index 499(5)
Subject Index 504(17)
About the Author 521
Sarah Depaoli, PhD, is Associate Professor of Quantitative Methods, Measurement, and Statistics in the Department of Psychological Sciences at the University of California, Merced, where she teaches undergraduate statistics and a variety of graduate courses in quantitative methods. Her research interests include examining different facets of Bayesian estimation for latent variable, growth, and finite mixture models. She has a continued interest in the influence of prior distributions and robustness of results under different prior specifications, as well as issues tied to latent class separation. Her recent research has focused on using Bayesian semi- and non-parametric methods for obtaining proper class enumeration and assignment, examining parameterization issues within Bayesian SEM, and studying the impact of priors on longitudinal models.