E-book: Total Survey Error in Practice

Edited by Paul P. Biemer, Edith de Leeuw, Stephanie Eckman, Brad Edwards, Frauke Kreuter, Lars E. Lyberg, N. Clyde Tucker, and Brady T. West
  • Format: PDF+DRM
  • Price: 118.50 €*
  • * The price is final, i.e. no further discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste): not allowed
  • Printing: not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has released this e-book in encrypted form, which means that you must install special software to read it. You will also need to create an Adobe ID. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android).

    To read on a PC or Mac, install Adobe Digital Editions. (This is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is most likely already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

Featuring a timely presentation of total survey error (TSE), this edited volume introduces valuable tools for understanding and improving survey data quality in the context of evolving large-scale data sets.

This book provides an overview of the TSE framework and of current TSE research as it relates to survey design, data collection, estimation, and analysis. Because survey data inform many public policy and business decisions, the book focuses on a framework for understanding and improving survey data quality. It also addresses data quality issues in official statistics and in social, opinion, and market research as these fields continue to evolve, producing larger and messier data sets; this evolution challenges survey organizations to find ways to collect and process data more efficiently without sacrificing quality. The volume presents up-to-date research from over 70 contributors, drawn from leading academics and researchers across a range of fields. The chapters are organized into five main sections: The Concept of TSE and the TSE Paradigm, Implications for Survey Design, Data Collection and Data Processing Applications, Evaluation and Improvement, and Estimation and Analysis. Each chapter introduces and examines multiple error sources, such as sampling error, measurement error, and nonresponse error, which often pose the greatest risks to data quality, while also encouraging readers not to lose sight of less commonly studied error sources, such as coverage error, processing error, and specification error. The book also notes the relationships between errors and the ways in which efforts to reduce one type can increase another, resulting in an estimate with larger total error.
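
That tradeoff has a standard formalization in the TSE literature; the following sketch is a textbook-style illustration (not quoted from this listing, and the B labels are generic, not the book's notation), decomposing an estimate's total error into squared bias and variance, with bias accumulating across error sources:

% Illustrative TSE decomposition; the B terms are generic labels for
% the bias contributed by each error source, not notation from the book.
\[
  \mathrm{MSE}(\hat{\theta}) = \mathrm{Bias}(\hat{\theta})^{2} + \mathrm{Var}(\hat{\theta}),
  \qquad
  \mathrm{Bias}(\hat{\theta}) \approx B_{\text{coverage}} + B_{\text{nonresponse}} + B_{\text{measurement}} + \cdots
\]

A design change that shrinks one bias term (for example, intensive nonresponse follow-up) can inflate another term or the variance, so the total MSE can grow even as one error source improves.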

This book:

  • Features various error sources, and the complex relationships between them, in 25 high-quality chapters on the most up-to-date research in the field of TSE
  • Provides comprehensive reviews of the literature on error sources as well as data collection approaches and estimation methods to reduce their effects
  • Presents examples of recent international events that demonstrate the effects of data error, the importance of survey data quality, and the real-world issues that arise from these errors
  • Spans the four pillars of the total survey error paradigm (design, data collection, evaluation, and analysis) to address key data quality issues in official statistics and survey research

Total Survey Error in Practice is a reference for survey researchers and data scientists in research areas that include social science, public opinion, public policy, and business. It can also be used as a textbook or supplementary material for a graduate-level course in survey research methods.
Notes on Contributors xix
Section 1: The Concept of TSE and the TSE Paradigm 1(94)
1 The Roots and Evolution of the Total Survey Error Concept
3(20)
Lars E. Lyberg
Diana Maria Stukel
1.1 Introduction and Historical Backdrop
3(2)
1.2 Specific Error Sources and Their Control or Evaluation
5(5)
1.3 Survey Models and Total Survey Design
10(2)
1.4 The Advent of More Systematic Approaches Toward Survey Quality
12(4)
1.5 What the Future Will Bring
16(2)
References
18(5)
2 Total Twitter Error: Decomposing Public Opinion Measurement on Twitter from a Total Survey Error Perspective
23(24)
Yuli Patrick Hsieh
Joe Murphy
2.1 Introduction
23(2)
2.1.1 Social Media: A Potential Alternative to Surveys?
23(1)
2.1.2 TSE as a Launching Point for Evaluating Social Media Error
24(1)
2.2 Social Media: An Evolving Online Public Sphere
25(2)
2.2.1 Nature, Norms, and Usage Behaviors of Twitter
25(1)
2.2.2 Research on Public Opinion on Twitter
26(1)
2.3 Components of Twitter Error
27(4)
2.3.1 Coverage Error
28(1)
2.3.2 Query Error
28(1)
2.3.3 Interpretation Error
29(1)
2.3.4 The Deviation of Unstructured Data Errors from TSE
30(1)
2.4 Studying Public Opinion on the Twittersphere and the Potential Error Sources of Twitter Data: Two Case Studies
31(9)
2.4.1 Research Questions and Methodology of Twitter Data Analysis
32(1)
2.4.2 Potential Coverage Error in Twitter Examples
33(3)
2.4.3 Potential Query Error in Twitter Examples
36(1)
2.4.3.1 Implications of Including or Excluding RTs for Error
36(1)
2.4.3.2 Implications of Query Iterations for Error
37(2)
2.4.4 Potential Interpretation Error in Twitter Examples
39(1)
2.5 Discussion
40(2)
2.5.1 A Framework That Better Describes Twitter Data Errors
40(1)
2.5.2 Other Subclasses of Errors to Be Investigated
41(1)
2.6 Conclusion
42(1)
2.6.1 What Advice We Offer for Researchers and Research Consumers
42(1)
2.6.2 Directions for Future Research
42(1)
References
43(4)
3 Big Data: A Survey Research Perspective
47(24)
Reg Baker
3.1 Introduction
47(1)
3.2 Definitions
48(8)
3.2.1 Sources
49(1)
3.2.2 Attributes
49(1)
3.2.2.1 Volume
50(1)
3.2.2.2 Variety
50(1)
3.2.2.3 Velocity
50(1)
3.2.2.4 Veracity
50(1)
3.2.2.5 Variability
52(1)
3.2.2.6 Value
52(1)
3.2.2.7 Visualization
52(1)
3.2.3 The Making of Big Data
52(4)
3.3 The Analytic Challenge: From Database Marketing to Big Data and Data Science
56(2)
3.4 Assessing Data Quality
58(1)
3.4.1 Validity
58(1)
3.4.2 Missingness
59(1)
3.4.3 Representation
59(1)
3.5 Applications in Market, Opinion, and Social Research
59(3)
3.5.1 Adding Value through Linkage
60(1)
3.5.2 Combining Big Data and Surveys in Market Research
61(1)
3.6 The Ethics of Research Using Big Data
62(1)
3.7 The Future of Surveys in a Data-Rich Environment
62(3)
References
65(6)
4 The Role of Statistical Disclosure Limitation in Total Survey Error
71(24)
Alan F. Karr
4.1 Introduction
71(1)
4.2 Primer on SDL
72(3)
4.3 TSE-Aware SDL
75(4)
4.3.1 Additive Noise
75(3)
4.3.2 Data Swapping
78(1)
4.4 Edit-Respecting SDL
79(4)
4.4.1 Simulation Experiment
80(2)
4.4.2 A Deeper Issue
82(1)
4.5 SDL-Aware TSE
83(1)
4.6 Full Unification of Edit, Imputation, and SDL
84(3)
4.7 "Big Data" Issues
87(2)
4.8 Conclusion
89(2)
Acknowledgments
91(1)
References
92(3)
Section 2: Implications for Survey Design 95(158)
5 The Undercoverage-Nonresponse Tradeoff
97(18)
Stephanie Eckman
Frauke Kreuter
5.1 Introduction
97(1)
5.2 Examples of the Tradeoff
98(1)
5.3 Simple Demonstration of the Tradeoff
99(1)
5.4 Coverage and Response Propensities and Bias
100(2)
5.5 Simulation Study of Rates and Bias
102(8)
5.5.1 Simulation Setup
102(3)
5.5.2 Results for Coverage and Response Rates
105(1)
5.5.3 Results for Undercoverage and Nonresponse Bias
106(1)
5.5.3.1 Scenario 1
107(1)
5.5.3.2 Scenario 2
108(1)
5.5.3.3 Scenario 3
108(1)
5.5.3.4 Scenario 4
109(1)
5.5.3.5 Scenario 7
109(1)
5.5.4 Summary of Simulation Results
110(1)
5.6 Costs
110(1)
5.7 Lessons for Survey Practice
111(1)
References
112(3)
6 Mixing Modes: Tradeoffs Among Coverage, Nonresponse, and Measurement Error
115(18)
Roger Tourangeau
6.1 Introduction
115(3)
6.2 The Effect of Offering a Choice of Modes
118(1)
6.3 Getting People to Respond Online
119(1)
6.4 Sequencing Different Modes of Data Collection
120(2)
6.5 Separating the Effects of Mode on Selection and Reporting
122(5)
6.5.1 Conceptualizing Mode Effects
122(1)
6.5.2 Separating Observation from Nonobservation Error
123(1)
6.5.2.1 Direct Assessment of Measurement Errors
123(1)
6.5.2.2 Statistical Adjustments
124(1)
6.5.2.3 Modeling Measurement Error
126(1)
6.6 Maximizing Comparability Versus Minimizing Error
127(2)
6.7 Conclusions
129(1)
References
130(3)
7 Mobile Web Surveys: A Total Survey Error Perspective
133(22)
Mick P. Couper
Christopher Antoun
Aigul Mavletova
7.1 Introduction
133(2)
7.2 Coverage
135(2)
7.3 Nonresponse
137(5)
7.3.1 Unit Nonresponse
137(2)
7.3.2 Breakoffs
139(1)
7.3.3 Completion Times
140(1)
7.3.4 Compliance with Special Requests
141(1)
7.4 Measurement Error
142(6)
7.4.1 Grouping of Questions
143(1)
7.4.1.1 Question-Order Effects
143(1)
7.4.1.2 Number of Items on a Page
143(1)
7.4.1.3 Grids versus Item-By-Item
143(2)
7.4.2 Effects of Question Type
145(1)
7.4.2.1 Socially Undesirable Questions
145(1)
7.4.2.2 Open-Ended Questions
146(1)
7.4.3 Response and Scale Effects
146(1)
7.4.3.1 Primacy Effects
146(1)
7.4.3.2 Slider Bars and Drop-Down Questions
147(1)
7.4.3.3 Scale Orientation
147(1)
7.4.4 Item Missing Data
148(1)
7.5 Links Between Different Error Sources
148(1)
7.6 The Future of Mobile Web Surveys
149(1)
References
150(5)
8 The Effects of a Mid-Data Collection Change in Financial Incentives on Total Survey Error in the National Survey of Family Growth: Results from a Randomized Experiment
155(24)
James Wagner
Brady T. West
Heidi Guyer
Paul Burton
Jennifer Kelley
Mick P. Couper
William D. Mosher
8.1 Introduction
155(1)
8.2 Literature Review: Incentives in Face-to-Face Surveys
156(3)
8.2.1 Nonresponse Rates
156(1)
8.2.2 Nonresponse Bias
157(1)
8.2.3 Measurement Error
158(1)
8.2.4 Survey Costs
159(1)
8.2.5 Summary
159(1)
8.3 Data and Methods
159(4)
8.3.1 NSFG Design: Overview
159(2)
8.3.2 Design of Incentive Experiment
161(1)
8.3.3 Variables
161(1)
8.3.4 Statistical Analysis
162(1)
8.4 Results
163(10)
8.4.1 Nonresponse Error
163(3)
8.4.2 Sampling Error and Costs
166(4)
8.4.3 Measurement Error
170(3)
8.5 Conclusion
173(2)
8.5.1 Summary
173(1)
8.5.2 Recommendations for Practice
174(1)
References
175(4)
9 A Total Survey Error Perspective on Surveys in Multinational, Multiregional, and Multicultural Contexts
179(24)
Beth-Ellen Pennell
Kristen Cibelli Hibben
Lars E. Lyberg
Peter Ph. Mohler
Gelaye Worku
9.1 Introduction
179(1)
9.2 TSE in Multinational, Multiregional, and Multicultural Surveys
180(4)
9.3 Challenges Related to Representation and Measurement Error Components in Comparative Surveys
184(8)
9.3.1 Representation Error
184(1)
9.3.1.1 Coverage Error
184(1)
9.3.1.2 Sampling Error
185(1)
9.3.1.3 Unit Nonresponse Error
186(1)
9.3.1.4 Adjustment Error
187(1)
9.3.2 Measurement Error
187(1)
9.3.2.1 Validity
188(1)
9.3.2.2 Measurement Error: The Response Process
188(1)
9.3.2.3 Processing Error
191(1)
9.4 QA and QC in 3MC Surveys
192(4)
9.4.1 The Importance of a Solid Infrastructure
192(1)
9.4.2 Examples of QA and QC Approaches Practiced in Some 3MC Surveys
193(2)
9.4.3 QA/QC Recommendations
195(1)
References
196(7)
10 Smartphone Participation in Web Surveys: Choosing Between the Potential for Coverage, Nonresponse, and Measurement Error
203(32)
Gregg Peterson
Jamie Griffin
John LaFrance
JiaoJiao Li
10.1 Introduction
203(3)
10.1.1 Focus on Smartphones
204(1)
10.1.2 Smartphone Participation: Web-Survey Design Decision Tree
204(1)
10.1.3 Chapter Outline
205(1)
10.2 Prevalence of Smartphone Participation in Web Surveys
206(3)
10.3 Smartphone Participation Choices
209(3)
10.3.1 Disallowing Smartphone Participation
209(2)
10.3.2 Discouraging Smartphone Participation
211(1)
10.4 Instrument Design Choices
212(4)
10.4.1 Doing Nothing
213(1)
10.4.2 Optimizing for Smartphones
213(3)
10.5 Device and Design Treatment Choices
216(2)
10.5.1 PC/Legacy versus Smartphone Designs
216(1)
10.5.2 PC/Legacy versus PC/New
216(1)
10.5.3 Smartphone/Legacy versus Smartphone/New
217(1)
10.5.4 Device and Design Treatment Options
217(1)
10.6 Conclusion
218(1)
10.7 Future Challenges and Research Needs
219(1)
Appendix 10.A: Data Sources
220(1)
Appendix 10.B: Smartphone Prevalence in Web Surveys
221(4)
Appendix 10.C: Screen Captures from Peterson et al. (2013) Experiment
225(4)
Appendix 10.D: Survey Questions Used in the Analysis of the Peterson et al. (2013) Experiment
229(2)
References
231(4)
11 Survey Research and the Quality of Survey Data Among Ethnic Minorities
235(18)
Joost Kappelhof
11.1 Introduction
235(1)
11.2 On the Use of the Terms Ethnicity and Ethnic Minorities
236(1)
11.3 On the Representation of Ethnic Minorities in Surveys
237(5)
11.3.1 Coverage of Ethnic Minorities
238(1)
11.3.2 Factors Affecting Nonresponse Among Ethnic Minorities
239(2)
11.3.3 Postsurvey Adjustment Issues Related to Surveys Among Ethnic Minorities
241(1)
11.4 Measurement Issues
242(2)
11.4.1 The Tradeoff When Using Response-Enhancing Measures
243(1)
11.5 Comparability, Timeliness, and Cost Concerns
244(3)
11.5.1 Comparability
245(1)
11.5.2 Timeliness and Cost Considerations
246(1)
11.6 Conclusion
247(1)
References
248(5)
Section 3: Data Collection and Data Processing Applications 253(86)
12 Measurement Error in Survey Operations Management: Detection, Quantification, Visualization, and Reduction
255(24)
Brad Edwards
Aaron Maitland
Sue Connor
12.1 TSE Background on Survey Operations
256(1)
12.2 Better and Better: Using Behavior Coding (CARIcode) and Paradata to Evaluate and Improve Question (Specification) Error and Interviewer Error
257(4)
12.2.1 CARI Coding at Westat
259(1)
12.2.2 CARI Experiments
260(1)
12.3 Field-Centered Design: Mobile App for Rapid Reporting and Management
261(4)
12.3.1 Mobile App Case Study
262(2)
12.3.2 Paradata Quality
264(1)
12.4 Faster and Cheaper: Detecting Falsification With GIS Tools
265(3)
12.5 Putting It All Together: Field Supervisor Dashboards
268(5)
12.5.1 Dashboards in Operations
268(1)
12.5.2 Survey Research Dashboards
269(1)
12.5.2.1 Dashboards and Paradata
269(1)
12.5.2.2 Relationship to TSE
269(1)
12.5.3 The Stovepipe Problem
270(1)
12.5.4 The Dashboard Solution
270(1)
12.5.5 Case Study
270(1)
12.5.5.1 Single Sign-On
270(1)
12.5.5.2 Alerts
271(1)
12.5.5.3 General Dashboard Design
271(2)
12.6 Discussion
273(2)
References
275(4)
13 Total Survey Error for Longitudinal Surveys
279(20)
Peter Lynn
Peter J. Lugtig
13.1 Introduction
279(1)
13.2 Distinctive Aspects of Longitudinal Surveys
280(1)
13.3 TSE Components in Longitudinal Surveys
281(4)
13.4 Design of Longitudinal Surveys from a TSE Perspective
285(5)
13.4.1 Is the Panel Study Fixed-Time or Open-Ended?
286(1)
13.4.2 Who To Follow Over Time?
286(1)
13.4.3 Should the Survey Use Interviewers or Be Self-Administered?
287(1)
13.4.4 How Long Should Between-Wave Intervals Be?
288(1)
13.4.5 How Should Longitudinal Instruments Be Designed?
289(1)
13.5 Examples of Tradeoffs in Three Longitudinal Surveys
290(4)
13.5.1 Tradeoff between Coverage, Sampling and Nonresponse Error in LISS Panel
290(2)
13.5.2 Tradeoff between Nonresponse and Measurement Error in BHPS
292(1)
13.5.3 Tradeoff between Specification and Measurement Error in SIPP
293(1)
13.6 Discussion
294(1)
References
295(4)
14 Text Interviews on Mobile Devices
299(20)
Frederick G. Conrad
Michael F. Schober
Christopher Antoun
Andrew L. Hupp
H. Yanna Yan
14.1 Texting as a Way of Interacting
300(3)
14.1.1 Properties and Affordances
300(1)
14.1.1.1 Stable Properties
300(1)
14.1.1.2 Properties That Vary across Devices and Networks
301(2)
14.2 Contacting and Inviting Potential Respondents through Text
303(1)
14.3 Texting as an Interview Mode
303(9)
14.3.1 Coverage and Sampling Error
304(3)
14.3.2 Nonresponse Error
307(1)
14.3.3 Measurement Error: Conscientious Responding and Disclosure in Texting Interviews
308(2)
14.3.4 Measurement Error: Interface Design for Texting Interviews
310(2)
14.4 Costs and Efficiency of Text Interviewing
312(2)
14.5 Discussion
314(1)
References
315(4)
15 Quantifying Measurement Errors in Partially Edited Business Survey Data
319(20)
Thomas Laitila
Karin Lindgren
Anders Norberg
Can Tongur
15.1 Introduction
319(1)
15.2 Selective Editing
320(5)
15.2.1 Editing and Measurement Error
320(1)
15.2.2 Definition and the General Idea of Selective Editing
321(1)
15.2.3 SELEKT
322(1)
15.2.4 Experiences from Implementations of SELEKT
323(2)
15.3 Effects of Errors Remaining After SE
325(3)
15.3.1 Sampling Below the Threshold: The Two-Step Procedure
326(1)
15.3.2 Randomness of Measurement Errors
326(1)
15.3.3 Modeling and Estimation of Measurement Errors
327(1)
15.3.4 Output Editing
328(1)
15.4 Case Study: Foreign Trade in Goods Within the European Union
328(6)
15.4.1 Sampling Below the Cutoff Threshold for Editing
330(1)
15.4.2 Results
330(2)
15.4.3 Comments on Results
332(2)
15.5 Editing Big Data
334(1)
15.6 Conclusions
335(1)
References
335(4)
Section 4: Evaluation and Improvement 339(148)
16 Estimating Error Rates in an Administrative Register and Survey Questions Using a Latent Class Model
341(18)
Daniel L. Oberski
16.1 Introduction
341(1)
16.2 Administrative and Survey Measures of Neighborhood
342(3)
16.3 A Latent Class Model for Neighborhood of Residence
345(3)
16.4 Results
348(6)
16.4.1 Model Fit
348(2)
16.4.2 Error Rate Estimates
350(4)
16.5 Discussion and Conclusion
354(1)
Appendix 16.A: Program Input and Data
355(2)
Acknowledgments
357(1)
References
357(2)
17 ASPIRE: An Approach for Evaluating and Reducing the Total Error in Statistical Products with Application to Registers and the National Accounts
359(28)
Paul P. Biemer
Dennis Trewin
Heather Bergdahl
Yingfu Xie
17.1 Introduction and Background
359(1)
17.2 Overview of ASPIRE
360(2)
17.3 The ASPIRE Model
362(5)
17.3.1 Decomposition of the TSE into Component Error Sources
362(2)
17.3.2 Risk Classification
364(1)
17.3.3 Criteria for Assessing Quality
364(1)
17.3.4 Ratings System
365(2)
17.4 Evaluation of Registers
367(4)
17.4.1 Types of Registers
367(1)
17.4.2 Error Sources Associated with Registers
368(2)
17.4.3 Application of ASPIRE to the TPR
370(1)
17.5 National Accounts
371(5)
17.5.1 Error Sources Associated with the NA
372(2)
17.5.2 Application of ASPIRE to the Quarterly Swedish NA
374(2)
17.6 A Sensitivity Analysis of GDP Error Sources
376(3)
17.6.1 Analysis of Computer Programming, Consultancy, and Related Services
376(2)
17.6.2 Analysis of Product Motor Vehicles
378(1)
17.6.3 Limitations of the Sensitivity Analysis
379(1)
17.7 Concluding Remarks
379(2)
Appendix 17.A: Accuracy Dimension Checklist
381(3)
References
384(3)
18 Classification Error in Crime Victimization Surveys: A Markov Latent Class Analysis
387(26)
Marcus E. Berzofsky
Paul P. Biemer
18.1 Introduction
387(2)
18.2 Background
389(3)
18.2.1 Surveys of Crime Victimization
389(1)
18.2.2 Error Evaluation Studies
390(2)
18.3 Analytic Approach
392(4)
18.3.1 The NCVS and Its Relevant Attributes
392(1)
18.3.2 Description of Analysis Data Set, Victimization Indicators, and Covariates
392(2)
18.3.3 Technical Description of the MLC Model and Its Assumptions
394(2)
18.4 Model Selection
396(3)
18.4.1 Model Selection Process
396(2)
18.4.2 Model Selection Results
398(1)
18.5 Results
399(5)
18.5.1 Estimates of Misclassification
399(1)
18.5.2 Estimates of Classification Error Among Demographic Groups
399(5)
18.6 Discussion and Summary of Findings
404(3)
18.6.1 High False-Negative Rates in the NCVS
404(1)
18.6.2 Decreasing Prevalence Rates Over Time
405(1)
18.6.3 Classification Error among Demographic Groups
405(1)
18.6.4 Recommendations for Analysts
406(1)
18.6.5 Limitations
406(1)
18.7 Conclusions
407(1)
Appendix 18.A: Derivation of the Composite False-Negative Rate
407(1)
Appendix 18.B: Derivation of the Lower Bound for False-Negative Rates from a Composite Measure
408(1)
Appendix 18.C: Examples of Latent GOLD Syntax
408(2)
References
410(3)
19 Using Doorstep Concerns Data to Evaluate and Correct for Nonresponse Error in a Longitudinal Survey
413(20)
Ting Yan
19.1 Introduction
413(3)
19.2 Data and Methods
416(2)
19.2.1 Data
416(1)
19.2.2 Analytic Use of Doorstep Concerns Data
416(2)
19.3 Results
418(12)
19.3.1 Unit Response Rates in Later Waves and Average Number of Don't Know and Refused Answers
418(3)
19.3.2 Total Nonresponse Bias and Nonresponse Bias Components
421(1)
19.3.3 Adjusting for Nonresponse
421(9)
19.4 Discussion
428
Acknowledgment
430(1)
References
430(3)
20 Total Survey Error Assessment for Sociodemographic Subgroups in the 2012 U.S. National Immunization Survey
433(24)
Kirk M. Wolter
Vicki J. Pineau
Benjamin Skalland
Wei Zeng
James A. Singleton
Meena Khare
Zhen Zhao
David Yankey
Philip J. Smith
20.1 Introduction
433(1)
20.2 TSE Model Framework
434(3)
20.3 Overview of the National Immunization Survey
437(3)
20.4 National Immunization Survey: Inputs for TSE Model
440(5)
20.4.1 Stage 1: Sample-Frame Coverage Error
441(2)
20.4.2 Stage 2: Nonresponse Error
443(1)
20.4.3 Stage 3: Measurement Error
444(1)
20.5 National Immunization Survey TSE Analysis
445(7)
20.5.1 TSE Analysis for the Overall Age-Eligible Population
445(3)
20.5.2 TSE Analysis for Sociodemographic Subgroups
448(4)
20.6 Summary
452(1)
References
453(4)
21 Establishing Infrastructure for the Use of Big Data to Understand Total Survey Error: Examples from Four Survey Research Organizations
Overview
457(10)
Brady T. West
Part 1 Big Data Infrastructure at the Institute for Employment Research (IAB)
458(1)
Antje Kirchner
Daniela Hochfellner
Stefan Bender
21.1.1 Dissemination of Big Data for Survey Research at the Institute for Employment Research
458(1)
21.1.2 Big Data Linkages at the IAB and Total Survey Error
459(1)
21.1.2.1 Individual-Level Data: Linked Panel "Labour Market and Social Security" Survey Data and Administrative Data (PASS-ADIAB)
459(1)
21.1.2.2 Establishment Data: The IAB Establishment Panel and Administrative Registers as Sampling Frames
461(2)
21.1.3 Outlook
463(1)
Acknowledgments
464(1)
References
464(3)
Part 2 Using Administrative Records Data at the U.S. Census Bureau: Lessons Learned from Two Research Projects Evaluating Survey Data
467(7)
Elizabeth M. Nichols
Mary H. Mulry
Jennifer Hunter Childs
21.2.1 Census Bureau Research and Programs
467(1)
21.2.2 Using Administrative Data to Estimate Measurement Error in Survey Reports
468(1)
21.2.2.1 Address and Person Matching Challenges
469(1)
21.2.2.2 Event Matching Challenges
470(1)
21.2.2.3 Weighting Challenges
471(1)
21.2.2.4 Record Update Challenges
471(1)
21.2.2.5 Authority and Confidentiality Challenges
472(1)
21.2.3 Summary
472(1)
Acknowledgments and Disclaimers
472(2)
References
472
Part 3 Statistics New Zealand's Approach to Making Use of Alternative Data Sources in a New Era of Integrated Data
474(4)
Anders Holmberg
Christine Bycroft
21.3.1 Data Availability and Development of Data Infrastructure in New Zealand
475(1)
21.3.2 Quality Assessment and Different Types of Errors
476(1)
21.3.3 Integration of Infrastructure Components and Developmental Streams
477(1)
References
478
Part 4 Big Data Serving Survey Research: Experiences at the University of Michigan Survey Research Center
478(11)
Grant Benson
Frost Hubbard
21.4.1 Introduction
478(1)
21.4.2 Marketing Systems Group (MSG)
479(1)
21.4.2.1 Using MSG Age Information to Increase Sampling Efficiency
480(1)
21.4.3 MCH Strategic Data (MCH)
481(1)
21.4.3.1 Assessing MCH's Teacher Frame with Manual Listing Procedures
482(2)
21.4.4 Conclusion
484(1)
Acknowledgments and Disclaimers
484(1)
References
484(3)
Section 5: Estimation and Analysis 487(88)
22 Analytic Error as an Important Component of Total Survey Error: Results from a Meta-Analysis
489(22)
Brady T. West
Joseph W. Sakshaug
Yumi Kim
22.1 Overview
489(1)
22.2 Analytic Error as a Component of TSE
490(2)
22.3 Appropriate Analytic Methods for Survey Data
492(3)
22.4 Methods
495(2)
22.4.1 Coding of Published Articles
495(1)
22.4.2 Statistical Analyses
495(2)
22.5 Results
497(8)
22.5.1 Descriptive Statistics
497(2)
22.5.2 Bivariate Analyses
499(3)
22.5.3 Trends in Error Rates Over Time
502(3)
22.6 Discussion
505(3)
22.6.1 Summary of Findings
505(1)
22.6.2 Suggestions for Practice
506(1)
22.6.3 Limitations
506(1)
22.6.4 Directions for Future Research
507(1)
Acknowledgments
508(1)
References
508(3)
23 Mixed-Mode Research: Issues in Design and Analysis
511(20)
Joop Hox
Edith de Leeuw
Thomas Klausch
23.1 Introduction
511(1)
23.2 Designing Mixed-Mode Surveys
512(2)
23.3 Literature Overview
514(2)
23.4 Diagnosing Sources of Error in Mixed-Mode Surveys
516(7)
23.4.1 Distinguishing Between Selection and Measurement Effects: The Multigroup Approach
516(1)
23.4.1.1 Multigroup Latent Variable Approach
516(1)
23.4.1.2 Multigroup Observed Variable Approach
520(1)
23.4.2 Distinguishing Between Selection and Measurement Effects: The Counterfactual or Potential Outcome Approach
521(1)
23.4.3 Distinguishing Between Selection and Measurement Effects: The Reference Survey Approach
522(1)
23.5 Adjusting for Mode Measurement Effects
523(4)
23.5.1 The Multigroup Approach to Adjust for Mode Measurement Effects
523(1)
23.5.1.1 Multigroup Latent Variable Approach
523(1)
23.5.1.2 Multigroup Observed Variable Approach
525(1)
23.5.2 The Counterfactual (Potential Outcomes) Approach to Adjust for Mode Measurement Effects
525(1)
23.5.3 The Reference Survey Approach to Adjust for Mode Measurement Effects
526(1)
23.6 Conclusion
527(1)
References
528(3)
24 The Effect of Nonresponse and Measurement Error on Wage Regression across Survey Modes: A Validation Study
531(26)
Antje Kirchner
Barbara Felderer
24.1 Introduction
531(1)
24.2 Nonresponse and Response Bias in Survey Statistics
532(2)
24.2.1 Bias in Regression Coefficients
532(1)
24.2.2 Research Questions
533(1)
24.3 Data and Methods
534(7)
24.3.1 Survey Data
534(1)
24.3.1.1 Sampling and Experimental Design
534(1)
24.3.1.2 Data Collection
535(1)
24.3.2 Administrative Data
536(1)
24.3.2.1 General Information
536(1)
24.3.2.2 Variable Selection
537(1)
24.3.2.3 Limitations
537(1)
24.3.2.4 Combined Data
537(1)
24.3.3 Bias in Univariate Statistics
538(1)
24.3.3.1 Bias: The Dependent Variable
538(1)
24.3.3.2 Bias: The Independent Variables
538(1)
24.3.4 Analytic Approach
539(2)
24.4 Results
541(5)
24.4.1 The Effect of Nonresponse and Measurement Error on Regression Coefficients
541(2)
24.4.2 Nonresponse Adjustments
543(3)
24.5 Summary and Conclusion
546(1)
Acknowledgments
547(1)
Appendix 24.A
548(1)
Appendix 24.B
549(5)
References
554(3)
25 Errors in Linking Survey and Administrative Data
557(18)
Joseph W. Sakshaug
Manfred Antoni
25.1 Introduction
557(2)
25.2 Conceptual Framework of Linkage and Error Sources
559(2)
25.3 Errors Due to Linkage Consent
561(4)
25.3.1 Evidence of Linkage Consent Bias
562(1)
25.3.2 Optimizing Linkage Consent Rates
563(1)
25.3.2.1 Placement of the Linkage Consent Request
563(1)
25.3.2.2 Wording of the Linkage Consent Request
563(1)
25.3.2.3 Active Versus Passive Consent
564(1)
25.3.2.4 Obtaining Linkage Consent in Longitudinal Surveys
564(1)
25.4 Erroneous Linkage with Unique Identifiers
565(2)
25.5 Erroneous Linkage with Nonunique Identifiers
567(1)
25.5.1 Common Nonunique Identifiers When Linking Data on People
567(1)
25.5.2 Common Nonunique Identifiers When Linking Data on Establishments
567(1)
25.6 Applications and Practical Guidance
568(3)
25.6.1 Applications
568(1)
25.6.2 Practical Guidance
569(1)
25.6.2.1 Initial Data Quality
570(1)
25.6.2.2 Preprocessing
570(1)
25.7 Conclusions and Take-Home Points
571(1)
References
571(4)
Index 575
Paul P. Biemer, PhD, is a distinguished fellow at RTI International and associate director of Survey Research and Development at the Odum Institute, University of North Carolina, USA.

Edith de Leeuw, PhD, is professor of survey methodology in the Department of Methodology and Statistics at Utrecht University, the Netherlands.

Stephanie Eckman, PhD, is a fellow at RTI International, USA.

Brad Edwards is vice president, director of Field Services, and deputy area director at Westat, USA.

Frauke Kreuter, PhD, is professor and director of the Joint Program in Survey Methodology, University of Maryland, USA; professor of statistics and methodology at the University of Mannheim, Germany; and head of the Statistical Methods Research Department at the Institute for Employment Research, Germany.

Lars E. Lyberg, PhD, is senior advisor at Inizio, Sweden.

N. Clyde Tucker, PhD, is principal survey methodologist at the American Institutes for Research, USA.

Brady T. West, PhD, is a research associate professor in the Survey Research Center, located within the Institute for Social Research at the University of Michigan (U-M), and also serves as a statistical consultant on the Consulting for Statistics, Computing and Analytics Research (CSCAR) team at U-M, USA.