
E-book: Experimental Methods in Survey Research: Techniques that Combine Random Sampling with Random Assignment

Edited by Courtney Kennedy (Pew Research Center in Washington, DC), Michael W. Traugott (University of Michigan), Allyson L. Holbrook (University of Illinois-Chicago), Brady T. West (University of Michigan-Ann Arbor), Edith D. de Leeuw (Utrecht University, The Netherlands), and Paul J. Lavrakas (Nielsen Media Research)
  • Format - PDF+DRM
  • Price: €118.50*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has supplied this e-book in encrypted form, which means that you must install special software to read it. You also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (a free application designed specifically for reading e-books; not to be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

A thorough and comprehensive guide to the theoretical, practical, and methodological approaches used in survey experiments across disciplines such as political science, health sciences, sociology, economics, psychology, and marketing

This book explores and explains the broad range of experimental designs embedded in surveys that use both probability and non-probability samples. It approaches the usage of survey-based experiments with a Total Survey Error (TSE) perspective, which provides insight into the strengths and weaknesses of the techniques used.

Experimental Methods in Survey Research: Techniques that Combine Random Sampling with Random Assignment addresses experiments on within-unit coverage, reducing nonresponse, question and questionnaire design, minimizing interview measurement bias, using adaptive design, trend data, vignettes, the analysis of data from survey experiments, and other topics, across social, behavioral, and marketing science domains.
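
To make the title's central idea concrete, here is a minimal illustrative sketch (not taken from the book) of pairing random sampling with random assignment: a simple random sample is drawn from a hypothetical frame, and the sampled cases are then randomly allocated to experimental conditions. The frame, sample size, and condition labels are invented for illustration only.

```python
import random

# Hypothetical illustration (not from the book): an experiment embedded
# in a probability survey combines random sampling with random assignment.
random.seed(42)

# A toy sampling frame of household IDs (hypothetical).
frame = [f"HH{i:04d}" for i in range(1, 1001)]

# Step 1: random sampling -- draw a simple random sample from the frame,
# which supports generalization to the frame population (external validity).
sample = random.sample(frame, k=100)

# Step 2: random assignment -- allocate the sampled cases to questionnaire
# versions, which supports causal comparisons between versions (internal validity).
shuffled = sample[:]
random.shuffle(shuffled)
treatment, control = shuffled[:50], shuffled[50:]

print(len(treatment), len(control))  # 50 50
```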

Each chapter begins with a description of the experimental method or application and its importance, followed by reference to relevant literature. At least one detailed original experimental case study then follows to illustrate the experimental method’s deployment, implementation, and analysis from a TSE perspective. The chapters conclude with theoretical and practical implications on the usage of the experimental method addressed. In summary, this book:

  • Fills a gap in the current literature by successfully combining the subjects of survey methodology and experimental methodology in an effort to maximize both internal validity and external validity
  • Offers a wide range of types of experimentation in survey research with in-depth attention to their various methodologies and applications
  • Is edited by internationally recognized experts in the field of survey research/methodology and in the usage of survey-based experimentation, featuring contributions from across a variety of disciplines in the social and behavioral sciences
  • Presents advances in the field of survey experiments, as well as relevant references in each chapter for further study
  • Includes more than 20 types of original experiments carried out within probability sample surveys
  • Addresses myriad practical and operational aspects for designing, implementing, and analyzing survey-based experiments by using a Total Survey Error perspective to address the strengths and weaknesses of each experimental technique and method

Experimental Methods in Survey Research: Techniques that Combine Random Sampling with Random Assignment is an ideal reference for survey researchers and practitioners in areas such as political science, health sciences, sociology, economics, psychology, public policy, data collection, data science, and marketing. It is also a very useful textbook for graduate-level courses on survey experiments and survey methodology.

Paul J. Lavrakas, PhD, is Senior Fellow at NORC at the University of Chicago, Adjunct Professor at the University of Illinois-Chicago, and Senior Methodologist at the Social Research Centre of the Australian National University and at the Office for Survey Research at Michigan State University.

Michael W. Traugott, PhD, is Research Professor in the Institute for Social Research at the University of Michigan.

Courtney Kennedy, PhD, is Director of Survey Research at Pew Research Center in Washington, DC.

Allyson L. Holbrook, PhD, is Professor of Public Administration and Psychology at the University of Illinois-Chicago.

Edith D. de Leeuw, PhD, is Professor of Survey Methodology in the Department of Methodology and Statistics at Utrecht University.

Brady T. West, PhD, is Research Associate Professor in the Survey Research Center at the University of Michigan-Ann Arbor.

List of Contributors xix
Preface xxv
Judith Tanur
About the Companion Website xxix
1 Probability Survey-Based Experimentation and the Balancing of Internal and External Validity Concerns
1(22)
Paul J. Lavrakas
Courtney Kennedy
Edith D. de Leeuw
Brady T. West
Allyson L. Holbrook
Michael W. Traugott
1.1 Validity Concerns in Survey Research
3(2)
1.2 Survey Validity and Survey Error
5(1)
1.3 Internal Validity
6(2)
1.4 Threats to Internal Validity
8(1)
1.4.1 Selection
8(1)
1.4.2 History
9(1)
1.4.3 Instrumentation
10(1)
1.4.4 Mortality
10(1)
1.5 External Validity
11(1)
1.6 Pairing Experimental Designs with Probability Sampling
12(1)
1.7 Some Thoughts on Conducting Experiments with Online Convenience Samples
12(3)
1.8 The Contents of this Book
15(1)
References
15(4)
Part I Introduction to Section on Within-Unit Coverage 19(48)
Paul J. Lavrakas
Edith D. de Leeuw
2 Within-Household Selection Methods: A Critical Review and Experimental Examination
23(24)
Jolene D. Smyth
Kristen Olson
Mathew Stange
2.1 Introduction
23(1)
2.2 Within-Household Selection and Total Survey Error
24(1)
2.3 Types of Within-Household Selection Techniques
24(1)
2.4 Within-Household Selection in Telephone Surveys
25(1)
2.5 Within-Household Selection in Self-Administered Surveys
26(1)
2.6 Methodological Requirements of Experimentally Studying Within-Household Selection Methods
27(3)
2.7 Empirical Example
30(1)
2.8 Data and Methods
31(3)
2.9 Analysis Plan
34(1)
2.10 Results
35(1)
2.10.1 Response Rates
35(1)
2.10.2 Sample Composition
35(1)
2.10.3 Accuracy
37(3)
2.11 Discussion and Conclusions
40(2)
References
42(5)
3 Measuring Within-Household Contamination: The Challenge of Interviewing More Than One Member of a Household
47(22)
Colm O'Muircheartaigh
Stephen Smith
Jaclyn S. Wong
3.1 Literature Review
47(3)
3.2 Data and Methods
50(1)
3.2.1 The Partner Study Experimental Design and Evaluation
52(1)
Investigators
53(1)
Field/Project Directors
53(1)
3.2.1.1 Effects on Response Rates and Coverage
54(1)
3.3 The Sequence of Analyses
55(1)
3.4 Results
55(1)
3.4.1 Results of Experimental Group Assignment
55(2)
3.5 Effect on Standard Errors of the Estimates
57(1)
3.6 Effect on Response Rates
58(1)
3.6.1 Response Rates Among Primes
58(1)
3.6.2 Response Rates Among Partners
60(1)
3.7 Effect on Responses
61(1)
3.7.1 Cases Considered in the Analyses
61(1)
3.7.2 Sequence of Analyses
61(1)
3.7.3 Dependent Variables of Interest
61(1)
3.7.4 Effect of Experimental Treatment on Responses of Primes
61(1)
3.7.4.1 Missingness
61(1)
3.7.4.2 Substantive Answers
62(1)
3.7.5 Changes in Response Patterns Over Time Among Primes
62(1)
3.7.5.1 Missingness
62(1)
3.7.5.2 Substantive Answers
62(1)
3.7.5.3 Social Desirability
62(1)
3.7.5.4 Negative Questions
62(1)
3.7.5.5 Positive Questions
63(1)
3.7.5.6 Effect on Responses of Partners
63(1)
3.7.6 Conclusion
63(1)
3.7.6.1 Design
63(1)
3.7.6.2 Statistical Considerations
64(1)
3.8 Substantive Results
64(1)
References
64(3)
Part II Survey Experiments with Techniques to Reduce Nonresponse 67(44)
Edith D. de Leeuw
Paul J. Lavrakas
4 Survey Experiments on Interactions and Nonresponse: A Case Study of Incentives and Modes
69(20)
A. Bianchi
S. Biffignandi
4.1 Introduction
69(1)
4.2 Literature Overview
70(3)
4.3 Case Study: Examining the Interaction Between Incentives and Mode
73(1)
4.3.1 The Survey
73(1)
4.3.2 The Experiment
73(1)
4.3.3 Data and Methods
75(1)
4.3.4 Results of the Experiment: Incentives and Mode Effects
77(1)
4.3.4.1 Effects on Participation
77(1)
4.3.4.2 Effects on Subgroups
78(1)
4.3.4.3 Effects on Data Quality
80(1)
4.3.4.4 Effects on Costs
81(2)
4.4 Concluding Remarks
83(2)
Acknowledgments
85(1)
References
86(3)
5 Experiments on the Effects of Advance Letters in Surveys
89(24)
Susanne Vogt
Jennifer A. Parsons
Linda K. Owens
Paul J. Lavrakas
5.1 Introduction
89(1)
5.1.1 Why Advance Letters?
89(1)
5.1.2 What We Know About Effects of Advance Letters?
90(1)
5.1.2.1 Outcome Rates
90(1)
5.1.2.2 Nonresponse Bias
91(1)
5.1.2.3 Cost Effectiveness
91(1)
5.1.2.4 Properties of Advance Letters
93(1)
5.2 State of the Art on Experimentation on the Effect of Advance Letters
93(2)
5.3 Case Studies: Experimental Research on the Effect of Advance Letters
95(1)
5.4 Case Study I: Violence Against Men in Intimate Relationships
96(1)
5.4.1 Sample Design
96(1)
5.4.2 Fieldwork
97(1)
5.4.3 Sampling Bias
97(1)
5.4.4 Analytical Strategies
97(1)
5.4.5 Results
98(1)
5.4.5.1 Effect of Advance Letters on Outcome Rates and Reasons for Refusals
98(1)
5.4.5.2 Recruitment Effort
98(1)
5.4.5.3 Effect of Advance Letters on Reporting on Sensitive Topics
98(1)
5.4.6 Discussion
99(1)
5.5 Case Study II: The Neighborhood Crime and Justice Study
100(1)
5.5.1 Study Design
100(1)
5.5.2 Experimental Design
100(1)
5.5.3 Fieldwork
101(1)
5.5.4 Analytical Strategies
102(1)
5.5.5 Results
103(1)
5.5.6 Discussion
103(3)
5.6 Discussion
106(1)
5.7 Research Agenda for the Future
107(1)
References
108(3)
Part III Overview of the Section on the Questionnaire 111(84)
Allyson Holbrook
Michael W. Traugott
6 Experiments on the Design and Evaluation of Complex Survey Questions
113(18)
Paul Beatty
Carol Cosenza
Floyd J. Fowler Jr
6.1 Question Construction: Dangling Qualifiers
115(2)
6.2 Overall Meanings of Question Can Be Obscured by Detailed Words
117(2)
6.3 Are Two Questions Better than One?
119(2)
6.4 The Use of Multiple Questions to Simplify Response Judgments
121(1)
6.5 The Effect of Context or Framing on Answers
122(1)
6.5.1 Alcohol
123(1)
6.5.2 Guns
123(1)
6.6 Do Questionnaire Effects Vary Across Sub-groups of Respondents?
124(2)
6.7 Discussion
126(2)
References
128(3)
7 Impact of Response Scale Features on Survey Responses to Behavioral Questions
131(20)
Florian Keusch
Ting Yan
7.1 Introduction
131(1)
7.2 Previous Work on Scale Design Features
132(1)
7.2.1 Scale Direction
132(1)
7.2.2 Scale Alignment
133(1)
7.2.3 Verbal Scale Labels
133(1)
7.3 Methods
134(2)
7.4 Results
136(5)
7.5 Discussion
141(2)
Acknowledgment
143(1)
7.A Question Wording
143(1)
7.A.1 Experimental Questions (One Question Per Screen)
143(1)
7.A.2 Validation Questions (One Per Screen)
144(1)
7.A.3 GfK Profile Questions (Not Part of the Questionnaire)
145(1)
7.B Test of Interaction Effects
145(1)
References
146(5)
8 Mode Effects Versus Question Format Effects: An Experimental Investigation of Measurement Error Implemented in a Probability-Based Online Panel
151(16)
Edith D. de Leeuw
Joop Hox
Annette Scherpenzeel
8.1 Introduction
151(1)
8.1.1 Online Surveys: Advantages and Challenges
151(1)
8.1.2 Probability-Based Online Panels
152(1)
8.1.3 Mode Effects and Measurement
152(1)
8.2 Experiments and Probability-Based Online Panels
153(1)
8.2.1 Online Experiments in Probability-Based Panels: Advantages
153(1)
8.2.2 Examples of Experiments in the LISS Panel
154(1)
8.3 Mixed-Mode Question Format Experiments
154(1)
8.3.1 Theoretical Background
154(1)
8.3.2 Method
156(1)
8.3.2.1 Respondents
156(1)
8.3.2.2 Experimental Design
156(1)
8.3.2.3 Questionnaire
157(1)
8.3.3 Analyses and Results
157(1)
8.3.3.1 Response and Weighting
157(1)
8.3.3.2 Reliability
157(1)
8.3.3.3 Response Styles
158(3)
8.4 Summary and Discussion
161(1)
Acknowledgments
162(1)
References
162(5)
9 Conflicting Cues: Item Nonresponse and Experimental Mortality
167(14)
David J. Ciuk
Berwood A. Yost
9.1 Introduction
167(1)
9.2 Survey Experiments and Item Nonresponse
167(3)
9.3 Case Study: Conflicting Cues and Item Nonresponse
170(1)
9.4 Methods
170(1)
9.5 Issue Selection
171(1)
9.6 Experimental Conditions and Measures
172(1)
9.7 Results
173(1)
9.8 Addressing Item Nonresponse in Survey Experiments
174(4)
9.9 Summary
178(1)
References
179(2)
10 Application of a List Experiment at the Population Level: The Case of Opposition to Immigration in the Netherlands
181(16)
Mathew J. Creighton
Philip S. Brenner
Peter Schmidt
Diana Zavala-Rojas
10.1 Fielding the Item Count Technique (ICT)
183(2)
10.2 Analyzing the Item Count Technique (ICT)
185(1)
10.3 An Application of ICT: Attitudes Toward Immigrants in the Netherlands
186(1)
10.3.1 Data - The Longitudinal Internet Studies for the Social Sciences Panel
186(1)
10.3.2 Implicit and Explicit Agreement
187(1)
10.3.3 Social Desirability Bias (SDB)
188(2)
10.4 Limitations of ICT
190(2)
References
192(3)
Part IV Introduction to Section on Interviewers 195(50)
Brady T. West
Edith D. de Leeuw
11 Race- and Ethnicity-of-Interviewer Effects
197(28)
Allyson L. Holbrook
Timothy P. Johnson
Maria Krysan
11.1 Introduction
197(1)
11.1.1 The Problem
197(1)
11.1.2 Theoretical Explanations
198(1)
11.1.2.1 Survey Participation
198(1)
11.1.2.2 Survey Responses
199(1)
11.1.3 Existing Evidence
199(1)
11.1.3.1 Survey Participation
199(1)
11.1.3.2 Survey Responses in Face-to-Face Interviews
200(1)
11.1.3.3 Survey Responses in Telephone Interviews
202(1)
11.1.3.4 Survey Responses in Surveys that Are Not Interviewer-Administered
204(1)
11.1.3.5 Other Race/Ethnicity of Interviewer Research
204(1)
11.2 The Current Research
205(2)
11.3 Respondents and Procedures
207(1)
11.4 Measures
207(1)
11.4.1 Interviewer Race/Ethnicity
207(1)
11.4.2 Sample Frame Variables
208(1)
11.4.2.1 Disposition Codes
208(1)
11.4.2.2 Neighborhood Level Variables
208(1)
11.4.3 Respondent Variables
208(1)
11.4.3.1 Race/Ethnicity
208(1)
11.4.3.2 Perceived Race/Ethnicity
209(1)
11.4.3.3 Confidence
209(1)
11.4.3.4 Male
209(1)
11.4.3.5 Age
209(1)
11.4.3.6 Years of Education
209(1)
11.4.3.7 Household Income
209(1)
11.4.3.8 Political Ideology
209(1)
11.4.3.9 Affirmative Action
209(1)
11.4.3.10 Immigration Policy
210(1)
11.4.3.11 Attitudes Toward Obama
210(1)
11.5 Analysis
210(1)
11.5.1 Sample Frame Analyses
210(1)
11.5.2 Survey Sample Analyses
211(1)
11.6 Results
211(1)
11.6.1 Sample Frame
211(1)
11.6.1.1 Survey Participation or Response
212(1)
11.6.1.2 Refusals
212(1)
11.6.1.3 Contact
213(1)
11.6.2 Perceptions of Interviewer Race
214(1)
11.6.3 Survey Responses
215(4)
11.7 Discussion and Conclusion
219(1)
11.7.1 Limitations
220(1)
11.7.2 Future Work
221(1)
References
221(4)
12 Investigating Interviewer Effects and Confounds in Survey-Based Experimentation
225(22)
Paul J. Lavrakas
Jenny Kelly
Colleen McClain
12.1 Studying Interviewer Effects Using a Post hoc Experimental Design
226(1)
12.1.1 An Example of a Post hoc Investigation of Interviewer Effects
228(2)
12.2 Studying Interviewer Effects Using A priori Experimental Designs
230(1)
12.2.1 Assignment of Experimental Treatments to Interviewers
230(2)
12.3 An Original Experiment on the Effects of Interviewers Administering Only One Treatment vs. Interviewers Administering Multiple Treatments
232(1)
12.3.1 The Design of Our Survey-Based Experiment
232(1)
12.3.2 Analytic Approach
233(1)
12.3.3 Experimental Design Findings
234(5)
12.4 Discussion
239(3)
References
242(3)
Part V Introduction to Section on Adaptive Design 245(46)
Courtney Kennedy
Brady T. West
13 Using Experiments to Assess Interactive Feedback That Improves Response Quality in Web Surveys
247(28)
Tanja Kunz
Marek Fuchs
13.1 Introduction
247(1)
13.1.1 Response Quality in Web Surveys
247(1)
13.1.2 Dynamic Interventions to Enhance Response Quality in Web Surveys
249(2)
13.2 Case Studies - Interactive Feedback in Web Surveys
251(7)
13.3 Methodological Issues in Experimental Visual Design Studies
258(1)
13.3.1 Between-Subjects Design
258(1)
13.3.2 Field Experiments vs. Lab Experiments
259(1)
13.3.3 Replication of Experiments
259(1)
13.3.4 Randomization in the Net Sample
260(1)
13.3.5 Independent Randomization
262(1)
13.3.6 Effects of Randomization on the Composition of the Experimental Conditions
262(1)
13.3.7 Differential Unit Nonresponse
263(1)
13.3.8 Nonresponse Bias and Generalizability
264(1)
13.3.9 Breakoff as an Indicator of Motivational Confounding
264(1)
13.3.10 Manipulation Check
265(1)
13.3.11 A Direction for Future Research: Causal Mechanisms
268(1)
References
269(6)
14 Randomized Experiments for Web-Mail Surveys Conducted Using Address-Based Samples of the General Population
275(18)
Z. Tuba Suzer-Gurtekin
Mahmoud Elkasabi
James M. Lepkowski
Mingnan Liu
Richard Curtin
14.1 Introduction
275(1)
14.1.1 Potential Advantages of Web-Mail Designs
275(1)
14.1.2 Design Considerations with Web-Mail Surveys
276(1)
14.1.3 Empirical Findings from General Population Web-Mail Studies
277(1)
14.2 Study Design and Methods
278(1)
14.2.1 Experimental Design
278(1)
14.2.2 Questionnaire Design
279(1)
14.2.3 Weights
280(1)
14.2.4 Statistical Methods
280(1)
14.3 Results
281(4)
14.4 Discussion
285(2)
References
287(4)
Part VI Introduction to Section on Special Surveys 291(36)
Michael W. Traugott
Edith D. de Leeuw
15 Mounting Multiple Experiments on Longitudinal Social Surveys: Design and Implementation Considerations
293(16)
Peter Lynn
Annette Jäckle
15.1 Introduction and Overview
293(1)
15.2 Types of Experiments that Can Be Mounted in a Longitudinal Survey
294(1)
15.3 Longitudinal Experiments and Experiments in Longitudinal Surveys
295(1)
15.4 Longitudinal Surveys that Serve as Platforms for Experimentation
296(2)
15.5 The Understanding Society Innovation Panel
298(1)
15.6 Avoiding Confounding of Experiments
299(2)
15.7 Allocation Procedures
301(1)
15.7.1 Assignment Within or Between Households and Interviewers?
302(1)
15.7.2 Switching Treatments Between Waves
303(1)
15.8 Refreshment Samples
304(1)
15.9 Discussion
305(1)
15.A Appendix: Stata Syntax to Produce Table 15.3 Treatment Allocations
306(1)
References
306(3)
16 Obstacles and Opportunities for Experiments in Establishment Surveys Supporting Official Statistics
309(20)
Diane K. Willimack
Jaki S. McCarthy
16.1 Introduction
309(1)
16.2 Some Key Differences Between Household and Establishment Surveys
310(1)
16.2.1 Unit of Analysis
310(1)
16.2.2 Statistical Products
310(1)
16.2.3 Highly Skewed or Small Specialized Target Populations
310(1)
16.2.4 Sample Designs and Response Burden
311(1)
16.2.5 Question Complexity and Availability of Data in Records
311(1)
16.2.6 Labor-intensive Response Process
311(1)
16.2.7 Mandatory Reporting
312(1)
16.2.8 Availability of Administrative and Auxiliary Data
312(1)
16.3 Existing Literature Featuring Establishment Survey Experiments
312(1)
16.3.1 Target Populations
312(1)
16.3.2 Research Purpose
313(1)
16.3.3 Complexity and Context
313(1)
16.4 Key Considerations for Experimentation in Establishment Surveys
314(1)
16.4.1 Embed the Experiment Within Production
314(1)
16.4.2 Conduct Standalone Experiments
315(1)
16.4.3 Use Nonsample Cases
315(1)
16.4.4 Exclude Large Establishments
315(1)
16.4.5 Employ Risk Mitigation Strategies and Alternatives
316(2)
16.5 Examples of Experimentation in Establishment Surveys
318(1)
16.5.1 Example 1: Adaptive Design in the Agricultural Resource Management Survey
318(1)
16.5.1.1 Experimental Strategy: Embed the Experiment Within the Production Survey
318(1)
16.5.1.2 Experimental Strategy: Exclude Large Units from Experimental Design
319(1)
16.5.2 Example 2: Questionnaire Testing for the Census of Agriculture
319(1)
16.5.2.1 Experimental Strategy: Conduct a Standalone Experiment
319(1)
16.5.2.2 Experimental Strategy: Exclude Large Establishments
320(1)
16.5.3 Example 3: Testing Contact Strategies for the Economic Census
320(1)
16.5.3.1 Experimental Strategy: Embed Experiments in Production Surveys
321(1)
16.5.4 Example 4: Testing Alternative Question Styles for the Economic Census
321(1)
16.5.4.1 Experimental Strategy: Use Nonproduction Cases Alongside a Production Survey
321(1)
16.5.5 Example 5: Testing Adaptive Design Alternatives in the Annual Survey of Manufactures
322(1)
16.5.5.1 Experimental Strategy: Exclude Large Units
322(1)
16.5.5.2 Experimental Strategy: Embed the Experiment Within the Production Survey
322(1)
16.6 Discussion and Concluding Remarks
323(1)
Acknowledgments
324(1)
References
324(3)
Part VII Introduction to Section on Trend Data 327(42)
Michael W. Traugott
Paul J. Lavrakas
17 Tracking Question-Wording Experiments Across Time in the General Social Survey, 1984-2014
329(14)
Tom W. Smith
Jaesok Son
17.1 Introduction
329(1)
17.2 GSS Question-Wording Experiment on Spending Priorities
330(1)
17.3 Experimental Analysis
330(8)
17.4 Summary and Conclusion
338(1)
17.A National Spending Priority Items
339(1)
References
340(3)
18 Survey Experiments and Changes in Question Wording in Repeated Cross-Sectional Surveys
343(28)
Allyson L. Holbrook
David Sterrett
Andrew W. Crosby
Marina Stavrakantonaki
Xiaoheng Wang
Tianshu Zhao
Timothy P. Johnson
18.1 Introduction
343(1)
18.2 Background
344(1)
18.2.1 Repeated Cross-Sectional Surveys
344(1)
18.2.2 Reasons to Change Question Wording in Repeated Cross-Sectional Surveys
345(1)
18.2.2.1 Methodological Advances
345(1)
18.2.2.2 Changes in Language and Meaning
346(1)
18.2.3 Current Practices and Expert Insight
346(1)
18.3 Two Case Studies
347(1)
18.3.1 ANES
347(1)
18.3.1.1 Description of Question Wording Experiment
347(1)
18.3.1.2 Data Collection
348(1)
18.3.1.3 Measures
349(1)
18.3.1.4 Analysis
352(1)
18.3.1.5 Results
352(1)
18.3.1.6 Implications for Trend Analysis
355(1)
18.3.2 General Social Survey (GSS)
356(1)
18.3.2.1 Purpose of Experiment
356(1)
18.3.2.2 Data Collection
356(1)
18.3.2.3 Measures
356(1)
18.3.2.4 Analysis
357(1)
18.3.2.5 Results
360(2)
18.4 Implications and Conclusions
362(1)
18.4.1 Limitations and Suggestions for Future Research
363(1)
Acknowledgments
364(1)
References
364(5)
Part VIII Vignette Experiments in Surveys 369(48)
Allyson Holbrook
Paul J. Lavrakas
19 Are Factorial Survey Experiments Prone to Survey Mode Effects?
371(22)
Katrin Auspurg
Thomas Hinz
Sandra Walzenbach
19.1 Introduction
371(1)
19.2 Idea and Scope of Factorial Survey Experiments
372(1)
19.3 Mode Effects
373(1)
19.3.1 Typical Respondent Samples and Modes in Factorial Survey Experiments
373(1)
19.3.2 Design Features and Mode Effects
374(1)
19.3.2.1 Selection of Dimensions and Levels
374(1)
19.3.2.2 Generation of Questionnaire Versions and Allocation to Respondents
375(1)
19.3.2.3 Number of Vignettes per Respondent and Response Scales
376(1)
19.3.2.4 Interviewer Effects and Social Desirability Bias
377(1)
19.3.3 Summing Up: Mode Effects
378(1)
19.4 Case Study
378(1)
19.4.1 Data and Methods
378(1)
19.4.1.1 Survey Details
378(1)
19.4.1.2 Analysis Techniques
380(1)
19.4.2 Mode Effects Regarding Nonresponse and Measurement Errors
380(1)
19.4.2.1 Nonresponse
380(1)
19.4.2.2 Response Quality
382(3)
19.4.3 Do Data Collection Mode Effects Impact Substantive Results?
385(1)
19.4.3.1 Point Estimates and Significance Levels
385(1)
19.4.3.2 Measurement Efficiency and Sources of Unexplained Variance
387(1)
19.5 Conclusion
388(2)
References
390(3)
20 Validity Aspects of Vignette Experiments: Expected "What-If" Differences Between Reports of Behavioral Intentions and Actual Behavior
393(26)
Stefanie Eifler
Knut Petzold
20.1 Outline of the Problem
393(1)
20.1.1 Problem
393(1)
20.1.2 Vignettes, the Total Survey Error Framework, and Cognitive Response Process Perspectives
394(1)
20.1.3 State of the Art
395(1)
20.1.4 Research Strategy for Evaluating the External Validity of Vignette Experiments
397(2)
20.2 Research Findings from Our Experimental Work
399(1)
20.2.1 Experimental Design for Both Parts of the Study
399(1)
20.2.2 Data Collection for Both Parts of the Study
400(1)
20.2.2.1 Field Experiment
400(1)
20.2.2.2 Vignette Experiment
402(3)
20.2.3 Results for Each Part of the Study
405(1)
20.2.3.1 Field Experiment
405(1)
20.2.3.2 Vignette Experiment
405(1)
20.2.4 Systematic Comparison of the Two Experiments
406(1)
20.2.4.1 Assumptions and Precautions About the Comparability of Settings, Treatments, and Outcomes
406(1)
20.2.4.2 Assumptions and Precautions About the Comparability of Units
407(1)
20.2.4.3 Results
408(3)
20.3 Discussion
411(2)
References
413(4)
Part IX Introduction to Section on Analysis 417(84)
Brady T. West
Courtney Kennedy
21 Identities and Intersectionality: A Case for Purposive Sampling in Survey-Experimental Research
419(16)
Samara Klar
Thomas J. Leeper
21.1 Introduction
419(1)
21.2 Common Techniques for Survey Experiments on Identity
420(1)
21.2.1 Purposive Samples
422(1)
21.2.2 The Question of Representativeness: An Answer in Replication
423(1)
21.2.3 Response Biases and Determination of Eligibility
425(1)
21.3 How Limited Are Representative Samples for Intersectionality Research?
426(1)
21.3.1 Example from TESS
426(1)
21.3.2 Example from a Community Association-Based LGBTQ Sample
427(1)
21.3.3 Example from Hispanic Women Sample in Tucson, AZ
430(1)
21.4 Conclusions and Discussion
430(1)
Author Biographies
431(1)
References
431(4)
22 Designing Probability Samples to Study Treatment Effect Heterogeneity
435(22)
Elizabeth Tipton
David S. Yeager
Ronaldo Iachan
Barbara Schneider
22.1 Introduction
435(1)
22.1.1 Probability Samples Facilitate Estimation of Average Treatment Effects
437(1)
22.1.2 Treatment Effect Heterogeneity in Experiments
438(1)
22.1.3 Estimation of Subgroup Treatment Effects
440(1)
22.1.3.1 Problems with Sample Selection Bias
440(1)
22.1.3.2 Problems with Small Sample Sizes
440(1)
22.1.4 Moderator Analyses for Understanding Causal Mechanisms
441(1)
22.1.4.1 Problems of Confounder Bias in Probability Samples
442(1)
22.1.4.2 Additional Confounder Bias Problems with Nonprobability Samples
442(1)
22.1.4.3 Problems with Statistical Power
443
22.1.5 Stratification for Studying Heterogeneity
411(1)
22.1.5.1 Overview
444(1)
22.1.5.2 Operationalizing Moderators
445(1)
22.1.5.3 Orthogonalizing Moderators
445(1)
22.1.5.4 Stratum Creation and Allocation
445(1)
22.2 Nesting a Randomized Treatment in a National Probability Sample: The NSLM
446(1)
22.2.1 Design of the NSLM
446(1)
22.2.1.1 Population Frame
446(1)
22.2.1.2 Two-Stage Design
447(1)
22.2.1.3 Nonresponse
447(1)
22.2.1.4 Experimental Design
447(1)
22.2.1.5 Treatment
447(1)
22.2.1.6 Outcomes
447(1)
22.2.2 Developing a Theory of Treatment Effect Heterogeneity
447(1)
22.2.2.1 Goals
447(1)
22.2.2.2 Potential Moderators
447(1)
22.2.2.3 Operationalizing School Achievement Level
448(1)
22.2.2.4 Orthogonalizing for Minority Composition
449(1)
22.2.3 Developing the Final Design
449(1)
22.2.3.1 Stratum Creation
449(1)
22.2.3.2 Stratum Allocation
449(1)
22.2.3.3 Implementing Sample Selection
451(1)
22.2.3.4 Final Sample
451(1)
22.3 Discussion and Conclusions
451(1)
22.3.1 Estimands
452(1)
22.3.2 Contextual Effects Matter
452(1)
22.3.3 Plan for Moderators
452(1)
22.3.4 Designing Strata
452(1)
22.3.5 Power Concerns
453(1)
Acknowledgments
453(1)
References
453(4)
23 Design-Based Analysis of Experiments Embedded in Probability Samples
457(24)
Jan A. van den Brakel
23.1 Introduction
457(1)
23.2 Design of Embedded Experiments
458(2)
23.3 Design-Based Inference for Embedded Experiments with One Treatment Factor
460(1)
23.3.1 Measurement Error Model and Hypothesis Testing
460(1)
23.3.2 Point and Variance Estimation
462(1)
23.3.3 Wald Test
464(1)
23.3.4 Special Cases
464(1)
23.3.5 Hypotheses About Ratios and Totals
465(1)
23.4 Analysis of Experiments with Clusters of Sampling Units as Experimental Units
466(2)
23.5 Factorial Designs
468(1)
23.5.1 Designing Embedded K x L Factorial Designs
468(1)
23.5.2 Testing Hypotheses About Main Effects and Interactions in K x L Embedded Factorial Designs
469(1)
23.5.3 Special Cases
471(1)
23.5.4 Generalizations
471(1)
23.6 A Mixed-Mode Experiment in the Dutch Crime Victimization Survey
472(1)
23.6.1 Introduction
472(1)
23.6.2 Survey Design and Experimental Design
472(1)
23.6.3 Software
474(1)
23.6.4 Results
474(3)
23.7 Discussion
477(1)
Acknowledgments
478(1)
References
478(3)
24 Extending the Within-Persons Experimental Design: The Multitrait-Multierror (MTME) Approach
481(20)
Alexandru Cernat
Daniel L. Oberski
24.1 Introduction
481(1)
24.2 The Multitrait-Multierror (MTME) Framework
482(1)
24.2.1 A Simple but Defective Design: The Interview-Reinterview from the MTME Perspective
483(1)
24.2.2 Designs Estimating Stochastic Survey Errors
485(2)
24.3 Designing the MTME Experiment
487(1)
24.3.1 What are the main types of measurement errors that should be estimated?
487(1)
24.3.2 How can the questions be manipulated in order to estimate these types of error?
487(1)
24.3.3 Is it possible to manipulate the form pair order?
488(1)
24.3.4 Is there enough power to estimate the model?
488(1)
24.3.5 How can data collection minimize memory effects?
489(1)
24.4 Statistical Estimation for the MTME Approach
489(2)
24.5 Measurement Error in Attitudes Toward Migrants in the UK
491(1)
24.5.1 Estimating Four Stochastic Error Variances Using MTME
491(1)
24.5.1.1 What are the main types of measurement errors that should be estimated?
491(1)
24.5.1.2 How can the questions be manipulated in order to estimate these types of errors?
492(1)
24.5.1.3 Is it possible to manipulate the form pair order?
492(1)
24.5.1.4 Is there enough power to estimate the model?
492(1)
24.5.1.5 How can data collection minimize memory effects?
493(1)
24.5.2 Estimating MTME with four stochastic errors
493(1)
24.6 Results
494(3)
24.7 Conclusions and Future Research Directions
497(1)
Acknowledgments
498(1)
References
498(3)
Index 501