List of Contributors xix
Preface xxv
About the Companion Website xxix
|
1 Probability Survey-Based Experimentation and the Balancing of Internal and External Validity Concerns 1 | (22)
1.1 Validity Concerns in Survey Research 3 | (2)
1.2 Survey Validity and Survey Error 5 | (1)
6 | (2)
1.4 Threats to Internal Validity 8 | (1)
8 | (1)
9 | (1)
10 | (1)
10 | (1)
11 | (1)
1.6 Pairing Experimental Designs with Probability Sampling 12 | (1)
1.7 Some Thoughts on Conducting Experiments with Online Convenience Samples 12 | (3)
1.8 The Contents of this Book 15 | (1)
15 | (4)

Part I Introduction to Section on Within-Unit Coverage 19 | (48)
|
|
|
2 Within-Household Selection Methods: A Critical Review and Experimental Examination 23 | (24)
23 | (1)
2.2 Within-Household Selection and Total Survey Error 24 | (1)
2.3 Types of Within-Household Selection Techniques 24 | (1)
2.4 Within-Household Selection in Telephone Surveys 25 | (1)
2.5 Within-Household Selection in Self-Administered Surveys 26 | (1)
2.6 Methodological Requirements of Experimentally Studying Within-Household Selection Methods 27 | (3)
30 | (1)
31 | (3)
34 | (1)
35 | (1)
35 | (1)
2.10.2 Sample Composition 35 | (1)
37 | (3)
2.11 Discussion and Conclusions 40 | (2)
42 | (5)
|
3 Measuring Within-Household Contamination: The Challenge of Interviewing More Than One Member of a Household 47 | (22)
47 | (3)
50 | (1)
3.2.1 The Partner Study Experimental Design and Evaluation 52 | (1)
53 | (1)
53 | (1)
3.2.1.1 Effects on Response Rates and Coverage 54 | (1)
3.3 The Sequence of Analyses 55 | (1)
55 | (1)
3.4.1 Results of Experimental Group Assignment 55 | (2)
3.5 Effect on Standard Errors of the Estimates 57 | (1)
3.6 Effect on Response Rates 58 | (1)
3.6.1 Response Rates Among Primes 58 | (1)
3.6.2 Response Rates Among Partners 60 | (1)
61 | (1)
3.7.1 Cases Considered in the Analyses 61 | (1)
3.7.2 Sequence of Analyses 61 | (1)
3.7.3 Dependent Variables of Interest 61 | (1)
3.7.4 Effect of Experimental Treatment on Responses of Primes 61 | (1)
61 | (1)
3.7.4.2 Substantive Answers 62 | (1)
3.7.5 Changes in Response Patterns Over Time Among Primes 62 | (1)
62 | (1)
3.7.5.2 Substantive Answers 62 | (1)
3.7.5.3 Social Desirability 62 | (1)
3.7.5.4 Negative Questions 62 | (1)
3.7.5.5 Positive Questions 63 | (1)
3.7.5.6 Effect on Responses of Partners 63 | (1)
63 | (1)
63 | (1)
3.7.6.2 Statistical Considerations 64 | (1)
64 | (1)
64 | (3)

Part II Survey Experiments with Techniques to Reduce Nonresponse 67 | (44)
|
|
|
4 Survey Experiments on Interactions and Nonresponse: A Case Study of Incentives and Modes 69 | (20)
69 | (1)
70 | (3)
4.3 Case Study: Examining the Interaction Between Incentives and Mode 73 | (1)
73 | (1)
73 | (1)
75 | (1)
4.3.4 Results of the Experiment: Incentives and Mode Effects 77 | (1)
4.3.4.1 Effects on Participation 77 | (1)
4.3.4.2 Effects on Subgroups 78 | (1)
4.3.4.3 Effects on Data Quality 80 | (1)
81 | (2)
83 | (2)
85 | (1)
86 | (3)
|
5 Experiments on the Effects of Advance Letters in Surveys 89 | (24)
89 | (1)
5.1.1 Why Advance Letters? 89 | (1)
5.1.2 What We Know About the Effects of Advance Letters 90 | (1)
90 | (1)
91 | (1)
5.1.2.3 Cost Effectiveness 91 | (1)
5.1.2.4 Properties of Advance Letters 93 | (1)
5.2 The State of the Art in Experimentation on the Effect of Advance Letters 93 | (2)
5.3 Case Studies: Experimental Research on the Effect of Advance Letters 95 | (1)
5.4 Case Study I: Violence Against Men in Intimate Relationships 96 | (1)
96 | (1)
97 | (1)
97 | (1)
5.4.4 Analytical Strategies 97 | (1)
98 | (1)
5.4.5.1 Effect of Advance Letters on Outcome Rates and Reasons for Refusals 98 | (1)
5.4.5.2 Recruitment Effort 98 | (1)
5.4.5.3 Effect of Advance Letters on Reporting on Sensitive Topics 98 | (1)
99 | (1)
5.5 Case Study II: The Neighborhood Crime and Justice Study 100 | (1)
100 | (1)
5.5.2 Experimental Design 100 | (1)
101 | (1)
5.5.4 Analytical Strategies 102 | (1)
103 | (1)
103 | (3)
106 | (1)
5.7 Research Agenda for the Future 107 | (1)
108 | (3)

Part III Overview of the Section on the Questionnaire 111 | (84)
|
|
|
6 Experiments on the Design and Evaluation of Complex Survey Questions 113 | (18)
6.1 Question Construction: Dangling Qualifiers 115 | (2)
6.2 Overall Meanings of Questions Can Be Obscured by Detailed Words 117 | (2)
6.3 Are Two Questions Better than One? 119 | (2)
6.4 The Use of Multiple Questions to Simplify Response Judgments 121 | (1)
6.5 The Effect of Context or Framing on Answers 122 | (1)
123 | (1)
123 | (1)
6.6 Do Questionnaire Effects Vary Across Subgroups of Respondents? 124 | (2)
126 | (2)
128 | (3)
|
7 Impact of Response Scale Features on Survey Responses to Behavioral Questions 131 | (20)
131 | (1)
7.2 Previous Work on Scale Design Features 132 | (1)
132 | (1)
133 | (1)
7.2.3 Verbal Scale Labels 133 | (1)
134 | (2)
136 | (5)
141 | (2)
143 | (1)
143 | (1)
7.A.1 Experimental Questions (One Question Per Screen) 143 | (1)
7.A.2 Validation Questions (One Per Screen) 144 | (1)
7.A.3 GfK Profile Questions (Not Part of the Questionnaire) 145 | (1)
7.B Test of Interaction Effects 145 | (1)
146 | (5)
|
8 Mode Effects Versus Question Format Effects: An Experimental Investigation of Measurement Error Implemented in a Probability-Based Online Panel 151 | (16)
151 | (1)
8.1.1 Online Surveys: Advantages and Challenges 151 | (1)
8.1.2 Probability-Based Online Panels 152 | (1)
8.1.3 Mode Effects and Measurement 152 | (1)
8.2 Experiments and Probability-Based Online Panels 153 | (1)
8.2.1 Online Experiments in Probability-Based Panels: Advantages 153 | (1)
8.2.2 Examples of Experiments in the LISS Panel 154 | (1)
8.3 Mixed-Mode Question Format Experiments 154 | (1)
8.3.1 Theoretical Background 154 | (1)
156 | (1)
156 | (1)
8.3.2.2 Experimental Design 156 | (1)
157 | (1)
8.3.3 Analyses and Results 157 | (1)
8.3.3.1 Response and Weighting 157 | (1)
157 | (1)
158 | (3)
8.4 Summary and Discussion 161 | (1)
162 | (1)
162 | (5)
|
9 Conflicting Cues: Item Nonresponse and Experimental Mortality 167 | (14)
167 | (1)
9.2 Survey Experiments and Item Nonresponse 167 | (3)
9.3 Case Study: Conflicting Cues and Item Nonresponse 170 | (1)
170 | (1)
171 | (1)
9.6 Experimental Conditions and Measures 172 | (1)
173 | (1)
9.8 Addressing Item Nonresponse in Survey Experiments 174 | (4)
178 | (1)
179 | (2)
|
10 Application of a List Experiment at the Population Level: The Case of Opposition to Immigration in the Netherlands 181 | (16)
10.1 Fielding the Item Count Technique (ICT) 183 | (2)
10.2 Analyzing the Item Count Technique (ICT) 185 | (1)
10.3 An Application of ICT: Attitudes Toward Immigrants in the Netherlands 186 | (1)
10.3.1 Data - The Longitudinal Internet Studies for the Social Sciences Panel 186 | (1)
10.3.2 Implicit and Explicit Agreement 187 | (1)
10.3.3 Social Desirability Bias (SDB) 188 | (2)
190 | (2)
192 | (3)

Part IV Introduction to Section on Interviewers 195 | (50)
|
|
|
11 Race- and Ethnicity-of-Interviewer Effects 197 | (28)
197 | (1)
197 | (1)
11.1.2 Theoretical Explanations 198 | (1)
11.1.2.1 Survey Participation 198 | (1)
11.1.2.2 Survey Responses 199 | (1)
199 | (1)
11.1.3.1 Survey Participation 199 | (1)
11.1.3.2 Survey Responses in Face-to-Face Interviews 200 | (1)
11.1.3.3 Survey Responses in Telephone Interviews 202 | (1)
11.1.3.4 Survey Responses in Surveys that Are Not Interviewer-Administered 204 | (1)
11.1.3.5 Other Race/Ethnicity-of-Interviewer Research 204 | (1)
11.2 The Current Research 205 | (2)
11.3 Respondents and Procedures 207 | (1)
207 | (1)
11.4.1 Interviewer Race/Ethnicity 207 | (1)
11.4.2 Sample Frame Variables 208 | (1)
11.4.2.1 Disposition Codes 208 | (1)
11.4.2.2 Neighborhood-Level Variables 208 | (1)
11.4.3 Respondent Variables 208 | (1)
208 | (1)
11.4.3.2 Perceived Race/Ethnicity 209 | (1)
209 | (1)
209 | (1)
209 | (1)
11.4.3.6 Years of Education 209 | (1)
11.4.3.7 Household Income 209 | (1)
11.4.3.8 Political Ideology 209 | (1)
11.4.3.9 Affirmative Action 209 | (1)
11.4.3.10 Immigration Policy 210 | (1)
11.4.3.11 Attitudes Toward Obama 210 | (1)
210 | (1)
11.5.1 Sample Frame Analyses 210 | (1)
11.5.2 Survey Sample Analyses 211 | (1)
211 | (1)
211 | (1)
11.6.1.1 Survey Participation or Response 212 | (1)
212 | (1)
213 | (1)
11.6.2 Perceptions of Interviewer Race 214 | (1)
215 | (4)
11.7 Discussion and Conclusion 219 | (1)
220 | (1)
221 | (1)
221 | (4)
|
12 Investigating Interviewer Effects and Confounds in Survey-Based Experimentation 225 | (22)
12.1 Studying Interviewer Effects Using a Post hoc Experimental Design 226 | (1)
12.1.1 An Example of a Post hoc Investigation of Interviewer Effects 228 | (2)
12.2 Studying Interviewer Effects Using A priori Experimental Designs 230 | (1)
12.2.1 Assignment of Experimental Treatments to Interviewers 230 | (2)
12.3 An Original Experiment on the Effects of Interviewers Administering Only One Treatment vs. Interviewers Administering Multiple Treatments 232 | (1)
12.3.1 The Design of Our Survey-Based Experiment 232 | (1)
233 | (1)
12.3.3 Experimental Design Findings 234 | (5)
239 | (3)
242 | (3)

Part V Introduction to Section on Adaptive Design 245 | (46)
|
|
|
13 Using Experiments to Assess Interactive Feedback That Improves Response Quality in Web Surveys 247 | (28)
247 | (1)
13.1.1 Response Quality in Web Surveys 247 | (1)
13.1.2 Dynamic Interventions to Enhance Response Quality in Web Surveys 249 | (2)
13.2 Case Studies - Interactive Feedback in Web Surveys 251 | (7)
13.3 Methodological Issues in Experimental Visual Design Studies 258 | (1)
13.3.1 Between-Subjects Design 258 | (1)
13.3.2 Field Experiments vs. Lab Experiments 259 | (1)
13.3.3 Replication of Experiments 259 | (1)
13.3.4 Randomization in the Net Sample 260 | (1)
13.3.5 Independent Randomization 262 | (1)
13.3.6 Effects of Randomization on the Composition of the Experimental Conditions 262 | (1)
13.3.7 Differential Unit Nonresponse 263 | (1)
13.3.8 Nonresponse Bias and Generalizability 264 | (1)
13.3.9 Breakoff as an Indicator of Motivational Confounding 264 | (1)
13.3.10 Manipulation Check 265 | (1)
13.3.11 A Direction for Future Research: Causal Mechanisms 268 | (1)
269 | (6)
|
14 Randomized Experiments for Web-Mail Surveys Conducted Using Address-Based Samples of the General Population 275 | (18)
275 | (1)
14.1.1 Potential Advantages of Web-Mail Designs 275 | (1)
14.1.2 Design Considerations with Web-Mail Surveys 276 | (1)
14.1.3 Empirical Findings from General Population Web-Mail Studies 277 | (1)
14.2 Study Design and Methods 278 | (1)
14.2.1 Experimental Design 278 | (1)
14.2.2 Questionnaire Design 279 | (1)
280 | (1)
14.2.4 Statistical Methods 280 | (1)
281 | (4)
285 | (2)
287 | (4)

Part VI Introduction to Section on Special Surveys 291 | (36)
|
|
|
15 Mounting Multiple Experiments on Longitudinal Social Surveys: Design and Implementation Considerations 293 | (16)
15.1 Introduction and Overview 293 | (1)
15.2 Types of Experiments that Can Be Mounted in a Longitudinal Survey 294 | (1)
15.3 Longitudinal Experiments and Experiments in Longitudinal Surveys 295 | (1)
15.4 Longitudinal Surveys that Serve as Platforms for Experimentation 296 | (2)
15.5 The Understanding Society Innovation Panel 298 | (1)
15.6 Avoiding Confounding of Experiments 299 | (2)
15.7 Allocation Procedures 301 | (1)
15.7.1 Assignment Within or Between Households and Interviewers? 302 | (1)
15.7.2 Switching Treatments Between Waves 303 | (1)
304 | (1)
305 | (1)
15.A Appendix: Stata Syntax to Produce Table 15.3 Treatment Allocations 306 | (1)
306 | (3)
|
16 Obstacles and Opportunities for Experiments in Establishment Surveys Supporting Official Statistics 309 | (20)
309 | (1)
16.2 Some Key Differences Between Household and Establishment Surveys 310 | (1)
310 | (1)
16.2.2 Statistical Products 310 | (1)
16.2.3 Highly Skewed or Small Specialized Target Populations 310 | (1)
16.2.4 Sample Designs and Response Burden 311 | (1)
16.2.5 Question Complexity and Availability of Data in Records 311 | (1)
16.2.6 Labor-Intensive Response Process 311 | (1)
16.2.7 Mandatory Reporting 312 | (1)
16.2.8 Availability of Administrative and Auxiliary Data 312 | (1)
16.3 Existing Literature Featuring Establishment Survey Experiments 312 | (1)
16.3.1 Target Populations 312 | (1)
313 | (1)
16.3.3 Complexity and Context 313 | (1)
16.4 Key Considerations for Experimentation in Establishment Surveys 314 | (1)
16.4.1 Embed the Experiment Within Production 314 | (1)
16.4.2 Conduct Standalone Experiments 315 | (1)
16.4.3 Use Nonsample Cases 315 | (1)
16.4.4 Exclude Large Establishments 315 | (1)
16.4.5 Employ Risk Mitigation Strategies and Alternatives 316 | (2)
16.5 Examples of Experimentation in Establishment Surveys 318 | (1)
16.5.1 Example 1: Adaptive Design in the Agricultural Resource Management Survey 318 | (1)
16.5.1.1 Experimental Strategy: Embed the Experiment Within the Production Survey 318 | (1)
16.5.1.2 Experimental Strategy: Exclude Large Units from Experimental Design 319 | (1)
16.5.2 Example 2: Questionnaire Testing for the Census of Agriculture 319 | (1)
16.5.2.1 Experimental Strategy: Conduct a Standalone Experiment 319 | (1)
16.5.2.2 Experimental Strategy: Exclude Large Establishments 320 | (1)
16.5.3 Example 3: Testing Contact Strategies for the Economic Census 320 | (1)
16.5.3.1 Experimental Strategy: Embed Experiments in Production Surveys 321 | (1)
16.5.4 Example 4: Testing Alternative Question Styles for the Economic Census 321 | (1)
16.5.4.1 Experimental Strategy: Use Nonproduction Cases Alongside a Production Survey 321 | (1)
16.5.5 Example 5: Testing Adaptive Design Alternatives in the Annual Survey of Manufactures 322 | (1)
16.5.5.1 Experimental Strategy: Exclude Large Units 322 | (1)
16.5.5.2 Experimental Strategy: Embed the Experiment Within the Production Survey 322 | (1)
16.6 Discussion and Concluding Remarks 323 | (1)
324 | (1)
324 | (3)

Part VII Introduction to Section on Trend Data 327 | (42)
|
|
|
17 Tracking Question-Wording Experiments Across Time in the General Social Survey, 1984-2014 329 | (14)
329 | (1)
17.2 GSS Question-Wording Experiment on Spending Priorities 330 | (1)
17.3 Experimental Analysis 330 | (8)
17.4 Summary and Conclusion 338 | (1)
17.A National Spending Priority Items 339 | (1)
340 | (3)
|
18 Survey Experiments and Changes in Question Wording in Repeated Cross-Sectional Surveys 343 | (28)
343 | (1)
344 | (1)
18.2.1 Repeated Cross-Sectional Surveys 344 | (1)
18.2.2 Reasons to Change Question Wording in Repeated Cross-Sectional Surveys 345 | (1)
18.2.2.1 Methodological Advances 345 | (1)
18.2.2.2 Changes in Language and Meaning 346 | (1)
18.2.3 Current Practices and Expert Insight 346 | (1)
347 | (1)
347 | (1)
18.3.1.1 Description of Question Wording Experiment 347 | (1)
348 | (1)
349 | (1)
352 | (1)
352 | (1)
18.3.1.6 Implications for Trend Analysis 355 | (1)
18.3.2 General Social Survey (GSS) 356 | (1)
18.3.2.1 Purpose of Experiment 356 | (1)
356 | (1)
356 | (1)
357 | (1)
360 | (2)
18.4 Implications and Conclusions 362 | (1)
18.4.1 Limitations and Suggestions for Future Research 363 | (1)
364 | (1)
364 | (5)

Part VIII Vignette Experiments in Surveys 369 | (48)
|
|
|
19 Are Factorial Survey Experiments Prone to Survey Mode Effects? 371 | (22)
371 | (1)
19.2 Idea and Scope of Factorial Survey Experiments 372 | (1)
373 | (1)
19.3.1 Typical Respondent Samples and Modes in Factorial Survey Experiments 373 | (1)
19.3.2 Design Features and Mode Effects 374 | (1)
19.3.2.1 Selection of Dimensions and Levels 374 | (1)
19.3.2.2 Generation of Questionnaire Versions and Allocation to Respondents 375 | (1)
19.3.2.3 Number of Vignettes per Respondent and Response Scales 376 | (1)
19.3.2.4 Interviewer Effects and Social Desirability Bias 377 | (1)
19.3.3 Summing Up: Mode Effects 378 | (1)
378 | (1)
378 | (1)
378 | (1)
19.4.1.2 Analysis Techniques 380 | (1)
19.4.2 Mode Effects Regarding Nonresponse and Measurement Errors 380 | (1)
380 | (1)
19.4.2.2 Response Quality 382 | (3)
19.4.3 Do Data Collection Mode Effects Impact Substantive Results? 385 | (1)
19.4.3.1 Point Estimates and Significance Levels 385 | (1)
19.4.3.2 Measurement Efficiency and Sources of Unexplained Variance 387 | (1)
388 | (2)
390 | (3)
|
20 Validity Aspects of Vignette Experiments: Expected "What-If" Differences Between Reports of Behavioral Intentions and Actual Behavior 393 | (26)
20.1 Outline of the Problem 393 | (1)
393 | (1)
20.1.2 Vignettes, the Total Survey Error Framework, and Cognitive Response Process Perspectives 394 | (1)
395 | (1)
20.1.4 Research Strategy for Evaluating the External Validity of Vignette Experiments 397 | (2)
20.2 Research Findings from Our Experimental Work 399 | (1)
20.2.1 Experimental Design for Both Parts of the Study 399 | (1)
20.2.2 Data Collection for Both Parts of the Study 400 | (1)
20.2.2.1 Field Experiment 400 | (1)
20.2.2.2 Vignette Experiment 402 | (3)
20.2.3 Results for Each Part of the Study 405 | (1)
20.2.3.1 Field Experiment 405 | (1)
20.2.3.2 Vignette Experiment 405 | (1)
20.2.4 Systematic Comparison of the Two Experiments 406 | (1)
20.2.4.1 Assumptions and Precautions About the Comparability of Settings, Treatments, and Outcomes 406 | (1)
20.2.4.2 Assumptions and Precautions About the Comparability of Units 407 | (1)
408 | (3)
411 | (2)
413 | (4)

Part IX Introduction to Section on Analysis 417 | (84)
|
|
|
21 Identities and Intersectionality: A Case for Purposive Sampling in Survey-Experimental Research 419 | (16)
419 | (1)
21.2 Common Techniques for Survey Experiments on Identity 420 | (1)
422 | (1)
21.2.2 The Question of Representativeness: An Answer in Replication 423 | (1)
21.2.3 Response Biases and Determination of Eligibility 425 | (1)
21.3 How Limited Are Representative Samples for Intersectionality Research? 426 | (1)
426 | (1)
21.3.2 Example from a Community Association-Based LGBTQ Sample 427 | (1)
21.3.3 Example from a Hispanic Women Sample in Tucson, AZ 430 | (1)
21.4 Conclusions and Discussion 430 | (1)
431 | (1)
431 | (4)
|
22 Designing Probability Samples to Study Treatment Effect Heterogeneity 435 | (22)
435 | (1)
22.1.1 Probability Samples Facilitate Estimation of Average Treatment Effects 437 | (1)
22.1.2 Treatment Effect Heterogeneity in Experiments 438 | (1)
22.1.3 Estimation of Subgroup Treatment Effects 440 | (1)
22.1.3.1 Problems with Sample Selection Bias 440 | (1)
22.1.3.2 Problems with Small Sample Sizes 440 | (1)
22.1.4 Moderator Analyses for Understanding Causal Mechanisms 441 | (1)
22.1.4.1 Problems of Confounder Bias in Probability Samples 442 | (1)
22.1.4.2 Additional Confounder Bias Problems with Nonprobability Samples 442 | (1)
22.1.4.3 Problems with Statistical Power 443 | (1)
22.1.5 Stratification for Studying Heterogeneity 444 | (1)
444 | (1)
22.1.5.2 Operationalizing Moderators 445 | (1)
22.1.5.3 Orthogonalizing Moderators 445 | (1)
22.1.5.4 Stratum Creation and Allocation 445 | (1)
22.2 Nesting a Randomized Treatment in a National Probability Sample: The NSLM 446 | (1)
22.2.1 Design of the NSLM 446 | (1)
22.2.1.1 Population Frame 446 | (1)
22.2.1.2 Two-Stage Design 447 | (1)
447 | (1)
22.2.1.4 Experimental Design 447 | (1)
447 | (1)
447 | (1)
22.2.2 Developing a Theory of Treatment Effect Heterogeneity 447 | (1)
447 | (1)
22.2.2.2 Potential Moderators 447 | (1)
22.2.2.3 Operationalizing School Achievement Level 448 | (1)
22.2.2.4 Orthogonalizing for Minority Composition 449 | (1)
22.2.3 Developing the Final Design 449 | (1)
22.2.3.1 Stratum Creation 449 | (1)
22.2.3.2 Stratum Allocation 449 | (1)
22.2.3.3 Implementing Sample Selection 451 | (1)
451 | (1)
22.3 Discussion and Conclusions 451 | (1)
452 | (1)
22.3.2 Contextual Effects Matter 452 | (1)
22.3.3 Plan for Moderators 452 | (1)
452 | (1)
453 | (1)
453 | (1)
453 | (4)
|
23 Design-Based Analysis of Experiments Embedded in Probability Samples 457 | (24)
457 | (1)
23.2 Design of Embedded Experiments 458 | (2)
23.3 Design-Based Inference for Embedded Experiments with One Treatment Factor 460 | (1)
23.3.1 Measurement Error Model and Hypothesis Testing 460 | (1)
23.3.2 Point and Variance Estimation 462 | (1)
464 | (1)
464 | (1)
23.3.5 Hypotheses About Ratios and Totals 465 | (1)
23.4 Analysis of Experiments with Clusters of Sampling Units as Experimental Units 466 | (2)
468 | (1)
23.5.1 Designing Embedded K × L Factorial Designs 468 | (1)
23.5.2 Testing Hypotheses About Main Effects and Interactions in K × L Embedded Factorial Designs 469 | (1)
471 | (1)
471 | (1)
23.6 A Mixed-Mode Experiment in the Dutch Crime Victimization Survey 472 | (1)
472 | (1)
23.6.2 Survey Design and Experimental Design 472 | (1)
474 | (1)
474 | (3)
477 | (1)
478 | (1)
478 | (3)
|
24 Extending the Within-Persons Experimental Design: The Multitrait-Multierror (MTME) Approach 481 | (20)
481 | (1)
24.2 The Multitrait-Multierror (MTME) Framework 482 | (1)
24.2.1 A Simple but Defective Design: The Interview-Reinterview from the MTME Perspective 483 | (1)
24.2.2 Designs Estimating Stochastic Survey Errors 485 | (2)
24.3 Designing the MTME Experiment 487 | (1)
24.3.1 What are the main types of measurement errors that should be estimated? 487 | (1)
24.3.2 How can the questions be manipulated in order to estimate these types of error? 487 | (1)
24.3.3 Is it possible to manipulate the form pair order? 488 | (1)
24.3.4 Is there enough power to estimate the model? 488 | (1)
24.3.5 How can data collection minimize memory effects? 489 | (1)
24.4 Statistical Estimation for the MTME Approach 489 | (2)
24.5 Measurement Error in Attitudes Toward Migrants in the UK 491 | (1)
24.5.1 Estimating Four Stochastic Error Variances Using MTME 491 | (1)
24.5.1.1 What are the main types of measurement errors that should be estimated? 491 | (1)
24.5.1.2 How can the questions be manipulated in order to estimate these types of errors? 492 | (1)
24.5.1.3 Is it possible to manipulate the form pair order? 492 | (1)
24.5.1.4 Is there enough power to estimate the model? 492 | (1)
24.5.1.5 How can data collection minimize memory effects? 493 | (1)
24.5.2 Estimating MTME with Four Stochastic Errors 493 | (1)
494 | (3)
24.7 Conclusions and Future Research Directions 497 | (1)
498 | (1)
498 | (3)

Index 501