Acknowledgments    xiii
About the Author    xv
Preface    xvii
Prologue    xxi
|
1 Health Program Evaluation: Is It Worth It?    1 (16)
    Growth of Health Program Evaluation    4 (4)
    Types of Health Program Evaluation    8 (7)
        Evaluation of Health Programs    8 (1)
        Evaluation of Health Systems    9 (6)
    15 (1)
    15 (1)
    15 (2)
|
2 The Evaluation Process as a Three-Act Play    17 (28)
    Evaluation as a Three-Act Play    19 (14)
        Act I Asking the Questions    19 (8)
        Act II Answering the Questions    27 (4)
        Act III Using the Answers in Decision Making    31 (2)
    33 (4)
    Evaluation in a Cultural Context    37 (3)
    40 (1)
    41 (3)
    44 (1)
    44 (1)
    44 (1)
|
ACT I ASKING THE QUESTIONS    45 (34)
|
3 Developing Evaluation Questions    47 (32)
    Step 1 Specify Program Theory    48 (18)
        Conceptual Models: Theory of Cause and Effect    51 (4)
        Conceptual Models: Theory of Implementation    55 (11)
    Step 2 Specify Program Objectives    66 (3)
    Step 3 Translate Program Theory and Objectives Into Evaluation Questions    69 (4)
    Step 4 Select Key Questions    73 (4)
        73 (1)
        73 (1)
        73 (1)
        74 (1)
        75 (1)
        75 (1)
        76 (1)
    Evaluation Theory and Practice    76 (1)
    77 (1)
    78 (1)
    78 (1)
    78 (1)
|
ACT II ANSWERING THE QUESTIONS    79 (98)
    Scene 1 Developing the Evaluation Design to Answer the Questions    79 (2)
|
4 Evaluation of Program Impacts    81 (54)
    Quasi-Experimental Study Designs    85 (24)
        One-Group Posttest-Only Design    85 (1)
        One-Group Pretest-Posttest Design    86 (2)
        Posttest-Only Comparison Group Design    88 (2)
        Recurrent Institutional Cycle ("Patched-Up") Design    90 (2)
        Pretest-Posttest Nonequivalent Comparison Group Design    92 (2)
        Single Time-Series Design    94 (3)
        Repeated Treatment Design    97 (1)
        Multiple Time-Series Design    98 (2)
        Regression Discontinuity Design    100 (5)
        Summary: Quasi-Experimental Study Designs and Internal Validity    105 (4)
    Counterfactuals and Experimental Study Designs    109 (10)
        Counterfactuals and Causal Inference    109 (4)
        Pretest-Posttest Control Group Design    113 (1)
        Posttest-Only Control Group Design    114 (1)
        Solomon Four-Group Design    115 (1)
        Randomized Study Designs for Population-Based Interventions    116 (1)
        117 (1)
        118 (1)
    Statistical Threats to Validity    119 (3)
    Generalizability of Impact Evaluation Results    122 (8)
        122 (5)
        127 (2)
        129 (1)
    Evaluation of Impact Designs and Meta-Analysis    130 (2)
    132 (1)
    133 (1)
    134 (1)
|
5 Cost-Effectiveness Analysis    135 (20)
    Cost-Effectiveness Analysis: An Aid to Decision Making    136 (2)
    Comparing Program Costs and Effects: The Cost-Effectiveness Ratio    138 (2)
    Types of Cost-Effectiveness Analysis    140 (4)
        Cost-Effectiveness Studies of Health Programs    140 (2)
        Cost-Effectiveness Evaluations of Health Services    142 (2)
    Steps in Conducting a Cost-Effectiveness Analysis    144 (7)
        Steps 1-4 Organizing the CEA    144 (2)
        Step 5 Identifying, Measuring, and Valuing Costs    146 (1)
        Step 6 Identifying and Measuring Effectiveness    147 (1)
        Step 7 Discounting Future Costs and Effectiveness    148 (2)
        Step 8 Conducting a Sensitivity Analysis    150 (1)
        Step 9 Addressing Equity Issues    151 (1)
        Steps 10 and 11 Using CEA Results in Decision Making    151 (1)
    Evaluation of Program Effects and Costs    151 (1)
    152 (1)
    153 (1)
    153 (2)
|
6 Evaluation of Program Implementation    155 (22)
    Types of Evaluation Designs for Answering Implementation Questions    157 (8)
        Quantitative and Qualitative Methods    158 (2)
        Multiple and Mixed Methods Designs    160 (5)
    Types of Implementation Questions and Designs for Answering Them    165 (11)
        Monitoring Program Implementation    165 (2)
        Explaining Program Outcomes    167 (9)
    176 (1)
    176 (1)
    176 (1)
|
ACT II ANSWERING THE QUESTIONS    177 (88)
    Scenes 2 and 3 Developing the Methods to Carry Out the Design and Conducting the Evaluation    177 (6)
|
7 Population and Sampling    183 (30)
    Step 1 Identify the Target Populations of the Evaluation    184 (1)
    Step 2 Identify the Eligible Members of Each Target Population    185 (4)
    Step 3 Decide Whether Probability or Nonprobability Sampling Is Necessary    189 (2)
    Step 4 Choose a Nonprobability Sampling Design for Answering an Evaluation Question    191 (2)
    Step 5 Choose a Probability Sampling Design for Answering an Evaluation Question    193 (5)
        Simple and Systematic Random Sampling    193 (1)
        Proportionate Stratified Sampling    194 (1)
        Disproportionate Stratified Sampling    195 (1)
        Post-Stratification Sampling    196 (1)
        197 (1)
    Step 6 Determine Minimum Sample Size Requirements    198 (12)
        Sample Size in Qualitative Evaluations    198 (1)
        Types of Sample Size Calculations    199 (1)
        Sample Size Calculations for Descriptive Questions    200 (2)
        Sample Size Calculations for Comparative Questions    202 (8)
    210 (1)
    210 (1)
    211 (1)
    211 (2)
|
8 Measurement and Data Collection    213 (34)
    Measurement and Data Collection in Quantitative Evaluations    215 (21)
        The Basics of Measurement and Classification    215 (2)
        Step 1 Decide What Concepts to Measure    217 (1)
        Step 2 Identify Measures of the Concepts    218 (2)
        Step 3 Assess the Reliability, Validity, and Responsiveness of the Measures    220 (9)
        Step 4 Identify and Assess the Data Source of Each Measure    229 (5)
        Step 5 Choose the Measures    234 (1)
        Step 6 Organize the Measures for Data Collection and Analysis    234 (1)
        Step 7 Collect the Measures    235 (1)
    Data Collection in Qualitative Evaluations    236 (3)
    Reliability and Validity in Qualitative Evaluations    239 (2)
    Management of Data Collection    241 (1)
    242 (1)
    242 (2)
    244 (1)
    245 (2)
|
|
9 Data Analysis    247 (18)
    Getting Started: What's the Question?    248 (2)
    Qualitative Data Analysis    250 (2)
    Quantitative Data Analysis    252 (11)
        What Are the Variables for Answering Each Question?    252 (2)
        How Should the Variables Be Analyzed?    254 (9)
    263 (1)
    263 (1)
    264 (1)
|
ACT III USING THE ANSWERS IN DECISION MAKING    265 (54)
|
10 Disseminating the Answers to Evaluation Questions    267 (22)
    Scene 1 Translating Evaluation Answers Back Into Policy Language    268 (5)
        268 (1)
        269 (1)
        Developing Recommendations    270 (3)
    Scene 2 Developing a Dissemination Plan for Evaluation Answers    273 (9)
        Target Audience and Type of Information    275 (1)
        275 (5)
        Timing of the Information    280 (1)
        281 (1)
    Scene 3 Using the Answers in Decision Making and the Policy Cycle    282 (1)
        How Answers Are Used by Decision Makers    282 (5)
        Increasing the Use of Answers in the Evaluation Process    284 (2)
    286 (1)
    287 (1)
    287 (1)
    287 (2)
289 (30)
291 (4)
295 (24)
Index    319