Preface xvii
Acknowledgments xxi
|
1 Introduction and Overview 3
1.3 Some Programs Are Ineffective or Harmful 5
  Critical Incident Stress Debriefing 5
1.4 Historical Overview of Program Evaluation 7
1.5 Evidence-Informed Practice 8
1.6 Philosophical Issues: What Makes Some Types of Evidence Better Than Other Types? 9
1.7 Qualitative versus Quantitative Evaluations: A False Dichotomy 12
1.9 Different Evaluation Purposes 14
  Performance Measurement Systems 18
  Evaluating One's Own Practice 19
|
2 Ethical and Cultural Issues in Program Evaluation 23
2.3 Institutional Review Boards (IRBs) 27
2.4 Culturally Sensitive Program Evaluation 32
2.4.4 Analyzing and Interpreting Evaluation Findings 36
2.4.5 Measurement Equivalence 36
2.5 Developing Cultural Competence 38
2.5.1 Acculturation and Immigration 39
2.5.2 Subgroup Differences 39
2.5.3 Culturally Sensitive Data Analysis and Interpretation 40
|
PART II QUANTITATIVE AND QUALITATIVE METHODS FOR FORMATIVE AND PROCESS EVALUATIONS 43

3.2 Defining Needs: Normative Need versus Felt Need 49
3.3 Felt Need versus Service Utilization 51
3.4 Needs Assessment Approaches 51
3.4.2 Rates under Treatment 53
  How to Conduct a Focus Group 56
  Types and Sequence of Focus Group Questions 57
|
4 Survey Methods for Program Planning and Monitoring 63
4.2 Samples, Populations, and Representativeness 64
  Maximizing Response Rates 68
4.3 Recruiting Hard-to-Reach Populations 70
  Tactics for Reaching and Recruiting Millennials 71
4.7 Client Satisfaction Surveys 76
4.8 Survey Questionnaire Construction 78
4.8.1 Guidelines for Item Wording 78
4.8.2 Guidelines for Questionnaire Format 80
4.9 Online Survey Questionnaire Preparation 81
|
PART III EVALUATING OUTCOME IN SERVICE-ORIENTED AGENCIES 85

5 Selecting and Measuring Outcome Objectives 87
5.2 Mission and Vision Statements 88
5.6 How to Write Good Program Outcome Objectives 93
5.7 Operationally Defining Objectives 94
5.8 How to Find and Select the Best Self-Report Outcome Measures 97
5.9 Criteria for Selecting a Self-Report Outcome Measure 98
|
6 Inference and Logic in Pragmatic Outcome Evaluation: Don't Let the Perfect Become the Enemy of the Good 109
6.2 Causality Criteria Revisited 111
6.2.3 Ruling Out Alternative Explanations 112
6.3 Implications of Evidence-Informed Practice and Critical Thinking 113
6.5 A Successful Evaluator Is a Pragmatic Evaluator 116
6.6 Degree of Certainty Needed 119
|
7 Feasible Outcome Evaluation Designs 122
7.2 Descriptive Outcome Evaluations 123
7.3 One-Group Pretest-Posttest Designs 124
7.4.1 Between-Group Effect Sizes 125
7.4.2 Within-Group Effect Sizes 126
7.5 Non-equivalent Comparison Groups Designs 128
7.7 Switching Replication Design 130
7.8 Switching Replication Design Compared with Waitlist Quasi-experimental Design 131
7.10 Choosing the Most Appropriate Design 133
|
8 Single-Case Designs for Evaluating Programs and Practice 137
8.2 What Is a Single Case? 138
8.3 Overview of Single-Case Design Logic for Making Causal Inferences 139
8.4 What to Measure and by Whom 141
8.4.1 Obtrusive and Unobtrusive Observation 142
8.4.2 Quantification Options 143
8.6 Alternative Single-Case Designs 144
8.6.3 The Multiple-Baseline Design 146
8.6.4 Multiple-Component Designs 147
8.7 B Designs to Evaluate the Implementation of Evidence-Supported Interventions 148
8.8 Using Single-Case Designs as Part of the Evidence-Informed Practice Process 149
8.9 Aggregating Single-Case Design Outcomes to Evaluate an Agency 149
|
9 Practical and Political Pitfalls in Outcome Evaluations 153
9.2.1 Intervention Fidelity 155
9.2.2 Contamination of the Case Assignment Protocol 156
9.2.3 Recruiting Participants 157
9.2.4 Retaining Participants 158
9.3 Engage Agency Staff Meaningfully in Planning the Evaluation 160
9.4 Fostering Staff Compliance with the Evaluation Protocol Goes On and On 160
9.5.1 In-House versus External Evaluators 164
|
PART IV ANALYZING AND PRESENTING DATA 173

10 Analyzing and Presenting Data from Formative and Process Evaluations 175
10.2 Quantitative and Qualitative Data Analyses: Distinctions and Compatibility 176
10.3 Descriptive Statistics 177
10.3.1 Frequency Distribution Tables and Charts 178
10.3.4 The Influence of Outliers 181
10.4 Analyzing Qualitative Data 182
|
11 Analyzing Data from Outcome Evaluations 186
11.2 Inferential Statistics 187
11.2.1 P Values and Significance Levels 189
11.3 Mistakes to Avoid When Interpreting Inferential Statistics 191
11.3.1 Overreliance on Statistical Significance 191
11.3.2 Disregarding Sample Size (Statistical Power) 191
11.3.3 Disregarding Effect Sizes 193
11.4 Calculating and Interpreting Effect Sizes 194
11.4.1 Within-Group Effect Sizes 195
11.4.2 Between-Group Effect Sizes 198
11.4.3 Why Divide by the Standard Deviation? 198
11.4.5 Odds Ratios and Risk Ratios 200
11.5 Overlooking Substantive (Practical) Significance 201
11.6 Cost-Effectiveness and Cost-Benefit Analyses: Evaluating Efficiency 202
11.7 Qualitative Data Analysis 205
|
12 Writing and Disseminating Evaluation Reports 210
12.2 Tailor to Your Audience 211
12.3 Writing Style and Format 212
12.4 Involve Key Stakeholders 212
12.6.2 Introduction and Literature Review 216
12.6.4 Results (Findings) 217
  Discussing Negative Findings 221
  What If Parts of the Evaluation Could Not Be Completed? 221
12.7 Summary of Mistakes to Avoid 223
Epilogue: More Tips for Becoming a Successful Evaluator 228
  Planning the Evaluation 228
  Levels of Stakeholder Participation 229
  Obtain Feedback on a Written Draft of the Evaluation Protocol 229
  During Implementation of the Evaluation 229
  At the Conclusion of the Evaluation 230
  People Skills 231
  Show Genuine Interest in Others 231
  Try to Be Humorous 231
  Be Self-Assured 232
  Show Genuine Empathy 232
  Active Listening 232

References 234
Index 239