
E-book: Program Evaluation: Pragmatic Methods for Social Work and Human Service Agencies

Allen Rubin (University of Houston)
  • Format: PDF+DRM
  • Publication date: 23-Jul-2020
  • Publisher: Cambridge University Press
  • Language: eng
  • ISBN-13: 9781108872546
  • Price: 51,86 €*
  • * The price is final, i.e. no further discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you need to install special software to read it. You also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions. (This is a free application designed specifically for reading e-books. It should not be confused with Adobe Reader, which is probably already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

This textbook provides students and instructors with a pragmatic introduction to program evaluation that emphasizes the activities students are likely to conduct in their future roles in service-oriented agencies.

Be prepared for your future role in a service-oriented agency. This textbook provides practical guidance on program evaluation without replicating other course material. Drawing on over 40 years of subject knowledge, Allen Rubin describes outcome designs that are feasible for service-oriented agencies and that match the degree of certainty needed by key users of outcome evaluations. The utility and easy calculation of within-group effect sizes are outlined, which enhance the value of evaluations that lack control groups. Instructions are also given on how to write and disseminate an evaluation report in a way that maximizes its chances of being used. Conducting focus group interviews and capitalising on the value of non-probability samples will become second nature after following the effective and pragmatic advice mapped out chapter-by-chapter.
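To illustrate why the blurb calls within-group effect sizes "easy calculation": a minimal sketch of one common formulation, the mean pre-to-post change divided by the standard deviation of the pretest scores. The exact formula the book teaches may differ, and the scores below are hypothetical, not drawn from the book.

```python
# Within-group (one-group pretest-posttest) effect size sketch:
# (mean posttest - mean pretest) / SD of pretest scores.
from statistics import mean, stdev

def within_group_effect_size(pre, post):
    """Mean change from pretest to posttest, standardized by the
    pretest standard deviation (sample SD)."""
    return (mean(post) - mean(pre)) / stdev(pre)

pre = [10, 12, 14, 16, 18]   # hypothetical pretest symptom scores
post = [7, 9, 10, 13, 14]    # hypothetical posttest symptom scores
print(round(within_group_effect_size(pre, post), 2))  # -1.08
```

With symptom scores, a negative value indicates improvement (scores fell from pretest to posttest); no control group is needed, which is exactly the appeal for service-oriented agencies.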

Reviews

'This excellent book on program evaluation is masterfully tailored for students studying social work and social workers themselves. It is comprehensive, clear, and unique to this field. It should be essential reading by members of our profession.' Haluk Soydan, retired Professor of Social Work, University of Southern California, and co-founder of the International Campbell Collaboration

'I expect this text will become a gold standard resource for program evaluation in the field. Allen Rubin strikes the right balance between rigorous methods and feasible approaches. Drawing on years of experience, he demystifies program evaluation with practical examples and user-friendly language.' Jennifer L. Bellamy, Professor of Social Work and Associate Dean for Research and Faculty Development, University of Denver

'Allen Rubin has done it again! His past works have been seminal for over 40 years. Now, he consolidates his experience into this marvellous, practical volume. As the title states, this book is truly pragmatic.' Kevin Corcoran, Professor of Social Work, University of Alabama

'Bravo! Allen Rubin explains complicated concepts in ways that make them appear simpler and even enjoyable. This book, long needed for social work education, simplifies the process of practice evaluation from conceptualization of desired outcomes through dissemination of findings, without skimping on explanations of this critical content.' Joanne Yaffe, Professor of Social Work and Adjunct Professor of Psychiatry, University of Utah

'Finally, a thorough program evaluation text that covers everything! This book focuses on the practical application of program evaluation in real settings. It is well-organized and chock-full of valuable examples and tips that will help evaluators and human services professionals increase the feasibility and utility of program evaluation.' Danielle E. Parrish, Associate Professor of Social Work, Baylor University

'Program evaluation provides a foundation for evidence-informed policy and practice in the social services. Human service professionals are key to developing that evidence. This book provides students and practitioners with the knowledge, values, and skills needed for such evaluations.' Edward J. Mullen, Willma and Albert Musher Professor Emeritus, Columbia University

Additional information

This textbook balances methodological rigor with the practicalities of becoming a successful evaluator in service-oriented settings.
List of Figures xv
List of Tables xvi
Preface xvii
Acknowledgments xxi
PART I INTRODUCTION 1
1 Introduction and Overview 3
1.1 Introduction 4
1.2 Why Evaluate? 4
1.3 Some Programs are Ineffective or Harmful 5
Critical Incident Stress Debriefing 5
Scared Straight Programs 6
1.4 Historical Overview of Program Evaluation 7
1.5 Evidence-Informed Practice 8
1.6 Philosophical Issues: What Makes Some Types of Evidence Better Than Other Types? 9
Contemporary Positivism 9
Interpretivism 10
Empowerment 10
Constructivism 10
1.7 Qualitative versus Quantitative Evaluations: A False Dichotomy 12
1.8 Definitions 13
1.9 Different Evaluation Purposes 14
1.10 Types of Evaluation 15
Summative Evaluation 15
Formative Evaluation 16
Process Evaluation 16
Performance Measurement Systems 18
Evaluating One's Own Practice 19
Accreditation 19
1.11 Chapter Main Points 20
1.12 Exercises 22
1.13 Additional Reading 22
2 Ethical and Cultural Issues in Program Evaluation 23
2.1 Introduction 24
2.2 Ethical Issues 24
2.3 Institutional Review Boards (IRBs) 27
2.4 Culturally Sensitive Program Evaluation 32
2.4.1 Recruitment 32
2.4.2 Retention 34
2.4.3 Data Collection 35
2.4.4 Analyzing and Interpreting Evaluation Findings 36
2.4.5 Measurement Equivalence 36
2.5 Developing Cultural Competence 38
2.5.1 Acculturation and Immigration 39
2.5.2 Subgroup Differences 39
2.5.3 Culturally Sensitive Data Analysis and Interpretation 40
2.6 Chapter Main Points 40
2.7 Exercises 41
2.8 Additional Reading 42
PART II QUANTITATIVE AND QUALITATIVE METHODS FOR FORMATIVE AND PROCESS EVALUATIONS 43
3 Needs Assessment 45
3.1 Introduction 46
3.2 Defining Needs: Normative Need versus Felt Need 49
3.3 Felt Need versus Service Utilization 51
3.4 Needs Assessment Approaches 51
3.4.1 Social Indicators 52
Advantages/Disadvantages 52
3.4.2 Rates under Treatment 53
Advantages/Disadvantages 53
3.4.3 Key Informants 53
Advantages/Disadvantages 54
3.4.4 Community Forums 54
Advantages/Disadvantages 55
3.4.5 Focus Groups 55
How to Conduct a Focus Group 56
Types and Sequence of Focus Group Questions 57
Advantages/Disadvantages 59
3.4.6 Community Surveys 59
Advantages/Disadvantages 59
3.5 Triangulation 60
3.6 Chapter Main Points 61
3.7 Exercises 62
3.8 Additional Reading 62
4 Survey Methods for Program Planning and Monitoring 63
4.1 Introduction 64
4.2 Samples, Populations, and Representativeness 64
Probability Sampling 65
Non-probability Samples 65
4.2.1 Non-response Bias 65
4.2.2 Sample Size 67
Maximizing Response Rates 68
Follow-ups 69
4.3 Recruiting Hard-to-Reach Populations 70
Tactics for Reaching and Recruiting Millennials 71
4.4 Survey Modalities 71
4.5 Interviews 71
Be Prepared 72
Professional Demeanor 72
Be Punctual 73
Starting the Interview 73
Note Taking 73
Use Neutral Probes 73
4.6 Interview Guides 74
4.7 Client Satisfaction Surveys 76
Limitations 77
4.8 Survey Questionnaire Construction 78
4.8.1 Guidelines for Item Wording 78
4.8.2 Guidelines for Questionnaire Format 80
4.9 Online Survey Questionnaire Preparation 81
4.10 Chapter Main Points 81
4.11 Exercises 82
4.12 Additional Reading 83
PART III EVALUATING OUTCOME IN SERVICE-ORIENTED AGENCIES 85
5 Selecting and Measuring Outcome Objectives 87
5.1 Introduction 88
5.2 Mission and Vision Statements 88
5.3 Logic Models 89
5.4 Stakeholder Goals 91
5.5 Triangulation 92
5.6 How to Write Good Program Outcome Objectives 93
5.7 Operationally Defining Objectives 94
5.7.1 Direct Observation 95
5.7.2 Self-Report 95
5.7.3 Available Records 96
5.8 How to Find and Select the Best Self-Report Outcome Measures 97
5.9 Criteria for Selecting a Self-Report Outcome Measure 98
5.9.1 Relevance 99
5.9.2 Feasibility 99
5.9.3 Reliability 99
5.9.4 Validity 100
5.9.5 Sensitivity 101
5.10 Chapter Main Points 105
5.11 Exercises 107
5.12 Additional Reading 108
6 Inference and Logic in Pragmatic Outcome Evaluation: Don't Let the Perfect Become the Enemy of the Good 109
6.1 Introduction 110
6.2 Causality Criteria Revisited 111
6.2.1 Correlation 111
6.2.2 Time Sequence 112
6.2.3 Ruling Out Alternative Explanations 112
6.3 Implications of Evidence-Informed Practice and Critical Thinking 113
6.4 A Caveat 114
6.5 A Successful Evaluator Is a Pragmatic Evaluator 116
6.6 Degree of Certainty Needed 119
6.7 Chapter Main Points 119
6.8 Exercises 120
6.9 Additional Reading 121
7 Feasible Outcome Evaluation Designs 122
7.1 Introduction 123
7.2 Descriptive Outcome Evaluations 123
7.3 One-Group Pretest-Posttest Designs 124
7.4 Effect Sizes 125
7.4.1 Between-Group Effect Sizes 125
7.4.2 Within-Group Effect Sizes 126
7.5 Non-equivalent Comparison Groups Designs 128
7.6 Selectivity Biases 129
7.7 Switching Replication Design 130
7.8 Switching Replication Design Compared with Waitlist Quasi-experimental Design 131
7.9 Time-Series Designs 132
7.10 Choosing the Most Appropriate Design 133
7.11 Chapter Main Points 134
7.12 Exercises 135
7.13 Additional Reading 136
8 Single-Case Designs for Evaluating Programs and Practice 137
8.1 Introduction 138
8.2 What Is a Single Case? 138
8.3 Overview of Single-Case Design Logic for Making Causal Inferences 139
Clinical Significance 139
8.4 What to Measure and by Whom 141
8.4.1 Obtrusive and Unobtrusive Observation 142
8.4.2 Quantification Options 143
8.5 Baselines 143
Are Baselines Ethical? 143
8.6 Alternative Single-Case Designs 144
8.6.1 The AB Design 144
8.6.2 The ABAB Design 144
8.6.3 The Multiple-Baseline Design 146
8.6.4 Multiple-Component Designs 147
8.7 B Designs to Evaluate the Implementation of Evidence-Supported Interventions 148
8.8 Using Single-Case Designs as Part of the Evidence-Informed Practice Process 149
8.9 Aggregating Single-Case Design Outcomes to Evaluate an Agency 149
8.10 Chapter Main Points 150
8.11 Exercises 151
8.12 Additional Reading 152
9 Practical and Political Pitfalls in Outcome Evaluations 153
9.1 Introduction 154
9.2 Practical Pitfalls 154
9.2.1 Intervention Fidelity 155
9.2.2 Contamination of the Case Assignment Protocol 156
9.2.3 Recruiting Participants 157
9.2.4 Retaining Participants 158
9.3 Engage Agency Staff Meaningfully in Planning the Evaluation 160
9.4 Fostering Staff Compliance with the Evaluation Protocol 160
9.5 Political Pitfalls 163
9.5.1 In-House versus External Evaluators 164
9.6 Conclusion 169
9.7 Chapter Main Points 169
9.8 Exercises 171
9.9 Additional Reading 172
PART IV ANALYZING AND PRESENTING DATA 173
10 Analyzing and Presenting Data from Formative and Process Evaluations 175
10.1 Introduction 176
10.2 Quantitative and Qualitative Data Analyses: Distinctions and Compatibility 176
10.3 Descriptive Statistics 177
10.3.1 Frequency Distribution Tables and Charts 178
10.3.2 Central Tendency 180
10.3.3 Dispersion 181
10.3.4 The Influence of Outliers 181
10.4 Analyzing Qualitative Data 182
10.4.1 Coding 182
10.5 Chapter Main Points 183
10.6 Exercises 185
10.7 Additional Reading 185
11 Analyzing Data from Outcome Evaluations 186
11.1 Introduction 187
11.2 Inferential Statistics 187
11.2.1 P Values and Significance Levels 189
11.2.2 Type II Errors 190
11.3 Mistakes to Avoid When Interpreting Inferential Statistics 191
11.3.1 Overreliance on Statistical Significance 191
11.3.2 Disregarding Sample Size (Statistical Power) 191
11.3.3 Disregarding Effect Sizes 193
11.4 Calculating and Interpreting Effect Sizes 194
11.4.1 Within-Group Effect Sizes 195
11.4.2 Between-Group Effect Sizes 198
11.4.3 Why Divide by the Standard Deviation? 198
11.4.4 A Caution 199
11.4.5 Odds Ratios and Risk Ratios 200
Odds Ratios 200
Risk Ratios 200
11.5 Overlooking Substantive (Practical) Significance 201
11.6 Cost-Effectiveness and Cost-Benefit Analyses: Evaluating Efficiency 202
11.7 Qualitative Data Analysis 205
11.8 Chapter Main Points 206
11.9 Exercises 208
11.10 Additional Reading 209
12 Writing and Disseminating Evaluation Reports 210
12.1 Introduction 211
12.2 Tailor to Your Audience 211
12.3 Writing Style and Format 212
12.4 Involve Key Stakeholders 212
12.5 Ethical Issues 213
12.6 Report Components 213
12.6.1 Executive Summary 214
12.6.2 Introduction and Literature Review 216
12.6.3 Methodology 216
12.6.4 Results (Findings) 217
Infographics 217
12.6.5 Discussion 219
Discussing Negative Findings 221
What If Parts of the Evaluation Could Not Be Completed? 221
12.6.6 References 222
12.6.7 Appendices 223
12.7 Summary of Mistakes to Avoid 223
12.8 Dissemination 224
12.9 Chapter Main Points 225
12.10 Exercises 226
12.11 Additional Reading 227
Epilogue: More Tips for Becoming a Successful Evaluator 228
Planning the Evaluation 228
Levels of Stakeholder Participation 229
Obtain Feedback to a Written Draft of the Evaluation Protocol 229
During Implementation of the Evaluation 229
At the Conclusion of the Evaluation 230
People Skills 231
Show Genuine Interest in Others 231
Try to Be Humorous 231
Be Self-Assured 232
Show Genuine Empathy 232
Active Listening 232
References 234
Index 239

Allen Rubin has been teaching courses on program evaluation for over 40 years. He is the Kantambu Latting College Professor of Leadership and Change at the University of Houston's Graduate College of Social Work, past president of the Society for Social Work and Research, and a fellow in the American Academy of Social Work and Social Welfare.