E-book: RealWorld Evaluation: Working Under Budget, Time, Data, and Political Constraints

(Washington State University, WA), (Independent Consultant)
  • Format: EPUB+DRM
  • Publication date: 31-Jul-2019
  • Publisher: SAGE Publications Inc
  • Language: English
  • ISBN-13: 9781544318776
  • Price: 85,22 €*
  • * The price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You also need to create an Adobe ID (more information here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (this is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

RealWorld Evaluation, Third Edition by Michael Bamberger, Linda Mabry, and Malia Kahn addresses the challenges of conducting program evaluations in real-world contexts where evaluators and their clients face budget and time constraints and where critical data may be missing. The book is organized around a seven-step model developed by the authors, which has been tested and refined in workshops and in practice. Vignettes and case studies, representing evaluations from a variety of geographic regions and sectors, demonstrate adaptive possibilities for projects ranging from small initiatives with budgets of a few thousand dollars to large-scale, long-term evaluations of complex programs. The text incorporates quantitative, qualitative, and mixed-method designs, and this Third Edition reflects important developments in the field since the publication of the Second Edition, including new chapters on gender and feminist evaluation and on development evaluation in the age of big data.

Reviews

"This book moves the study of evaluation from the theoretical to the practical, so that evaluators can improve their work. It deals with most of the real issues that evaluators face, particularly at the international level." -- John Mathiason "This is one of the most practical textbooks in the field of evaluation that I have encountered. Its recognition of the limitations that affect program evaluation provides students with a realistic understanding of the difficulties in conducting evaluations and how to overcome these difficulties." -- David C. Powell "RealWorld Evaluation moves forward from where other evaluation textbooks stop. RWE challenges the evaluator to ask the difficult questions that can impact the design, implementation, and utilization of the evaluation. RWE then leads the reader through how to find efficient solutions to minimize these constraints." -- Karen McDonnell "RealWorld Evaluation is a must-read for students of program evaluation-the framework and emphasis on practical constraints makes it an invaluable tool for learning the art and science of public policy. -- Amanda Olejarski "This is an invaluable resource for both novice and experienced evaluators. It contains a variety of tools and recommendations to successfully design and implement effective evaluations for any size and type of program." -- Sebastian Galindo "Any research class focusing on real-world evaluation should start with this text; it is comprehensive, well-organized, well-written, and thoroughly practical." -- Jeffrey S. Savage

List of Boxes, Figures, and Tables xxiv
List of Appendices xxxi
Foreword xxxiii
Jim Rugh
Preface xxxv
Acknowledgments xlii
About the Authors xliv
Part I The Seven Steps Of The RealWorld Evaluation Approach
Chapter 1 Overview: RealWorld Evaluation and the Contexts in Which It Is Used
2(17)
1 Welcome to RealWorld Evaluation
2(6)
2 The RealWorld Evaluation Context
8(1)
3 The Four Types of Constraints Addressed by the RealWorld Approach
9(4)
3.1 Budget and Other Resource Constraints
9(2)
3.2 Time Constraints
11(1)
3.3 Data Constraints
11(1)
3.4 Political and Organizational Influences and Constraints
12(1)
4 Additional Organizational and Administrative Challenges
13(1)
5 The RealWorld Approach to Evaluation Challenges
14(2)
6 Who Uses RealWorld Evaluation, for What Purposes, and When?
16(2)
Summary
18(1)
Further Reading
18(1)
Chapter 2 First Clarify the Purpose: Scoping the Evaluation
19(20)
1 Stakeholder Expectations of Impact Evaluations
20(2)
2 Understanding Information Needs
22(3)
3 Developing the Program Theory Model
25(5)
3.1 Theory-Based Evaluation (TBE) as a Management Tool
30(1)
4 Identifying the Constraints to Be Addressed by RWE and Determining the Appropriate Evaluation Design
30(1)
5 Developing Designs Suitable for RealWorld Evaluation Conditions
31(6)
5.1 How the Availability of Data Affects the Choice of Evaluation Design
32(4)
5.2 Developing the Terms of Reference (Statement of Work) for the Evaluation
36(1)
Summary
37(1)
Further Reading
37(2)
Chapter 3 Not Enough Money: Addressing Budget Constraints
39(16)
1 Simplifying the Evaluation Design
39(7)
1.1 Simplifying the Design for Quantitative Evaluations
42(2)
1.2 Simplifying the Design for Qualitative Evaluations
44(2)
2 Clarifying Client Information Needs
46(1)
3 Using Existing Data
47(1)
4 Reducing Costs by Reducing Sample Size
47(3)
4.1 Adjusting the Sample Size Based on Client Information Needs and the Kinds of Decisions to Which the Evaluation Will Contribute
47(1)
4.2 Factors Affecting Sample Size for Quantitative Evaluations
48(2)
Effect of the Level of Disaggregation on the Required Sample Size
49(1)
4.3 Factors Affecting the Size of Qualitative Samples
50(1)
4.4 Factors Affecting the Size of Mixed-Method Samples
50(1)
5 Reducing Costs of Data Collection and Analysis
50(2)
6 Assessing the Feasibility and Utility of Using New Information Technology (NIT) to Reduce the Costs of Data Collection
52(1)
7 Threats to Validity of Budget Constraints
53(1)
Summary
53(1)
Further Reading
54(1)
Chapter 4 Not Enough Time: Addressing Scheduling and Other Time Constraints
55(17)
1 Similarities and Differences Between Time and Budget Constraints
55(4)
2 Simplifying the Evaluation Design
59(1)
3 Clarifying Client Information Needs and Deadlines
60(1)
4 Using Existing Documentary Data
61(1)
5 Reducing Sample Size
61(1)
6 Rapid Data-Collection Methods
61(6)
7 Reducing Time Pressure on Outside Consultants
67(1)
8 Hiring More Resource People
68(1)
9 Building Outcome Indicators Into Project Records
68(1)
10 New Information Technology for Data Collection and Analysis
69(1)
11 Common Threats to Adequacy and Validity Relating to Time Constraints
70(1)
Summary
70(1)
Further Reading
71(1)
Chapter 5 Critical Information Is Missing or Difficult to Collect: Addressing Data Constraints
72(22)
1 Data Issues Facing RealWorld Evaluators
72(4)
2 Reconstructing Baseline Data
76(9)
2.1 Strategies for Reconstructing Baseline Data
76(9)
When Should Baseline Data Be Collected?
76(1)
Using Administrative Data From the Project
77(2)
Cautionary Tales-and Healthy Skepticism
79(1)
Using Other Sources of Secondary Data
80(1)
Conducting Retrospective Surveys
81(2)
Working With Key Informants
83(1)
Using Participatory Evaluation Methods
83(1)
Using Geographical Information Systems (GIS) and Satellite Images to Reconstruct Baseline Data
84(1)
3 Special Issues Reconstructing Baseline Data for Project Populations and Comparison Groups
85(3)
3.1 Challenges Collecting Baseline Data on the Project Group
85(1)
3.2 Challenges Collecting Baseline Data on a Comparison Group
86(1)
3.3 The Challenge of Omitted Variables ("Unobservables")
86(2)
4 Collecting Data on Sensitive Topics or From Difficult-to-Reach Groups
88(4)
4.1 Addressing Sensitive Topics
88(1)
4.2 Studying Difficult-to-Reach Groups
88(4)
5 Common Threats to Adequacy and Validity of an Evaluation Relating to Data Constraints
92(1)
Summary
92(1)
Further Reading
93(1)
Chapter 6 Political Constraints
94(11)
1 Values, Ethics, and Politics
94(1)
2 Societal Politics and Evaluation
95(2)
3 Stakeholder Politics
97(1)
4 Professional Politics
98(1)
4.1 Teaming
98(1)
5 Political Issues in the Design Phase
99(1)
5.1 Hidden Agendas and Pseudo-Evaluation
99(1)
5.2 Stakeholder Differences
100(1)
6 Political Issues in the Conduct of an Evaluation
100(1)
6.1 Shifting Evaluator Roles
100(1)
6.2 Data Access
101(1)
7 Political Issues in Evaluation Reporting and Use
101(1)
7.1 Bias in Reporting
102(1)
7.2 Use and Misuse of Findings
102(1)
8 Advocacy
102(2)
8.1 Activism
103(1)
8.2 Strategizing for Use
103(1)
Summary
104(1)
Further Reading
104(1)
Chapter 7 Strengthening the Evaluation Design and the Validity of the Conclusions
105(22)
1 Validity in Evaluation
105(2)
2 Factors Affecting Adequacy and Validity
107(1)
3 A Framework for Assessing the Validity and Adequacy of QUANT, QUAL, and Mixed-Method Designs
108(4)
3.1 The Categories of Validity (Adequacy, Trustworthiness)
109(3)
4 Assessing and Addressing Threats to Validity for Quantitative Impact Evaluations
112(6)
4.1 A Threats-to-Validity Worksheet for QUANT Evaluations
112(2)
4.2 Strengthening Validity in Quantitative Evaluations by Strengthening the Evaluation Design
114(2)
Random Sampling
114(1)
Triangulation
115(1)
Selection of Statistical Procedures
116(1)
Peer Review and Meta-Evaluation
116(1)
4.3 Taking Corrective Actions When Threats to Validity Have Been Identified
116(2)
5 Assessing Adequacy and Validity for Qualitative Impact Evaluations
118(4)
5.1 Strengthening Validity in Qualitative Evaluations by Strengthening the Evaluation Design
120(1)
Purposeful (Purposive) Sampling
120(1)
Triangulation
120(1)
Validation
121(1)
Meta-Evaluation and Peer Review
121(1)
5.2 Addressing Threats to Validity in Qualitative Evaluations
121(2)
Collecting Data Across the Full Range of Appropriate Settings, Times, and Respondents
121(1)
Inappropriate Subject Selection
122(1)
Insufficient Language or Cultural Skills to Ensure Sensitivity to Informants
122(1)
Insufficient Opportunity for Ongoing Analysis by the Team
122(1)
Minimizing Observer Effects
122(1)
6 Assessing Validity for Mixed-Method (MM) Evaluations
122(1)
7 Using the Threats-to-Validity Worksheets
123(2)
7.1 Points During the RWE Cycle at Which Corrective Measures Can Be Taken
124(6)
Strengthening the Evaluation Design
125(1)
Summary
125(1)
Further Reading
126(1)
Chapter 8 Making It Useful: Helping Clients and Other Stakeholders Utilize the Evaluation
127(17)
1 What Do We Mean by Influential Evaluations and Useful Evaluations?
127(3)
2 The Underutilization of Evaluation Studies
130(2)
2.1 Why Are Evaluation Findings Underutilized?
131(1)
2.2 The Challenges of Utilization for RWE
131(1)
3 Strategies for Promoting the Utilization of Evaluation Findings and Recommendations
132(9)
3.1 The Importance of the Scoping Phase
133(3)
Understand the Client's Information Needs
133(2)
Understand the Dynamics of the Decision-Making Process and the Timing of the Different Steps
135(1)
Define the Program Theory on Which the Program Is Based in Close Collaboration With Key Stakeholders
135(1)
Identify Budget, Time, and Data Constraints and Prioritize Their Importance and the Client's Flexibility to Adjust Budget or Time If Required to Improve the Quality of the Evaluation
136(1)
Understand the Political Context
136(1)
Prepare a Set of RWE Design Options to Address the Constraints and to Strategize With the Client to Assess Which Option Is Most Acceptable
136(1)
3.2 Formative Evaluation Strategies
136(2)
3.3 Communication With Clients Throughout the Evaluation
138(1)
3.4 Evaluation Capacity Building
138(1)
3.5 Strategies for Overcoming Political and Bureaucratic Challenges
138(1)
3.6 Communicating Findings
139(2)
3.7 Developing a Follow-Up Action Plan
141(1)
Summary
141(1)
Further Reading
141(3)
Part II A Review Of Evaluation Methods And Approaches And Their Application In RealWorld Evaluation: For Those Who Would Like To Dig Deeper
Chapter 9 Standards and Ethics
144(9)
1 Standards of Competence
144(1)
2 Professional Standards
144(2)
2.1 Evaluation Standards
145(1)
2.2 Guiding Principles
146(1)
3 Ethical Codes of Conduct
146(3)
3.1 International Standards of Ethics
147(1)
3.2 National Standards of Ethics
147(1)
3.3 Ethical Frameworks in Evaluation
148(1)
4 Issues
149(2)
4.1 Advocacy
149(1)
4.2 Accountability
149(1)
4.3 International Regulation
150(1)
Summary
151(1)
Further Reading
151(2)
Chapter 10 Theory-Based Evaluation and Theory of Change
153(34)
1 Theory-Based Evaluation (TBE) and Theory of Change (TOC)
153(7)
1.1 Program Theory and Theory-Based Evaluation
153(3)
Stage 1: Articulation of the Program Theory Model
154(1)
Stage 2: The Results Framework
154(1)
Stage 3: The Logical Framework
154(2)
Stages 4a and 4b: Impact and Implementation Models
156(1)
Applications of Program Theory
156(1)
1.2 Theory of Change
156(4)
Benefits of the TOC for Program Evaluation
158(2)
2 Applications of Program Theory in Program Evaluation
160(5)
2.1 The Increasing Use of Program Theory in Evaluation
160(2)
2.2 Examples of TBE Tools and Techniques That Can Be Applied in Policy, Program, and Project Evaluations and at Different Stages of the Program Cycle
162(2)
2.3 Realist Evaluation
164(1)
3 Using TOC in Program Evaluation
165(3)
3.1 Conceptualizing the Change Process
165(1)
3.2 Identifying and Testing Hypotheses About the Processes of Implementation and Change
165(1)
3.3 Identifying Unintended Outcomes
165(1)
3.4 Addressing Complexity and Emergence
166(1)
Addressing Emergence
167(1)
3.5 Challenges Affecting the Use of TOCs
167(1)
4 Designing a Theory of Change Evaluation Framework
168(7)
4.1 The Different Ways That a Theory of Change Can Be Used
168(1)
4.2 The Purpose of the Theory of Change
169(1)
4.3 Representing the Theory of Change
170(2)
An Example of a Theory of Change
172(1)
4.4 Designing the TOC and the Sources of Data
172(3)
5 Integrating a Theory of Change Into the Program Management, Monitoring, and Evaluation Cycle
175(4)
5.1 Articulating the Program Theory
175(2)
5.2 Process Tracing
177(1)
5.3 Results-Based Reporting and Logical Frameworks
177(2)
5.4 Program Impact and Implementation Models
179(1)
6 Program Theory Evaluation and Causality
179(5)
6.1 The Debate on the Value of Program Theory to Explain Causality
179(2)
6.2 Using Mixed-Method Program Theory Designs to Explain Causality
181(3)
Summary
184(1)
Further Reading
184(3)
Chapter 11 Evaluation Designs: The RWE Strategy for Selecting the Appropriate Evaluation Design to Respond to the Purpose and Context of Each Evaluation
187(31)
1 Different Approaches to the Classification of Evaluation Designs
187(3)
2 Assessing Causality Attribution and Contribution
190(2)
2.1 Attribution Analysis and Contribution Analysis
190(1)
2.2 An Introduction to Contribution Analysis and Outcome Harvesting
190(2)
3 The RWE Approach to the Selection of the Appropriate Impact Evaluation Design
192(16)
3.1 Design Step 1
193(4)
Using the Evaluation Purpose and Context Checklist
193(4)
3.2 Design Step 2
197(4)
Analysis of the Evaluation Design Framework
197(4)
3.3 Design Step 3
201(3)
Identify a Short List of Potential Evaluation Designs
201(3)
3.4 Design Step 4
204(1)
Take Into Consideration the Preferred Methodological Approaches of Stakeholders and Evaluators
204(1)
3.5 Design Step 5
205(1)
Strategies for Strengthening the Basic Evaluation Designs
205(1)
3.6 Design Step 6
205(1)
Evaluability Analysis to Assess the Technical, Resource, and Political Feasibility of Each Design
205(1)
3.7 Design Step 7
205(2)
Preparation of Short List of Evaluation Design Options for Discussion With Clients and Other Stakeholders
205(2)
3.8 Design Step 8
207(1)
Agreement on the Final Evaluation Design
207(1)
4 Tools and Techniques for Strengthening the Basic Evaluation Designs
208(2)
4.1 Basing the Evaluation on a Theory of Change and a Program Theory Model
208(1)
4.2 Process Analysis
209(1)
4.3 Incorporating Contextual Analysis
209(1)
4.4 Complementing Quantitative Data Collection and Analysis With Mixed-Method Designs
210(1)
4.5 Making Full Use of Available Secondary Data
210(1)
4.6 Triangulation: Using Two or More Independent Estimates for Key Indicators and Using Data Sources and Analytical Methods to Explain Findings
210(1)
5 Selecting the Best Design for RealWorld Evaluation Scenarios
210(5)
5.1 Factors Affecting the Choice of the Appropriate Design for a Particular Evaluation
210(3)
5.2 Challenges Facing the Use of Experimental and Other Statistical Designs in RealWorld Evaluation Contexts
213(1)
5.3 Selecting the Appropriate Designs for RealWorld Evaluation Scenarios
213(2)
Summary
215(1)
Further Reading
216(2)
Chapter 12 Quantitative Evaluation Methods
218(25)
1 Quantitative Evaluation Methodologies
218(1)
1.1 The Importance of Program Theory in the Design and Analysis of QUANT Evaluations
218(1)
1.2 Quantitative Sampling
219(1)
2 Experimental and Quasi-Experimental Designs
219(7)
2.1 Randomized Control Trials (RCTs)
219(4)
2.2 Quasi-Experimental Designs (QEDs)
223(3)
3 Strengths and Weaknesses of Quantitative Evaluation Methodologies
226(1)
4 Applications of Quantitative Methodologies in Program Evaluation
226(3)
4.1 Analysis of Population Characteristics
226(1)
4.2 Hypothesis Testing and the Analysis of Causality
226(2)
4.3 Cost-Benefit Analysis and the Economic Rate of Return (ERR)
228(1)
4.4 Cost-Effectiveness Analysis
228(1)
5 Quantitative Methods for Data Collection
229(6)
5.1 Questionnaires
229(1)
Types of Questions Used in Quantitative Surveys
230(1)
5.2 Interviewing
230(1)
5.3 Observation
230(1)
Observational Protocols
230(1)
Unobtrusive Measures in Observation
231(1)
5.4 Focus Groups
231(1)
5.5 Self-Reporting Methods
232(1)
5.6 Knowledge and Achievement Tests
232(1)
5.7 Anthropometric and Other Physiological Health Status Measures
232(1)
5.8 Using Secondary Data
233(2)
Common Problems With Secondary Data for Evaluation Purposes
234(1)
6 The Management of Data Collection for Quantitative Studies
235(4)
6.1 Survey Planning and Design
235(1)
6.2 Implementation and Management of Data Collection
236(7)
Real World Constraints on the Management of Data Collection
237(2)
7 Data Analysis
239(1)
Summary
240(1)
Further Reading
240(3)
Chapter 13 Qualitative Evaluation Methods
243(19)
1 Design
243(2)
1.1 Flexibility
244(1)
1.2 Contextuality
245(1)
1.3 Sensitivity
245(1)
2 Data Collection
245(11)
2.1 Observation
246(4)
Collecting Observation Data
246(1)
Structure in Observations
246(1)
Participant Observations
247(3)
Writing Up Observation Data
250(1)
2.2 Interview
250(3)
Collecting Interview Data
250(1)
Structure in Interviews
251(1)
Group Interviews
251(1)
Hybrid Interview Methods
252(1)
Writing Up Interview Data
253(1)
2.3 Analysis of Documents and Artifacts
253(1)
2.4 Technology in Data Collection
254(1)
2.5 Triangulation and Validation
254(2)
Triangulation
255(1)
Validation
255(1)
3 Data Analysis
256(3)
3.1 Inductive Analysis
256(1)
Constant-Comparative Method
256(1)
3.2 Thematic Analysis
256(1)
Electronic Data Analysis
256(1)
3.3 Holistic Analysis
257(1)
Data Complexity
257(1)
Criteriality
257(1)
3.4 Validity
257(2)
Disciplining Subjective Judgment
258(1)
Theoretical Triangulation
258(1)
Peer Review and Meta-Evaluation
258(1)
3.5 Generalizability
259(1)
4 Reporting
259(1)
5 Real-World Constraints
260(1)
Summary
260(1)
Further Reading
261(1)
Chapter 14 Mixed-Method Evaluation
262(27)
1 The Mixed-Method Approach
262(1)
2 Rationale for Mixed-Method Approaches
263(6)
2.1 Why Use Mixed Methods?
263(4)
2.2 Areas Where Mixed Methods Can Potentially Strengthen Evaluations
267(2)
Improving Construct Validity and Data Quality
267(1)
Evaluating Complex Programs
268(1)
Strengthening Big Data-Based Evaluations
268(1)
Identifying Unintended Outcomes (UOs) of Development Programs
269(1)
3 Approaches to the Use of Mixed Methods
269(7)
3.1 Applying Mixed Methods When the Dominant Design Is Quantitative or Qualitative
273(2)
3.2 Using Mixed Methods When Working Under Budget, Time, and Data Constraints
275(1)
4 Mixed-Method Strategies
276(5)
4.1 Sequential Mixed-Method Designs
276(2)
4.2 Concurrent Designs
278(3)
Concurrent Triangulation Design
279(1)
Concurrent Nested Design
280(1)
4.3 Using Mixed Methods at Different Stages of the Evaluation
281(1)
5 Implementing a Mixed-Method Design
281(3)
Composition of the Research Team
283(1)
Using Integrated Approaches at Different Stages of the Evaluation
283(1)
6 Using Mixed Methods to Tell a More Compelling Story of What a Program Has Achieved
284(2)
7 Case Studies Illustrating the Use of Mixed Methods
286(1)
Summary
286(1)
Further Reading
287(2)
Chapter 15 Sampling Strategies for RealWorld Evaluation
289(26)
1 The Importance of Sampling for RealWorld Evaluation
290(1)
2 Purposive Sampling
291(5)
2.1 Purposive Sampling Strategies
292(2)
2.2 Purposive Sampling for Different Types of Qualitative Data Collection and Use
294(1)
Sampling for Data Collection
294(1)
Sampling Data for Qualitative Reporting
295(1)
2.3 Considerations in Planning Purposive Sampling
295(1)
3 Probability (Random) Sampling
296(9)
3.1 Key Questions in Designing a Random Sample for Program Evaluation
296(3)
3.2 Selection Procedures in Probability (Random) Sampling
299(1)
3.3 Sample Design Decisions at Different Stages of the Survey
300(3)
Presampling Questions
300(1)
Questions and Choices During the Sample Design Process
301(1)
Postsampling Questions and Choices
302(1)
3.4 Sources of Error in Probabilistic Sample Design
303(2)
Nonsampling Bias
303(1)
Sampling Bias
304(1)
4 Using Power Analysis and Effect Size for Estimating the Appropriate Sample Size for an Impact Evaluation
305(4)
4.1 The Importance of Power Analysis for Determining Sample Size for Probability Sampling
305(1)
4.2 Defining Effect Size
306(1)
4.3 Type I and Type II Errors
306(1)
4.4 Defining the Power of the Test
307(1)
4.5 An Example Illustrating the Relationship Between the Power of the Test, the Effect Size, and the Required Sample Size
307(2)
5 The Contribution of Meta-Analysis
309(1)
6 Sampling Issues for Mixed-Method Evaluations
309(3)
6.1 Model 1: Using Mixed Methods to Strengthen a Mainly Quantitative Evaluation Design
309(2)
6.2 Model 2: Using a Mixed-Method Design to Strengthen a Qualitative Evaluation Design
311(1)
6.3 Model 3: Using an Integrated Mixed-Method Design
311(1)
7 Sampling Issues for RealWorld Evaluation
312(1)
7.1 Lot Quality Acceptance Sampling (LQAS): An Example of a Sampling Strategy Designed to Be Economical and Simple to Administer and Interpret
312(1)
Summary
313(1)
Further Reading
314(1)
Chapter 16 Evaluating Complex Projects, Programs, and Policies
315(38)
1 The Move Toward Complex, Country-Level Development Programming
315(2)
2 Defining Complexity in Development Programs and Evaluations
317(14)
2.1 The Dimensions of Complexity
317(4)
Dimension 1: The Nature of the Intervention
317(2)
Dimension 2: Institutions and Stakeholders
319(1)
Dimension 3: The Context (System) Within Which the Program Is Implemented
319(1)
Dimension 4: Causality and Change
320(1)
The Complexity of the Evaluation
320(1)
2.2 Simple Projects, Complicated Programs, and Complex Development Interventions
321(5)
2.3 The Main Types of Complex Interventions
326(1)
2.4 Assessing Levels of Complexity
327(3)
How to Use the Checklist
330(1)
2.5 Special Challenges for the Evaluation of Complex, Country-Level Programs
330(1)
3 A Framework for the Evaluation of Complex Development Programs
331(20)
3.1 Overview of the Complexity-Responsive Evaluation Framework
331(2)
3.2 Drawing on Big Data Science
333(1)
3.3 Step 1: Mapping the Dimensions of Complexity
333(5)
3.4 Step 2: Choosing a Unit of Analysis for Unpacking Complex Interventions Into Evaluable Components
338(2)
Option 1: Implementation Components, Phases, and Themes
338(1)
Option 2: Program Theories
339(1)
Option 3: Cases
339(1)
Option 4: Variables
340(1)
3.5 Step 3: Choosing an Evaluation Design for Evaluating Each Unpacked Component
340(5)
Attribution and the Challenge of Defining the Counterfactual for Complex Evaluations
340(1)
Choosing the Best Evaluation Design for Evaluating the Unpacked Program Components
340(2)
Operational Unpacking of Components of Complex Interventions
342(1)
Program Theory and Theory Reconstruction
343(1)
Case Studies and Rich Description
344(1)
Variable-Based Approaches to Unpacking
344(1)
3.6 Step 4: Choosing an Approach to Reassembling the Various Parts Into a Big Picture
345(5)
Systems Modeling
345(1)
Descriptive and Inferential Statistical Analysis
346(1)
Comparative Case Study Approaches
346(1)
Portfolio Analysis
347(1)
Review and Synthesis Approaches
347(1)
Rating Scales
348(2)
3.7 Step 5: Assessing Program Contribution to National and International Development Goals-Going Back to the Big Picture
350(1)
Summary
351(1)
Further Reading
352(1)
Chapter 17 Gender Evaluation: Integrating Gender Analysis Into Evaluations
353(29)
1 Why a Gender Focus Is Critical
354(3)
1.1 Why So Few Evaluations Have a Gender Focus
355(1)
Political Constraints
355(1)
Skill and Resource Constraints
355(1)
Methodological Constraints
356(1)
1.2 The Value of a Gender Evaluation
356(1)
2 Gender Issues in Evaluations
357(9)
Gender Does Not Just Focus on Women
357(1)
Intersectionality
357(1)
Multisectoral Analysis
357(1)
Complexity and a Longitudinal Perspective
358(1)
Gender Approaches Are Normative
358(1)
Bodily Integrity and Sexuality
359(1)
Systems of Social Control and Gender
359(1)
Defining Boundaries for the Evaluation
360(1)
2.1 How to "Gender" an Evaluation
361(2)
Using a Gender Analysis Framework
361(2)
Using a Feminist Evaluation Approach
363(1)
2.2 A Continuum of Gendered Evaluation
363(3)
Level 1: Sex Disaggregation of a Set of Basic Indicators
364(1)
Level 2: Analysis of Factors Affecting Women's Participation in Development
365(1)
Level 3: Analysis of Household, Community, and Social Dynamics
365(1)
Level 4: Comprehensive Feminist Analysis for Empowerment
366(1)
3 Designing a Gender Evaluation
366(7)
3.1 Design Criteria
366(1)
Evaluation Criteria-Adapting the OECD/DAC Framework
366(1)
3.2 Selecting Projects for a Gendered Evaluation
367(2)
Institutional Intervention Points
368(1)
Portfolio Analysis and Meta-Analysis
368(1)
Gender Flags and Checklists
368(1)
3.3 Defining the Depth and Scope
369(2)
Depth of the Analysis
369(1)
Boundaries of the Evaluation
369(1)
Time Horizons
370(1)
3.4 Defining Evaluation Questions
371(1)
The Importance of Broad-Based Stakeholder Consultations
372(1)
3.5 Evaluability Assessment
372(1)
4 Gender Evaluations With Different Scopes
373(3)
4.1 Complex Development Programs Evaluation
373(1)
4.2 Single-Country Evaluations
373(1)
4.3 Multicountry Evaluations
374(1)
4.4 Country Gender M&E Framework
374(1)
4.5 Gender in Sustainable Development Goals
374(2)
5 The Tools of Gender Evaluation
376(2)
5.1 Advanced Designs for Gender Evaluations
377(1)
5.2 Attribution and Contribution Analysis
378(1)
Summary
378(1)
Further Reading
379(3)
Chapter 18 Evaluation in the Age of Big Data
382(48)
1 Introducing Big Data and Data Science
383(13)
1.1 Increasing Application of Big Data in Personal and Public Life
383(1)
1.2 Defining Big Data, Data Science, and New Information Technology (NIT)
384(8)
Big Data Is Embedded in New Information Technology
384(3)
Defining Big Data
387(5)
1.3 The Data Continuum
392(1)
1.4 The Position of Evaluation in the Big Data Ecosystem
393(5)
The Big Data Ecosystem
393(1)
The Evaluation Ecosystem
394(2)
2 Increasing Application of Big Data in the Development Context
396(2)
3 The Tools of Data Science
398(13)
3.1 The Stages of the Data Analytics Cycle
398(6)
Step 1: Descriptive and Exploratory Analysis: Documenting What Is Happening, Often in Real Time
399(2)
Step 2: Predictive Analysis: What Is Likely to Happen
401(2)
Step 3: Detection: Tracking Who Is Likely to Succeed and Who Will Fail
403(1)
Step 4: Prescription: Evaluating How Outcomes Were Achieved and Providing Recommendations on How to Improve Program Performance
403(1)
3.2 The Tools of Data Analytics
404(4)
3.3 The Limitations of Data Science
408(3)
Being Aware of, and Addressing, Methodological Challenges
408(1)
Political and Organizational Challenges
408(1)
Privacy and Security
409(1)
New Ethical Challenges
409(1)
The Dark Side of Big Data
410(1)
4 Potential Applications of Data Science in Development Evaluation
411(6)
4.1 Challenges Facing Current Evaluation Approaches
411(2)
Data Analysis Challenges
412(1)
4.2 How Data Science Can Strengthen Evaluation Practice
413(4)
5 Building Bridges Between Data Science and Evaluation
417(8)
5.1 The Potential Benefits of Convergence of Data Science and Evaluation
417(2)
5.2 The Need for Caution When Assessing the Benefits of Big Data and Data Analytics for Development Evaluation
419(1)
5.3 Challenges
420(2)
5.4 Skills Required for Present and Future Evaluators
422(2)
Understanding the Big Debates Around the Evaluation of Future Development Programs
422(1)
New Skills Required for Evaluation Offices and Evaluators and for Data Scientists
423(1)
Skills Development for Evaluators
423(1)
Evaluation Skills for Data Scientists
424(1)
5.5 Necessary Conditions for Convergence to Occur
424(1)
To What Extent Do These Conditions Exist in Both Industrial and Developing Countries?
425(1)
5.6 Possible Collaborative Bridge-Building Initiatives
425(1)
Summary
425(1)
Further Reading
426(4)
Part III Managing Evaluations
Chapter 19 Managing Evaluations
430(29)
1 Organizational and Political Issues Affecting the Design, Implementation, and Use of Evaluations
430(1)
2 Planning and Managing the Evaluation
431(20)
2.1 Step 1: Preparing the Evaluation
431(5)
Step 1-A: Defining the Evaluation Framework or the Scope of Work (SoW)
431(2)
Step 1-B: Linking the Evaluation to the Project Results Framework
433(1)
Step 1-C: Involving Stakeholders
433(1)
Step 1-D: Commissioning Diagnostic Studies
433(2)
Step 1-E: Defining the Management Structure for the Evaluation
435(1)
2.2 Step 2: Recruiting the Evaluators
436(3)
Step 2-A: Recruiting the Internal Evaluation Team
436(1)
Step 2-B: Different Ways to Contract External Evaluation Consultants
436(1)
Step 2-C: Preparing the Request for Proposals (RFP)
437(1)
Step 2-D: Preparing the Terms of Reference (ToR)
438(1)
Step 2-E: Selecting the Consultants
439(1)
2.3 Step 3: Designing the Evaluation
439(4)
Step 3-A: Formulating Evaluation Questions
439(2)
Step 3-B: Assessing the Evaluation Scenario
441(1)
Step 3-C: Selecting the Appropriate Evaluation Design
441(1)
Step 3-D: Commissioning an Evaluability Assessment
441(1)
Step 3-E: Designing "Evaluation-Ready" Programs
442(1)
2.4 Step 4: Implementing the Evaluation
443(6)
Step 4-A: The Role of the Evaluation Department of a Funding Agency, Central Government Ministry, or Sector Agency
443(1)
Step 4-B: The Inception Report
444(1)
Step 4-C: Managing the Evaluation
444(1)
Step 4-D: Working With Stakeholders
445(1)
Step 4-E: Quality Assurance (QA)
446(1)
Step 4-F: Management Challenges With Different Kinds of Evaluation
447(2)
2.5 Step 5: Reporting and Dissemination
449(1)
Step 5-A: Providing Feedback on the Draft Report
449(1)
Step 5-B: Disseminating the Evaluation Report
450(1)
2.6 Step 6: Ensuring Implementation of the Recommendations
450(1)
Step 6-A: Coordinating the Management Response and Follow-Up
450(1)
Step 6-B: Facilitating Dialogue With Partners
451(1)
3 Institutionalizing Impact Evaluation Systems at the Country and Sector Levels
451(3)
3.1 Institutionalizing Impact Evaluation
451(2)
3.2 Integrating IE Into Sector and/or National M&E and Other Data-Collection Systems
453(6)
Creating Demand for IE
454(1)
4 Evaluating Capacity Development
454(1)
Summary
455(2)
Further Reading
457(2)
Chapter 20 The Road Ahead
459(15)
1 Conclusions
459(11)
1.1 The Challenge of Assessing Impacts in a World in Which Many Evaluations Have a Short-Term Focus
459(1)
1.2 The Continuing Debate on the "Best" Evaluation Methodologies
459(2)
1.3 Selecting the Appropriate Evaluation Design
461(2)
1.4 Mixed Methods: The Approach of Choice for Most RealWorld Evaluations
463(1)
1.5 How Does RealWorld Evaluation Fit Into the Picture?
463(1)
1.6 Quality Assurance
463(1)
1.7 Need for a Strong Focus on Gender Equality and Social Equity
464(1)
1.8 Basing the Evaluation Design on a Program Theory Model
465(1)
1.9 The Importance of Context
465(1)
1.10 The Importance of Process
466(1)
1.11 Dealing With Complexity in Development Evaluation
466(1)
1.12 Emergence
467(1)
1.13 Integrating the New Information Technologies Into Evaluation
468(1)
1.14 Greater Attention Must Be Given to the Management of Evaluations
468(1)
1.15 The Challenge of Institutionalization of Evaluation
469(1)
1.16 The Importance of Competent Professional and Ethical Practice
470(1)
2 Recommendations
470(4)
2.1 Developing Standardized Methodologies for the Evaluation of Complex Programs
470(1)
2.2 Creative Approaches for the Definition and Use of Counterfactuals
471(1)
2.3 Strengthening Quality Assurance and Threats to Validity Analysis
471(1)
2.4 Defining Minimum Acceptable Quality Standards for Conducting Evaluations Under Constraints
471(1)
2.5 Further Refinements to Program Theory
472(1)
2.6 Further Refinements to Mixed-Method Designs
472(1)
2.7 Integrating Big Data and Data Science Into Program Evaluation
472(1)
2.8 Further Work Is Required to Strengthen the Integration of a Gender-Responsive Approach Into Evaluation Programs
473(1)
2.9 The Road Ahead
473(1)
Glossary of Terms and Acronyms 474(13)
References 487(17)
Author Index 504(6)
Subject Index 510
Michael Bamberger has been involved in development evaluation for fifty years. Beginning in Latin America, where he worked in urban community development and evaluation for over a decade, he became interested in the coping strategies of low-income communities, how they were affected by development efforts, and how they influenced them. Most evaluation research fails to capture these survival strategies, frequently underestimating the resilience of these communities, particularly women and female-headed households. During 20 years with the World Bank he worked as monitoring and evaluation advisor for the Urban Development Department, evaluation training coordinator with the Economic Development Department, and Senior Sociologist in the Gender and Development Department. Since retiring from the Bank in 2001 he has worked as a development evaluation consultant with more than 10 UN agencies as well as development banks, bilateral development agencies, NGOs, and foundations. Since 2001 he has been on the faculty of the International Program for Development Evaluation Training (IPDET). Recent publications include: (with Jim Rugh and Linda Mabry) RealWorld Evaluation: Working under budget, time, data and political constraints (2012, second edition); (with Marco Segone) How to design and manage equity focused evaluations (2011); Engendering Monitoring and Evaluation (2013); (with Linda Raftree) Emerging opportunities: Monitoring and evaluation in a tech-enabled world (2014); and (with Marco Segone and Shravanti Reddy) How to integrate gender equality and social equity in national evaluation policies and systems (2014).

Linda Mabry is a faculty member at Washington State University specializing in program evaluation, student assessment, and research and evaluation methodology. She currently serves as president of the Oregon Program Evaluation Network and on the editorial board of Studies in Educational Evaluation. She has served in a variety of leadership positions for the American Evaluation Association, including the Board of Directors, chair of the Task Force on Educational Accountability, and chair of the Theories of Evaluation topical interest group. She has also served on the Board of Trustees for the National Center for the Improvement of Educational Assessments and on the Performance Assessment Review Board of New York. She has conducted evaluations for the U.S. Department of Education, the National Science Foundation, the National Endowment for the Arts, the Jacob Javits Foundation, Hewlett-Packard Corporation, Ameritech Corporation, ATT-Comcast Corporation, the New York City Fund for Public Education, the Chicago Arts Partnerships in Education, the Chicago Teachers Academy of Mathematics and Science, and a variety of university, state, and school agencies. She has published in a number of scholarly journals and written several books, including Evaluation and the Postmodern Dilemma (1997) and Portfolios Plus: A Critical Guide to Performance Assessment (1999).