
E-book: Evaluation Fundamentals: Insights into Program Effectiveness, Quality, and Value

  • Format: EPUB+DRM
  • Publication date: 23-Jul-2024
  • Publisher: SAGE Publications Inc
  • Language: eng
  • ISBN-13: 9781483355221
  • Price: 70,40 €*
  • * The price is final; no further discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not permitted

  • Printing:

    not permitted

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means you must install special software to read it. You must also create an Adobe ID. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (a free application designed specifically for reading e-books; not to be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

This guide focuses on the fundamentals of evaluating a program's effectiveness, quality, and value in health, nursing, medicine, psychology, criminal justice, social work, and education fields. It covers evaluation purposes, uses, methods, frameworks, and models; evaluation questions and evidence of merit; design; sampling; collecting data and reviewing the literature; evaluation measures; managing and analyzing data; and reports. This edition has new examples; an expanded definition of evaluation to reflect recent thinking about quality, safety, equity, accessibility, and value; discussion of how to design and report on evaluations that provide information on improvement and effectiveness and program quality and value; two new frameworks; guidelines for searching for and evaluating the quality of the literature in online libraries; standards for selecting the best available evidence; a guide to comparative effectiveness program evaluation and its similarities and differences from others; a new section on ethics for online evaluations and surveys; an expanded section on qualitative methods; discussion of mixed methods research; a guide to presenting results in poster form; and discussion and the rationale for using standardized reporting checklists, such as CONSORT (CONsolidated Standards of Reporting Trials) and TREND (Transparent Reporting of Evaluations with Nonrandomized Designs). Annotation ©2014 Ringgold, Inc., Portland, OR (protoview.com)

The book teaches the basic concepts and vocabulary necessary to do program evaluations and to review the quality of evaluation research so as to make informed decisions about methods and outcomes.

Reviews

"The text is extremely well organized. . . . The concepts are easy to follow and the explanations are excellent." -- Sharon K. Drake, Iowa State University

"Dr. Fink did an outstanding job of introducing concepts and practice of program evaluation in a way that anyone can understand. Those who are interested in program evaluation will enjoy this book regardless of their field of study." -- Young Ik Cho, University of Wisconsin, Milwaukee

"The key strength of this book is the introduction of the key concepts, terms, and considerations that are needed for an evaluation study. These introductions are concise, understandable, and in logical order." -- Richard C. Maurer, University of Kentucky

Preface xiii
What Is New in the Third Edition xiv
About the Author xvi
1 Program Evaluation: A Prelude 3(36)
A Reader's Guide to Chapter 1
3(1)
What Is Program Evaluation?
4(3)
The Program or Intervention
4(1)
Program Objectives and Outcomes
5(1)
Program Characteristics
5(1)
Program Impact
6(1)
Program Costs
6(1)
Program Quality
6(1)
Program Value
7(1)
Evaluation Methods
7(8)
Evaluation Questions and Hypotheses
8(1)
Evidence of Merit: Effectiveness, Quality, Value
9(1)
Designing the Evaluation
10(1)
Selecting Participants for the Evaluation
11(1)
Collecting Data on Program Merit
11(1)
Managing Data So That It Can Be Analyzed
12(1)
Analyzing Data to Decide on Program Merit
12(1)
Reporting on Effectiveness, Quality, and Value
12(3)
Who Uses Evaluations?
15(1)
Baseline Data, Formative Evaluation, and Process Evaluation
15(4)
Baseline Data
15(1)
Interim Data and Formative Evaluation
16(2)
Process or Implementation Evaluation
18(1)
Summative Evaluation
19(1)
Qualitative Evaluation
19(1)
Mixed-Methods Evaluation
20(2)
Participatory and Community-Based Evaluation
22(2)
Evaluation Frameworks and Models
24(6)
The PRECEDE-PROCEED Framework
24(2)
RE-AIM
26(1)
The Centers for Disease Control's Framework for Planning and Implementing Practical Program Evaluation
26(1)
Logic Models
27(2)
Right-to-Left Logic Model
29(1)
Left-to-Right Logic Model
30(1)
Evaluation Reports Online
30(3)
Summary and Transition to the Next Chapter on Evaluation Questions and Evidence of Program Merit
33(1)
Exercises
34(2)
References and Suggested Readings
36(1)
Suggested Websites
36(3)
2 Evaluation Questions and Evidence of Merit 39(28)
A Reader's Guide to Chapter 2
39(1)
Evaluation Questions and Hypotheses
40(7)
Evaluation Questions: Program Goals and Objectives
40(3)
Evaluation Questions: Participants
43(1)
Evaluation Questions: Program Characteristics
44(1)
Evaluation Questions: Financial Costs
44(2)
Evaluation Questions: The Program's Environment
46(1)
Evidence of Merit
47(11)
Sources of Evidence
49(1)
Evidence by Comparison
49(3)
Evidence From Expert Consultation: Professionals, Consumers, Community Groups
52(3)
Evidence From Existing Data and Large Databases ("Big Data")
55(2)
Evidence From the Research Literature
57(1)
When to Decide on Evidence
58(2)
Program Evaluation and Economics
60(1)
The QEV Report: Questions, Evidence, Variables
61(2)
Summary and Transition to the Next Chapter on Designing Program Evaluations
63(1)
Exercises
64(1)
References and Suggested Readings
64(3)
3 Designing Program Evaluations 67(34)
A Reader's Guide to Chapter 3
67(1)
Evaluation Design: Creating the Structure
68(2)
Experimental Designs
70(11)
The Randomized Controlled Trial or RCT
70(5)
Parallel Controls
72(1)
Wait-List Controls
73(2)
Factorial Designs
75(2)
Randomizing and Blinding
77(4)
Random Assignment
77(1)
Random Clusters
78(1)
Improving on Chance: Stratifying and Blocking
79(1)
Blinding
80(1)
Nonrandomized Controlled Trials
81(2)
Parallel Controls
81(1)
The Problem of Incomparable Participants: Statistical Methods Like ANCOVA to the Rescue
82(1)
Observational Designs
83(4)
Cross-Sectional Designs
83(2)
Cohort Designs
85(1)
Case Control Designs
85(1)
A Note on Pretest-Posttest Only or Self-Controlled Designs
86(1)
Comparative Effectiveness Research and Evaluation
87(3)
Commonly Used Evaluation Designs
90(1)
Internal and External Validity
91(3)
Internal Validity Is Threatened
92(1)
External Validity Is Threatened
93(1)
Summary and Transition to the Next Chapter on Sampling
94(1)
Exercises
94(2)
References and Suggested Readings
96(5)
4 Sampling 101(18)
A Reader's Guide to Chapter 4
101(1)
What Is a Sample?
101(1)
Why Sample?
102(1)
Inclusion and Exclusion Criteria or Eligibility
103(1)
Sampling Methods
104(6)
Simple Random Sampling
105(1)
Random Selection and Random Assignment
105(2)
Systematic Sampling
107(1)
Stratified Sampling
107(2)
Cluster Sampling
109(1)
Nonprobability or Convenience Sampling
110(1)
The Sampling Unit
110(1)
Sample Size
111(1)
Power Analysis and Alpha and Beta Errors
111(3)
The Sampling Report
114(1)
Summary and Transition to the Next Chapter on Collecting Information
115(1)
Exercises
116(1)
References and Suggested Readings
117(1)
Suggested Websites
117(2)
5 Collecting Information: The Right Data Sources 119(28)
A Reader's Guide to Chapter 5
119(1)
Information Sources: What's the Question?
119(2)
Choosing Appropriate Data Sources
121(2)
Data Sources or Measures in Program Evaluation and Their Advantages and Disadvantages
123(12)
Self-Administered Surveys
123(2)
Achievement Tests
125(1)
Record Reviews
125(3)
Observations
128(1)
Interviews
129(1)
Computer-Assisted Interviews
130(1)
Physical Examinations
131(1)
Large Databases
131(2)
Vignettes
133(1)
The Literature
134(1)
Guidelines for Reviewing the Literature
135(7)
Assemble the Literature
136(1)
Identify Inclusion and Exclusion Criteria
136(1)
Select the Relevant Literature
137(1)
Identify the Best Available Literature
138(1)
Abstract the Information
139(1)
Consider the Non-Peer-Reviewed Literature
139(3)
Summary and Transition to the Next Chapter on Evaluation Measures
142(1)
Exercises
142(2)
References and Suggested Readings
144(3)
6 Evaluation Measures 147(18)
A Reader's Guide to Chapter 6
147(1)
Reliability and Validity
148(3)
Reliability
148(2)
Validity
150(1)
A Note on Language: Data Collection Terms
151(1)
Checklist for Creating a New Measure
152(4)
Checklist for Selecting an Already Existing Measure
156(2)
The Measurement Chart: Logical Connections
158(3)
Summary and Transition to the Next Chapter on Managing Evaluation Data
161(1)
Exercises
162(1)
References and Suggested Readings
163(2)
7 Managing Evaluation Data 165(22)
A Reader's Guide to Chapter 7
165(1)
Managing Evaluation Data: The Road to Data Analysis
165(2)
Drafting an Analysis Plan
167(2)
Creating a Codebook or Data Dictionary
169(5)
Establishing Reliable Coding
172(1)
Measuring Agreement: The Kappa
172(2)
Entering the Data
174(1)
Searching for Missing Data
175(5)
What to Do When Participants Omit Information
178(2)
Cleaning the Data
180(1)
Outliers
180(1)
When Data Are in Need of Recoding
181(1)
Creating the Final Database for Analysis
181(2)
Storing and Archiving the Database
183(1)
Summary and Transition to the Next Chapter on Data Analysis
183(1)
Exercises
183(2)
References and Suggested Readings
185(2)
8 Analyzing Evaluation Data 187(30)
A Reader's Guide to Chapter 8
187(1)
A Suitable Analysis: Starting With the Evaluation Questions
188(1)
Measurement Scales and Their Data
188(3)
Categorical Data
188(1)
Ordinal Data
189(1)
Numerical Data
190(1)
Selecting a Method of Analysis
191(3)
Hypothesis Testing and p Values: Statistical Significance
194(3)
Guidelines for Hypothesis Testing, Statistical Significance, and p Values
195(2)
Clinical or Practical Significance: Using Confidence Intervals
197(3)
Establishing Clinical or Practical Significance
198(2)
Risks and Odds
200(4)
Odds Ratios and Relative Risk
201(3)
Qualitative Evaluation Data: Content Analysis
204(6)
Assembling the Data
205(1)
Learning the Contents of the Data
206(1)
Creating a Codebook or Data Dictionary
207(2)
Entering and Cleaning the Data
209(1)
Doing the Analysis
209(1)
Meta-Analysis
210(2)
Summary and Transition to the Next Chapter on Evaluation Reports
212(1)
Exercises
212(2)
References and Suggested Readings
214(3)
9 Evaluation Reports 217(45)
A Reader's Guide to Chapter 9
217(1)
The Written Evaluation Report
218(14)
Composition of the Report
220(8)
Introduction
220(1)
Methods
221(1)
Results
222(5)
Conclusions or Discussion
227(1)
Recommendations
228(1)
The Abstract
228(2)
The Executive Summary
230(2)
Reviewing the Report for Quality and Ethics
232(1)
Oral Presentations
232(9)
Recommendations for Slide Presentations
235(6)
Posters
241(1)
Ethical Evaluations
242(1)
Evaluations That Need Institutional Review or Ethics Board Approval
242(1)
Evaluations That Are Exempt From IRB Approval
243(6)
What the IRB Will Review
244(1)
Informed Consent
245(4)
The Internet and Ethical Evaluations
249(2)
Communication Between the Evaluator and the Participant
249(1)
Communication Between the Participant and the Website
249(1)
Communication Between the Website and the Evaluator
250(1)
Data Protection
250(1)
Sample Questionnaire: Maintaining Ethically Sound Online Data Collection
251(2)
Example: Consent Form for an Online Survey
253(1)
Research Misconduct
254(1)
Exercises
255(6)
Suggested Websites
261(1)
Answers to Exercises 262(11)
Index 273
Arlene Fink (PhD) is Professor of Medicine and Public Health at the University of California, Los Angeles, and president of the Langley Research Institute. Her main interests include evaluation and survey research and the conduct of research literature reviews as well as the evaluation of their quality. Dr. Fink has conducted scores of evaluation studies in public health, medicine, and education. She is on the faculty of UCLA's Robert Wood Johnson Clinical Scholars Program and is a scientific and evaluation advisor to UCLA's Gambling Studies and IMPACT (Improving Access, Counseling & Treatment for Californians with Prostate Cancer) programs. She consults nationally and internationally for agencies such as l'Institut de Promotion de la Prévention Secondaire en Addictologie (IPPSA) in Paris, France, and Peninsula Health in Victoria, Australia. Professor Fink has taught and lectured extensively all over the world and is the author of more than 130 peer-reviewed articles and 15 textbooks.