
E-book: Practice of Health Program Evaluation

  • Format: EPUB+DRM
  • Publication date: 16-Sep-2015
  • Publisher: SAGE Publications Inc
  • Language: eng
  • ISBN-13: 9781483376387
  • Price: 85.22 €*
  • * The price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You will also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), you must install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, you must install Adobe Digital Editions (this is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

This guide outlines the practice of health program evaluation, describing how to develop evaluation questions, choose evaluation designs, analyze cost-effectiveness, design evaluations of program implementation, choose populations and sample members, address measurement and data collection issues, analyze data, and use evaluations in decision making. This edition places greater emphasis on program theory and causal inference, on the creation and application of conceptual models to evaluate health programs, and on conceptual models for program implementation and their relationships to program theory. It also has new information on the role of stakeholders in the evaluation process, ethical issues, conducting evaluations in a cultural context, study designs for impact evaluation of population-based interventions, the application of epidemiology and biostatistics, and mixed methods that combine quantitative and qualitative approaches. Annotation ©2015 Ringgold, Inc., Portland, OR (protoview.com)

The Practice of Health Program Evaluation, Second Edition provides students with the methods to evaluate health programs and the expertise to navigate the political terrain so as to work more effectively with decision makers and other groups. David Grembowski uses the metaphor of evaluation as a three-act play with a variety of actors and interest groups, each having a role that involves entering and exiting the "stage" at different points in the evaluation process. The first section (Act I) shows evaluators how to work with decision makers and other groups to define the question they want answered about a program and how to develop evaluation questions. Act II covers the methods for selecting among one or more evaluation designs to answer questions about the program. Act III covers the use of the answers, including methods for developing formal dissemination plans and factors that influence whether evaluation findings are used. Two major areas that receive more attention in the Second Edition are program theory and causal inference. The new edition also includes more on: the role of stakeholders; ethical issues; the cultural context; study designs; the application of epidemiology and biostatistics; mixed methods; and reporting guidelines. The author has also added "Rules of Thumb" boxes, offering guidance on the practice of evaluation.

 

Reviews

"I've used this textbook with public health graduate students for years, and both the students and I appreciate the practical, organized, accessible approach. . . . [The] Second Edition reflects important developments in the evaluation field . . . [and] is a book that students will want to keep on their bookshelves long after graduating, and will find themselves consulting over the course of their careers." -- Amanda S. Birnbaum

"This text identifies the key areas facing today's public health researchers, practitioners, and students." -- Leah Christina Neubauer

Praise for the Previous Edition

"A well-organized and readable text on evaluating health programs. It covers the essentials of choosing an evaluation design, planning and conducting the evaluation, and using the results of the evaluation." -- Ronald Andersen

"I found many instances where I thought students would get key concepts and ideas more quickly than they would from other texts. I was thrilled to see a discussion of ethics and culture in the text. The author provides clear explanations of important concepts and uses examples throughout the text." -- Robin Lin Miller

Contents

Acknowledgments xiii
About the Author xv
Preface xvii
Prologue xxi
1 Health Program Evaluation: Is It Worth It? 1(16)
Growth of Health Program Evaluation 4(4)
Types of Health Program Evaluation 8(7)
Evaluation of Health Programs 8(1)
Evaluation of Health Systems 9(6)
Summary 15(1)
List of Terms 15(1)
Study Questions 15(2)
2 The Evaluation Process as a Three-Act Play 17(28)
Evaluation as a Three-Act Play 19(14)
Act I Asking the Questions 19(8)
Act II Answering the Questions 27(4)
Act III Using the Answers in Decision Making 31(2)
Roles of the Evaluator 33(4)
Evaluation in a Cultural Context 37(3)
Ethical Issues 40(1)
Evaluation Standards 41(3)
Summary 44(1)
List of Terms 44(1)
Study Questions 44(1)
ACT I ASKING THE QUESTIONS 45(34)
3 Developing Evaluation Questions 47(32)
Step 1 Specify Program Theory 48(18)
Conceptual Models: Theory of Cause and Effect 51(4)
Conceptual Models: Theory of Implementation 55(11)
Step 2 Specify Program Objectives 66(3)
Step 3 Translate Program Theory and Objectives Into Evaluation Questions 69(4)
Step 4 Select Key Questions 73(4)
Age of the Program 73(1)
Budget 73(1)
Logistics 73(1)
Knowledge and Values 74(1)
Consensus 75(1)
Result Scenarios 75(1)
Funding 76(1)
Evaluation Theory and Practice 76(1)
Assessment of Fit 77(1)
Summary 78(1)
List of Terms 78(1)
Study Questions 78(1)
ACT II ANSWERING THE QUESTIONS 79(98)
Scene 1 Developing the Evaluation Design to Answer the Questions 79(2)
4 Evaluation of Program Impacts 81(54)
Quasi-Experimental Study Designs 85(24)
One-Group Posttest-Only Design 85(1)
One-Group Pretest-Posttest Design 86(2)
Posttest-Only Comparison Group Design 88(2)
Recurrent Institutional Cycle ("Patched-Up") Design 90(2)
Pretest-Posttest Nonequivalent Comparison Group Design 92(2)
Single Time-Series Design 94(3)
Repeated Treatment Design 97(1)
Multiple Time-Series Design 98(2)
Regression Discontinuity Design 100(5)
Summary: Quasi-Experimental Study Designs and Internal Validity 105(4)
Counterfactuals and Experimental Study Designs 109(10)
Counterfactuals and Causal Inference 109(4)
Pretest-Posttest Control Group Design 113(1)
Posttest-Only Control Group Design 114(1)
Solomon Four-Group Design 115(1)
Randomized Study Designs for Population-Based Interventions 116(1)
When to Randomize 117(1)
Closing Remarks 118(1)
Statistical Threats to Validity 119(3)
Generalizability of Impact Evaluation Results 122(8)
Construct Validity 122(5)
External Validity 127(2)
Closing Remarks 129(1)
Evaluation of Impact Designs and Meta-Analysis 130(2)
Summary 132(1)
List of Terms 133(1)
Study Questions 134(1)
5 Cost-Effectiveness Analysis 135(20)
Cost-Effectiveness Analysis: An Aid to Decision Making 136(2)
Comparing Program Costs and Effects: The Cost-Effectiveness Ratio 138(2)
Types of Cost-Effectiveness Analysis 140(4)
Cost-Effectiveness Studies of Health Programs 140(2)
Cost-Effectiveness Evaluations of Health Services 142(2)
Steps in Conducting a Cost-Effectiveness Analysis 144(7)
Steps 1-4 Organizing the CEA 144(2)
Step 5 Identifying, Measuring, and Valuing Costs 146(1)
Step 6 Identifying and Measuring Effectiveness 147(1)
Step 7 Discounting Future Costs and Effectiveness 148(2)
Step 8 Conducting a Sensitivity Analysis 150(1)
Step 9 Addressing Equity Issues 151(1)
Steps 10 and 11 Using CEA Results in Decision Making 151(1)
Evaluation of Program Effects and Costs 151(1)
Summary 152(1)
List of Terms 153(1)
Study Questions 153(2)
6 Evaluation of Program Implementation 155(22)
Types of Evaluation Designs for Answering Implementation Questions 157(8)
Quantitative and Qualitative Methods 158(2)
Multiple and Mixed Methods Designs 160(5)
Types of Implementation Questions and Designs for Answering Them 165(11)
Monitoring Program Implementation 165(2)
Explaining Program Outcomes 167(9)
Summary 176(1)
List of Terms 176(1)
Study Questions 176(1)
ACT II ANSWERING THE QUESTIONS 177(88)
Scenes 2 and 3 Developing the Methods to Carry Out the Design and Conducting the Evaluation 177(6)
7 Population and Sampling 183(30)
Step 1 Identify the Target Populations of the Evaluation 184(1)
Step 2 Identify the Eligible Members of Each Target Population 185(4)
Step 3 Decide Whether Probability or Nonprobability Sampling Is Necessary 189(2)
Step 4 Choose a Nonprobability Sampling Design for Answering an Evaluation Question 191(2)
Step 5 Choose a Probability Sampling Design for Answering an Evaluation Question 193(5)
Simple and Systematic Random Sampling 193(1)
Proportionate Stratified Sampling 194(1)
Disproportionate Stratified Sampling 195(1)
Post-Stratification Sampling 196(1)
Cluster Sampling 197(1)
Step 6 Determine Minimum Sample Size Requirements 198(12)
Sample Size in Qualitative Evaluations 198(1)
Types of Sample Size Calculations 199(1)
Sample Size Calculations for Descriptive Questions 200(2)
Sample Size Calculations for Comparative Questions 202(8)
Step 7 Select the Sample 210(1)
Summary 210(1)
List of Terms 211(1)
Study Questions 211(2)
8 Measurement and Data Collection 213(34)
Measurement and Data Collection in Quantitative Evaluations 215(21)
The Basics of Measurement and Classification 215(2)
Step 1 Decide What Concepts to Measure 217(1)
Step 2 Identify Measures of the Concepts 218(2)
Step 3 Assess the Reliability, Validity, and Responsiveness of the Measures 220(9)
Step 4 Identify and Assess the Data Source of Each Measure 229(5)
Step 5 Choose the Measures 234(1)
Step 6 Organize the Measures for Data Collection and Analysis 234(1)
Step 7 Collect the Measures 235(1)
Data Collection in Qualitative Evaluations 236(3)
Reliability and Validity in Qualitative Evaluations 239(2)
Management of Data Collection 241(1)
Summary 242(1)
Resources 242(2)
List of Terms 244(1)
Study Questions 245(2)
9 Data Analysis 247(18)
Getting Started: What's the Question? 248(2)
Qualitative Data Analysis 250(2)
Quantitative Data Analysis 252(11)
What Are the Variables for Answering Each Question? 252(2)
How Should the Variables Be Analyzed? 254(9)
Summary 263(1)
List of Terms 263(1)
Study Questions 264(1)
ACT III USING THE ANSWERS IN DECISION MAKING 265(54)
10 Disseminating the Answers to Evaluation Questions 267(22)
Scene 1 Translating Evaluation Answers Back Into Policy Language 268(5)
Translating the Answers 268(1)
Building Knowledge 269(1)
Developing Recommendations 270(3)
Scene 2 Developing a Dissemination Plan for Evaluation Answers 273(9)
Target Audience and Type of Information 275(1)
Format of Information 275(5)
Timing of the Information 280(1)
Setting 281(1)
Scene 3 Using the Answers in Decision Making and the Policy Cycle 282(1)
How Answers Are Used by Decision Makers 282(5)
Increasing the Use of Answers in the Evaluation Process 284(2)
Ethical Issues 286(1)
Summary 287(1)
List of Terms 287(1)
Study Questions 287(2)
11 Epilogue 289(30)
Compendium 291(4)
References 295(24)
Index 319
David Grembowski, Ph.D., M.A., is a professor in the Department of Health Services in the School of Public Health and the Department of Oral Health Sciences in the School of Dentistry, and adjunct professor in the Department of Sociology, at the University of Washington. He has taught health program evaluation to graduate students for more than twenty years. His evaluation interests are prevention, the performance of health programs and health care systems, survey research methods, and the social determinants of population health. His other work has examined efforts to improve quality by increasing access to care in integrated delivery systems; pharmacy outreach to provide statins preventively to patients with diabetes; managed care and physician referrals; managed care and patient-physician relationships and physician job satisfaction; cost-effectiveness of preventive services for older adults; cost-sharing and seeing out-of-network physicians; social gradients in oral health; local health department spending and racial/ethnic disparities in mortality rates; fluoridation effects on oral health and dental demand; financial incentives and dentist adoption of preventive technologies; effects of dental insurance on dental demand; and the link between mother and child access to dental care.