E-book: Assessment in Health Professions Education

Edited by Rachel Yudkowsky, Yoon Soo Park, and Steven M. Downing (University of Illinois, USA)
  • Length: 346 pages
  • Publication date: 26-Jul-2019
  • Publisher: Routledge
  • Language: English
  • ISBN-13: 9781000649970
  • Format: PDF+DRM
  • Price: 74.09 €*
  • * The price is final, i.e. no further discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You will also need to create an Adobe ID. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android).

    To read on a PC or Mac, install Adobe Digital Editions (this is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

Assessment in Health Professions Education, 2nd Edition, provides a comprehensive guide for educators in the health professions—medicine, dentistry, nursing, pharmacy and allied health fields. This second edition has been extensively revised and updated by leaders in the field. Part I of the book presents an introduction to assessment fundamentals and their theoretical underpinnings from the perspective of the health professions. Part II covers specific assessment methods, with a focus on validity, best practices, challenges, and practical guidelines for the effective implementation of successful assessment programs. Part III addresses special topics and recent innovative approaches, including narrative assessment, situational judgment tests, programmatic assessment, mastery learning settings, and the Key Features approach. This accessible text addresses the essential concepts for the health professions educator and provides the background needed to understand, interpret, develop, and effectively implement assessment methods.

Reviews

"Ensuring health professionals are competent is crucial. The second edition of this publication is a timely, authoritative reference work that belongs on the bookshelves of anyone involved in assessment."

--Trudie E. Roberts, President of the Association for Medical Education in Europe (AMEE) and Professor and Director of the Leeds Institute of Medical Education, University of Leeds, UK.

"This book serves as a foundation for assessment practice and research. The second edition is comprehensive and updates the previous edition with the most recent literature. I recommend that all of my colleagues and students in graduate programs have this revision on their bookshelves."

--John J. Norcini, President and Chief Executive Officer, Foundation for Advancement of International Medical Education and Research (FAIMER)

"Assessment in Health Professions Education is a treasure of a book. Comprehensive, accessible, and wonderfully well-written, it can be used as an introductory text and as a reference work. I find it remarkable how all authors manage to make rather complex aspects seem so easy and accessible. I thoroughly enjoyed reading it and am going to keep it at the most prominent place on my bookshelves."

--Lambert Schuwirth, Professor of Medical Education at Flinders University, Australia

"Assessment of/for/as learning is a complex and fundamental area in health professions education, and the second edition of Assessment in Health Professions Education emerges as a great resource in the field. We have used the first edition as the main reference in our Master in Health Sciences Education program Assessment course, and we can vouch for its usefulness as a scholarly resource for students, teachers and researchers."

--Melchor Sánchez, Professor of Medical Education and Coordinator of the Master in Health Sciences Education Program at the National Autonomous University of Mexico, Mexico.

Table of Contents

List of Figures
Foreword (Georges Bordage)
Preface

PART I FUNDAMENTALS OF ASSESSMENT

Chapter 1 Introduction to Assessment in the Health Professions (Rachel Yudkowsky, Yoon Soo Park, and Steven M. Downing)
    George Miller's Pyramid
    Four Major Assessment Methods
    Written Tests
    Oral Examinations
    Performance Tests
    Workplace-Based Assessment (Clinical Observational Methods)
    Narrative Assessments and Portfolios
    Some Basic Terms and Definitions
    Competency-Based Education
    Instruction and Assessment
    Assessment, Measurement, and Tests
    Types of Numbers
    Fidelity to the Criterion
    Formative and Summative Assessment
    Norm- and Criterion-Referenced Measurement
    High-Stakes and Low-Stakes Assessments
    Large-Scale and Local or Small-Scale Assessments
    Translational Science in Education
    Summary
    References

Chapter 2 Validity and Quality (Matthew Lineberry)
    The Principle of Purpose-Driven Assessment
    Understanding Your Purposes
    Tying Methods to Your Purposes
    Investigating Validity, Evaluating Quality
    Kane's Validity Framework: The Validity Argument
    Messick's Validity Framework: Sources of Evidence
    Evaluation Frameworks
    Taking a Prevention Focus: Understanding Threats to Validity
    Concluding Thoughts
    References

Chapter 3 Reliability (Yoon Soo Park)
    Introduction
    Theoretical Framework for Reliability
    An Analogy
    Classical Test Theory (CTT)
    Illustrative Example
    Implications
    Reliability Indices: Test-Based Methods
    1. Test-Retest Reliability
    2. Split-Half Reliability
    3. Internal-Consistency Reliability: Cronbach's Alpha
    Reliability Indices: Raters
    Conceptual Basis for Inter-Rater Reliability
    Illustrative Example: Inter-Rater Reliability
    Implications: Inter-Rater Reliability
    Standard Error of Measurement (SEM)
    Illustrative Example
    Implications
    How to Increase Reliability
    Projections in Reliability: Spearman-Brown Formula
    Illustrative Example
    Implications
    Composite Scores and Composite Score Reliability
    Illustrative Example
    Implications
    Summary
    Appendix Supplement: Composite Scores and Composite Score Reliability
    Objective
    What We Know
    I. Calculating the Composite Score Reliability
    II. Calculating the Composite Scores of the Three Exemplar Students
    References

Chapter 4 Generalizability Theory (Clarence D. Kreiter, Nikki L. Zaidi, and Yoon Soo Park)
    Introductory Comments
    Background and Overview
    The Hypothetical Measurement Problem: An Example
    Defining the G Study Model
    Obtaining G Study Results
    Interpreting G Study Results
    Conducting the D Study
    Interpreting the D Study
    G and D Study Model Variations
    Unbalanced Designs
    Multivariate Generalizability
    Additional Considerations
    Final Considerations
    Appendix 4.1 Statistical Foundations of a Generalizability Study
    Appendix 4.2 Statistical Foundations of a Decision Study
    Appendix 4.3 Software Syntax for Estimating Variance Components From Table 4.1 Data
    SPSS Syntax for Estimating Variance Components From Table 4.1 Data
    SAS Syntax for Estimating Variance Components From Table 4.1 Data
    GENOVA Syntax for Estimating Variance Components From Table 4.1 Data
    References

Chapter 5 Statistics of Testing (Steven M. Downing, Dorthea Juul, and Yoon Soo Park)
    Introduction
    Using Test Scores
    Basic Score Types
    Number-Correct Scores or Raw Scores
    Percent-Correct Scores
    Derived Scores or Standard Scores
    Normalized Standard Scores
    Percentiles
    Corrections for Guessing (Formula Scores)
    Equated Scores
    Composite Scores
    Correlation and Disattenuated Correlation
    Item Analysis
    Item Analysis Report for Each Test Item
    Item Difficulty
    Item Discrimination
    Discrimination Indices
    Point Biserial Correlation as Discrimination Index
    What Is Good Item Discrimination?
    General Recommendations for Item Difficulty and Item Discrimination
    Item Options
    Number of Examinees Needed for Item Analysis
    Summary Statistics for a Test
    Useful Formulas
    Item Response Theory
    Summary
    Appendix: Some Useful Formulas With Example Calculations
    Kuder-Richardson Formula 21 Reliability Estimate
    Standard Error of Measurement (SEM)
    Spearman-Brown Prophecy Formula
    Disattenuated Correlation: Correction for Attenuation
    References

Chapter 6 Standard Setting (Rachel Yudkowsky, Steven M. Downing, and Ara Tekian)
    Introduction
    Eight Steps for Standard Setting
    Step 1: Select a Standard-Setting Method
    Step 2: Select Judges
    Step 3: Prepare Descriptions of Performance Categories
    Step 4: Train Judges
    Step 5: Collect Ratings or Judgments
    Step 6: Provide Feedback and Facilitate Discussion
    Step 7: Evaluate the Standard-Setting Procedure
    Step 8: Provide Results, Consequences, and Validity Evidence to Final Decision Makers
    Special Topics in Standard Setting
    Combining Standards Across Components of an Examination: Compensatory vs. Non-Compensatory Standards
    Setting Standards for Performance Tests
    Setting Standards for Clinical Procedures
    Setting Standards for Oral Exams, Essays, and Portfolios
    Setting Standards in Mastery Learning Settings
    Multiple Category Cut Scores
    Setting Standards Across Institutions
    Seven Methods for Setting Performance Standards
    The Angoff Method
    The Ebel Method
    The Hofstee Method
    Borderline Group Method
    Contrasting Groups Method
    Body of Work Method
    Patient Safety Method
    Conclusion
    Acknowledgements
    References

PART II ASSESSMENT METHODS

Chapter 7 Written Tests: Writing High-Quality Constructed-Response and Selected-Response Items (Miguel Paniagua, Kimberly A. Swygert, and Steven M. Downing)
    Introduction
    Assessment Using Written Tests
    CR and SR Item Formats: Definition
    Constructed-Response Items: Short-Answer vs. Long-Answer Formats
    Constructed-Response Items: Scoring
    Scoring Rubrics
    Human Raters of CR Tasks
    Computer-Based Ratings of CR Tasks
    Constructed-Response Items: Threats to Score Validity
    Selected-Response Items
    SR Item Formats: General Guidelines for Writing MCQs
    SR Item Formats: Avoiding Known MCQ Flaws
    SR Item Formats: Number of MCQ Options
    SR Item Formats: MCQ Scoring Methods
    SR Item Formats: Non-MCQ Formats
    Summary and Conclusion
    References

Chapter 8 Oral Examinations (Dorthea Juul, Rachel Yudkowsky, and Ara Tekian)
    Oral Examinations Around the World
    Threats to the Validity of Oral Examinations
    Structured Oral Examinations
    Scoring and Standard Setting
    Preparation of the Examinee
    Selection, Training, and Evaluation of the Examiners
    Quality Assurance
    After the Examination
    Cost
    Summary
    References

Chapter 9 Performance Tests (Rachel Yudkowsky)
    Strengths of Performance Tests
    Defining the Purpose of the Test
    Standardized Patients
    Scoring the Performance
    Training Raters
    Pilot Testing the Case
    Multiple-Station Performance Tests: The Objective Structured Clinical Exam (OSCE)
    Scoring an OSCE: Combining Scores Across Stations
    Standard Setting
    Logistics
    Threats to the Validity of Performance Tests
    Consequential Validity: Educational Impact
    Conclusion
    References

Chapter 10 Workplace-Based Assessment (Mary E. McBride, Mark D. Adler, and William C. McGaghie)
    Workplace-Based Assessment
    Competency-Based Medical Education
    Blueprinting
    Workplace-Based Assessment
    Assessment Tools
    The Rater and Learner Dyad
    Environment
    Assessment Administration
    Making Sense of Assessment Data
    Psychometric
    Socio-Cultural
    Summary and Look Ahead
    References

Chapter 11 Narrative Assessment (Nancy Dudek and David Cook)
    Narrative Assessment Instruments
    Strengths
    Pragmatic Issues in Using Narrative Assessment
    1. Define the Purpose of Narrative Assessment
    2. Document the Performance
    3. Train the Observers
    4. Manage the Collected Data
    5. Combine Narratives Across Observers (Qualitative Synthesis)
    6. Provide Feedback to Trainees
    Evaluating the Validity of Narrative Assessment
    Summary
    References

Chapter 12 Assessment Portfolios (Daniel J. Schumacher, Ara Tekian, and Rachel Yudkowsky)
    Portfolio Design
    Formative Purposes of Portfolios: Learner Driven, Mentor Supported
    The Importance of Mentors in Portfolio-Facilitated Lifelong Learning
    Summative Purposes of Portfolios: Learner Driven, Supervisor Evaluated
    Addressing Threats to Validity
    Potential Challenges of Portfolio Use
    Case Examples
    Summary
    References

PART III SPECIAL TOPICS

Chapter 13 Key Features Approach (Georges Bordage and Gordon Page)
    Selecting KF Problems
    Defining KFs
    Preparing KF Test Material
    Response Formats
    Scoring
    KF Test Scores
    Construct Underrepresentation
    Construct-Irrelevant Variance
    Acknowledgements
    References

Chapter 14 Simulations in Assessment (Luke A. Devine, William C. McGaghie, and Barry Issenberg)
    Introduction
    What Is Simulation and Why Use It?
    When to Use Simulation in Assessment
    How to Use Simulation in Assessment
    Determine Learning Outcome(s)
    Choose an Assessment Method
    Choose a Simulation Modality
    Develop Assessment Scenario
    Score the Assessment
    Set Standards for the Assessment
    Standardize Assessment Conditions
    Threats to Validity of Simulation-Based Assessments
    Faculty Development Needs
    Consequences and Educational Impact
    Conclusion
    References

Chapter 15 Situational Judgment Tests (Harold I. Reiter and Christopher Roberts)
    Prologue
    Exercise 15.1
    SJT Example 1
    SJT Example 2
    Exercise 15.2
    Case Study 1
    Case Study 2
    Case Study 3
    Recent History of SJT Development for Assessment in Health Professions Education
    A Need Is Recognized
    Two Worlds Collide
    Defining Key Components of SJTs and Desired Outcomes
    Converting Research Data Into Practice
    Key Components
    Designing a Situational Judgment Test
    Desirable Outcomes
    Phase I: How Toxic Is It? Potentiating Diversity, aka Construct Specificity
    Phase II: Does It Work? Identifying Better- and Lower-Performing Learners, aka Construct Sensitivity
    Phase III: Should I Use It? Real-World Considerations
    Answers to Exercise 15.2
    Case Study 1
    Case Study 2
    Case Study 3
    Epilogue
    References

Chapter 16 Programmatic Assessment: An Avenue to a Different Assessment Culture (Cees van der Vleuten, Sylvia Heeneman, and Suzanne Schut)
    The Traditional Approach to Assessment
    Programmatic Assessment
    1. Pass/Fail Decisions Are Not Based on a Single Data Point
    2. The Program Includes a Deliberate Mix of Different Assessment Methods
    3. Feedback Use and Self-Directed Learning Are Promoted Through a Continuous Dialogue With the Learner
    4. The Number of Data Points Needed Is Proportionally Related to the Stakes of the Assessment Decision
    5. High-Stakes Decisions Are Professional Judgments Made by a Committee of Assessors
    Evaluation of Programmatic Assessment
    Conclusion
    References

Chapter 17 Assessment Affecting Learning (Matthew Lineberry)
    Reconsidering Key Concepts and Terms
    Mechanisms of Action in Assessment for Learning
    Mechanism of Action #1: Course Development
    Mechanism of Action #2: Anticipation of an Assessment Event
    Mechanism of Action #3: The Assessment Event Itself
    Mechanism of Action #4: Post-Assessment Reflection and Improvement
    Identifying Performance Gaps
    Generating New Approaches
    Applying and Reinforcing New Approaches
    Summary and Next Steps
    Pop Quiz Answers
    References

Chapter 18 Assessment in Mastery Learning Settings (Matthew Lineberry, Rachel Yudkowsky, Yoon Soo Park, David Cook, E. Matthew Ritter, and Aaron Knox)
    Interpretations of and Uses for Mastery Learning Assessments
    Sources of Validity Evidence: Content
    Sources of Validity Evidence: Response Process
    Sources of Validity Evidence: Internal Structure and Reliability
    Sources of Validity Evidence: Relationships to Other Variables
    Sources of Validity Evidence: Consequences of Assessment Use
    Standard Setting in Mastery Settings
    Other Validity Evidence Regarding Consequences
    Summary and Conclusions
    Acknowledgements
    References

Chapter 19 Item Response Theory (Yoon Soo Park)
    Introduction
    Classical Measurement Theory: Challenges in Sample-Dependent Inference
    Comparison Between CMT and IRT
    Item Response Theory: An Overview
    IRT Model: Logistic Parameter Model
    Item Characteristic Curve
    Item Difficulty
    Item Discrimination
    Features of IRT
    Assumptions for Conducting IRT
    Rasch Model
    Application of IRT: An Example
    Other Applications of IRT
    Computer-Adaptive Testing
    Test Equating
    Summary
    References

Chapter 20 Engaging With Your Statistician (Alan Schwartz and Yoon Soo Park)
    Introduction
    Planning for Data Analysis
    Finding the Appropriate Statistician
    When and What to Present to Your Statistician
    Collaborating During Analyses
    Talking Through the Analysis Plan
    Creating Data Files
    Exploratory Work Before Planned Analyses
    Exploratory Work After Planned Analyses
    Writing the Manuscript and Beyond
    Summary
    References

List of Contributors
Index

About the Editors

Rachel Yudkowsky is Professor and Director of Graduate Studies in the Department of Medical Education in the College of Medicine at the University of Illinois at Chicago, USA, and past Director of the UIC Dr. Allan L. and Mary L. Graham Clinical Performance Center.

Yoon Soo Park is Associate Professor and Associate Head of the Department of Medical Education, and Director of Research for Educational Affairs in the College of Medicine at the University of Illinois at Chicago, USA.

Steven M. Downing is Associate Professor, Emeritus, in the Department of Medical Education in the College of Medicine at the University of Illinois at Chicago, USA.