
E-raamat: Handbook of Automated Scoring: Theory into Practice

Edited by Duanli Yan (Educational Testing Service, Princeton, New Jersey, USA), André A. Rupp (Educational Testing Service, Princeton, New Jersey, USA), and Peter W. Foltz (University of Colorado, Boulder, USA)
  • Format: EPUB+DRM
  • Price: €59.79*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM Restrictions

  • Copying (copy/paste):

    not permitted

  • Printing:

    not permitted

  • Usage:

    Digital Rights Management (DRM)
    The publisher has issued this e-book in encrypted form, which means you must install special software to read it. You also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (a free application designed specifically for reading e-books; not to be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

"Automated scoring engines […] require a careful balancing of the contributions of technology, NLP, psychometrics, artificial intelligence, and the learning sciences. The present handbook is evidence that the theories, methodologies, and underlying technology that surround automated scoring have reached maturity, and that there is a growing acceptance of these technologies among experts and the public."

From the Foreword by Alina von Davier, ACTNext Senior Vice President

Handbook of Automated Scoring: Theory into Practice

provides a scientifically grounded overview of the key research efforts required to move automated scoring systems into operational practice. It examines the field of automated scoring from the viewpoint of related scientific fields serving as its foundation, the latest developments of computational methodologies utilized in automated scoring, and several large-scale real-world applications of automated scoring for complex learning and assessment systems. The book is organized into three parts that cover (1) theoretical foundations, (2) operational methodologies, and (3) practical illustrations, each with a commentary. In addition, the handbook includes an introduction and synthesis chapter as well as a cross-chapter glossary.

Reviews

'The Handbook of Automated Scoring is an excellent resource for understanding the theoretical, methodological and practical components of automated scoring. It provides a good foundation for understanding the considerations behind how assessments are designed and detailed methodological information about how to best create these kinds of systems. Part 3, which contains different illustrations of how to best design these systems, is especially useful for students who are learning more about how these systems should work when implemented correctly.'

- Magdalen Beiting-Parrish and Jay Verkuilen, International Statistical Review, 2021

Foreword xi
Editors xv
List of Contributors xvii
1 The Past, Present, and Future of Automated Scoring
1(12)
Peter W. Foltz
Duanli Yan
Andre A. Rupp
Part I Theoretical Foundations
2 Cognitive Foundations of Automated Scoring
13(16)
Malcolm I. Bauer
Diego Zapata-Rivera
3 Assessment Design with Automated Scoring in Mind
29(20)
Kristen DiCerbo
Emily Lai
Matthew Ventura
4 Human Scoring with Automated Scoring in Mind
49(20)
Edward W. Wolfe
5 Natural Language Processing for Writing and Speaking
69(24)
Aoife Cahill
Keelan Evanini
6 Multimodal Analytics for Automated Assessment
93(20)
Sidney K. D'Mello
7 International Applications of Automated Scoring
113(20)
Mark D. Shermis
8 Public Perception and Communication around Automated Essay Scoring
133(18)
Scott W. Wood
9 An Evidentiary-Reasoning Perspective on Automated Scoring: Commentary on Part I
151(20)
Robert J. Mislevy
Part II Operational Methodologies
10 Operational Human Scoring at Scale
171(24)
Kathryn L. Ricker-Pedley
Susan Hines
Carolyn Connelly
11 System Architecture Design for Scoring and Delivery
195(22)
Sue Lottridge
Nick Hoefer
12 Design and Implementation for Automated Scoring Systems
217(24)
Christina Schneider
Michelle Boyer
13 Quality Control for Automated Scoring in Large-Scale Assessment
241(22)
Dan Shaw
Brad Bolender
Rick Meisner
14 A Seamless Integration of Human and Automated Scoring
263(20)
Kyle Habermehl
Aditya Nagarajan
Scott Dooley
15 Deep Learning Networks for Automated Scoring Applications
283(14)
Saad M. Khan
Yuchi Huang
16 Validation of Automated Scoring Systems
297(22)
Duanli Yan
Brent Bridgeman
17 Operational Considerations for Automated Scoring Systems: Commentary on Part II
319(10)
David M. Williamson
Part III Practical Illustrations
18 Expanding Automated Writing Evaluation
329(18)
Jill Burstein
Brian Riordan
Daniel McCaffrey
19 Automated Writing Process Analysis
347(18)
Paul Deane
Mo Zhang
20 Automated Scoring of Extended Spontaneous Speech
365(18)
Klaus Zechner
Anastassia Loukina
21 Conversation-Based Learning and Assessment Environments
383(20)
Arthur C. Graesser
Xiangen Hu
Vasile Rus
Zhiqiang Cai
22 Automated Scoring in Intelligent Tutoring Systems
403(20)
Robert J. Mislevy
Duanli Yan
Janice Gobert
Michael Sao Pedro
23 Scoring of Streaming Data in Game-Based Assessments
423(22)
Russell G. Almond
24 Automated Scoring in Medical Licensing
445(24)
Melissa J. Margolis
Brian E. Clauser
25 At the Birth of the Future: Commentary on Part III
469(6)
John T. Behrens
26 Theory into Practice: Reflections on the Handbook
475(14)
Andre A. Rupp
Peter W. Foltz
Duanli Yan
Glossary 489(12)
References 501(52)
Index 553
Duanli Yan is Director of Data Analysis and Computational Research in the Psychometrics, Statistics, and Data Sciences area at the Educational Testing Service (ETS), and Adjunct Professor at Fordham University and Rutgers University. She is a co-author of Bayesian Networks in Educational Assessment and Computerized Adaptive and Multistage Testing with R, editor for Practical Issues and Solutions for Computerized Multistage Testing, and co-editor for Computerized Multistage Testing: Theory and Applications. Her awards include the 2016 AERA Division D Significant Contribution to Educational Measurement and Research Methodology Award.

André A. Rupp is Research Director in the Psychometrics, Statistics, and Data Sciences area at the Educational Testing Service (ETS). He is co-author and co-editor of two award-winning interdisciplinary books titled Diagnostic Measurement: Theory, Methods, and Applications and The Handbook of Cognition and Assessment: Frameworks, Methodologies, and Applications. His synthesis- and framework-oriented research has appeared in a wide variety of prestigious peer-reviewed journals. He currently serves as the lead developer of the ITEMS professional development portal for NCME.

Peter W. Foltz is Vice President in Pearson's AI and Products Solutions Organization and Research Professor at the University of Colorado's Institute of Cognitive Science. His work covers machine learning and natural language processing for educational and clinical assessments, discourse processing, reading comprehension and writing skills, 21st-century skills learning, and large-scale data analytics. He has authored more than 150 journal articles, book chapters, and conference papers, as well as multiple patents.