This book turns the page on standard setting, calling for a time of change. Expanding Cizek and Earnest's (2016) evaluation framework, it delivers a comprehensive mixed-methods investigation of Ferrara and Lewis's (2012) Item Descriptor Matching method, using a multi-phase design across face-to-face and virtual workshops. At its core lies the author's Unified Alignment & Test Development (UATD) framework, which embeds a distinctive quantitative principled cut score approach for calculating defensible, trustworthy thresholds grounded in theory. Using corpus linguistics and AI models to analyse panel discussions, the book shows how innovative methodologies enhance the validity and robustness of CEFR-linking studies. It also translates the panellists' bottom-up and top-down strategies for taming the CEFR into innovative activities for the familiarisation stage. The result is an expanded, transparent, and forward-looking practice that strengthens the validity, fairness, and impact of standard setting.
Contents: 1 Introduction - 2 Literature Review - 3 Background to the Study - 4 Methodology - 5 Procedural Validity - 6 Validating the Reading-into-Writing Workshops - 7 Validating the Reading Workshops - 8 Calculating Cut Scores in a Single-Level Examination - 9 Findings from Focus Group Interviews - 10 Discussion - 11 Synopsis of Study - 12 Contribution
Paraskevi (Voula) Kanistra is Director of English Language Assessment at Trinity College London. She has nearly thirty years of experience in language assessment, including work as an examiner trainer, test developer, and assessment lead, with a strong focus on test design, validation, and quality assurance across international contexts. Her professional expertise centres on CEFR alignment, standard setting, validation, and assessment innovation. She has served as Treasurer of EALTA and has acted as a reviewer for journals and conferences in the field.