
Implementation of Large-Scale Education Assessments [Hardback]

Edited by Petra Lietz, John C. Cresswell, Keith F. Rust and Raymond J. Adams
  • Format: Hardback, 488 pages, height x width x thickness: 231x155x25 mm, weight: 726 g
  • Series: Wiley Series in Survey Methodology
  • Publication date: 21-Apr-2017
  • Publisher: John Wiley & Sons Inc
  • ISBN-10: 1118336097
  • ISBN-13: 9781118336090
Presents a comprehensive treatment of issues related to the inception, design, implementation and reporting of large-scale education assessments.

In recent years many countries have decided to become involved in international educational assessments to allow them to ascertain the strengths and weaknesses of their student populations. Assessments such as the OECD's Programme for International Student Assessment (PISA), the IEA's Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS) have provided opportunities for comparison between students of different countries on a common international scale.

This book is designed to give researchers, policy makers and practitioners well-grounded knowledge of the design, implementation, analysis and reporting of international assessments. Readers will gain a more detailed insight into the scientific principles employed in such studies, allowing them to make better use of the results. The book will also give readers an understanding of the resources needed to undertake and improve the design of educational assessments in their own countries and regions.

Implementation of Large-Scale Education Assessments:
  • Brings together the editors' extensive experience in creating, designing, implementing, analysing and reporting results on a wide range of assessments.
  • Emphasizes methods for implementing international studies of student achievement and obtaining high-quality data from cognitive tests and contextual questionnaires.
  • Discusses the methods of sampling, weighting, and variance estimation that are commonly encountered in international large-scale assessments.
  • Provides direction and stimulus for improving global educational assessment and student learning.
  • Is written by experts in the field, with an international perspective.

Survey researchers, market researchers and practitioners engaged in comparative projects will all benefit from the unparalleled breadth of knowledge and experience in large-scale educational assessments gathered in this one volume.
Notes on Contributors xv
Foreword xvii
Acknowledgements xx
Abbreviations xxi
1 Implementation of Large-Scale Education Assessments 1
Petra Lietz, John C. Cresswell, Keith F. Rust, Raymond J. Adams
1.1 Introduction 1
1.2 International, Regional and National Assessment Programmes in Education 3
1.3 Purposes of LSAs in Education 4
1.3.1 Trend as a Specific Purpose of LSAs in Education 8
1.4 Key Areas for the Implementation of LSAs in Education 10
1.5 Summary and Outlook 16
Appendix 1.A 18
References 22
2 Test Design and Objectives 26
Dara Ramalingam
2.1 Introduction 26
2.2 PISA 27
2.2.1 Purpose and Guiding Principles 27
2.2.2 Target Population 27
2.2.3 Sampling Approach 28
2.2.4 Assessment Content 29
2.2.5 Test Design 29
2.2.6 Link Items 30
2.3 TIMSS 34
2.3.1 Purpose and Guiding Principles 34
2.3.2 Target Population 34
2.3.3 Sampling Approach 36
2.3.4 Assessment Content 36
2.3.5 Test Design 38
2.4 PIRLS and Pre-PIRLS 41
2.4.1 Assessment Content 41
2.4.2 Test Design 42
2.5 ASER 45
2.5.1 Purpose and Guiding Principles 45
2.5.2 Target Population 46
2.5.3 Sampling Approach 47
2.5.4 Assessment Content 48
2.5.5 Test Design 49
2.6 SACMEQ 52
2.6.1 Purpose and Guiding Principles 52
2.6.2 Target Population 53
2.6.3 Sampling Approach 53
2.6.4 Assessment Content 54
2.6.5 Test Design 55
2.7 Conclusion 56
References 58
3 Test Development 63
Juliette Mendelovits
3.1 Introduction 63
3.2 Developing an Assessment Framework: A Collaborative and Iterative Process 65
3.2.1 What Is an Assessment Framework? 66
3.2.2 Who Should Develop the Framework? 67
3.2.3 Framework Development as an Iterative Process 67
3.3 Generating and Collecting Test Material 68
3.3.1 How Should Assessment Material Be Generated? 69
3.3.2 Who Should Contribute the Material? 69
3.3.3 Processing Contributions of Assessment Material 71
3.4 Refinement of Test Material 72
3.4.1 Panelling of Test Material by Test Developers 73
3.4.2 Panelling Stimulus 73
3.4.3 Panelling Items 74
3.4.4 Cognitive Interviews and Pilot Studies 75
3.4.5 Preparations for Trial Testing 77
3.4.6 Analysis of Trial Test Data 78
3.5 Beyond Professional Test Development: External Qualitative Review of Test Material 81
3.5.1 Jurisdictional Representatives 81
3.5.2 Domain Experts 83
3.5.3 The Commissioning Body 84
3.5.4 Dealing with Diverse Views 84
3.6 Introducing Innovation 86
3.6.1 Case Study 1: The Introduction of Digital Reading in PISA 2009 87
3.6.2 Case Study 2: The Introduction of New Levels of Described Proficiency to PISA in 2009 and 2012 89
3.7 Conclusion 90
References 90
4 Design, Development and Implementation of Contextual Questionnaires in Large-Scale Assessments 92
Petra Lietz
4.1 Introduction 92
4.2 The Role of Questionnaires in LSAs 93
4.3 Steps in Questionnaire Design and Implementation 95
4.3.1 Management of Questionnaire Development Process and Input from Relevant Stakeholders 95
4.3.2 Clarification of Aims and Content Priorities 96
4.3.3 Development of Questionnaires 107
4.3.4 Permissions (Copyright/IP) Requests 109
4.3.5 Cognitive Interviews with Respondents from the Target Population 109
4.3.6 Cultural/Linguistic Adaptations to Questionnaires 111
4.3.7 Ethics Application to Approach Schools and Students 112
4.3.8 Field Trial Questionnaire Administration 112
4.3.9 Analyses of Field Trial Data to Finalise the Questionnaire 113
4.3.10 Selection of Material for the Final MS Questionnaire 114
4.3.11 MS Questionnaire Administration 114
4.3.12 Preparation of Questionnaire Data for Public Release 114
4.4 Questions and Response Options in LSAs 115
4.5 Alternative Item Formats 119
4.6 Computer-Based/Online Questionnaire Instruments 128
4.6.1 Field Trial of Computer-Based Questionnaires 129
4.6.2 Beta Testing 131
4.7 Conclusion and Future Perspectives 131
Acknowledgements 132
References 132
5 Sample Design, Weighting, and Calculation of Sampling Variance 137
Keith F. Rust, Sheila Krawchuk, Christian Monseur
5.1 Introduction 137
5.2 Target Population 138
5.2.1 Target Population and Data Collection Levels 138
5.2.2 Target Populations of Major Surveys in Education 139
5.2.3 Exclusion 143
5.3 Sample Design 144
5.3.1 Multistage Sample Design 144
5.3.2 Unequal Probabilities of Selection 145
5.3.3 Stratification and School Sample Size 146
5.3.4 School Nonresponse and Replacement Schools 147
5.4 Weighting 148
5.4.1 Reasons for Weighting 148
5.4.2 Components of the Final Student Weight 149
5.4.3 The School Base Weight 150
5.4.4 The School Base Weight Trimming Factor 151
5.4.5 The Within-School Base Weight 151
5.4.6 The School Nonresponse Adjustment 152
5.4.7 The Student Nonresponse Adjustment 152
5.4.8 Trimming the Student Weights 153
5.5 Sampling Adjudication Standards 153
5.5.1 Departures from Standards Arising from Implementation 155
5.6 Estimation of Sampling Variance 156
5.6.1 Introduction 156
5.6.2 Methods of Variance Estimation for Complex Samples 157
5.6.3 Replicated Variance Estimation Procedures for LSA Surveys 158
5.6.4 Computer Software for Variance Estimation 165
5.6.5 Concluding Remarks 165
References 166
6 Translation and Cultural Appropriateness of Survey Material in Large-Scale Assessments 168
Steve Dept, Andrea Ferrari, Beatrice Halleux
6.1 Introduction 168
6.2 Overview of Translation/Adaptation and Verification Approaches Used in Current Multilingual Comparative Surveys 169
6.2.1 The Seven Guiding Principles 170
6.2.2 Components from Current Localisation Designs 172
6.3 Step-by-Step Breakdown of a Sophisticated Localisation Design 174
6.3.1 Developing the Source Version(s) 174
6.3.2 Translation/Adaptation 182
6.3.3 Linguistic Quality Control: Verification and Final Check 182
6.4 Measuring the Benefits of a Good Localisation Design 184
6.4.1 A Work in Progress: Proxy Indicators of Translation/Adaptation Quality 186
6.4.2 The Focused MS Localisation Design 187
6.5 Checklist of Requirements for a Robust Localisation Design 190
References 191
7 Quality Assurance 193
John C. Cresswell
7.1 Introduction 193
7.2 The Development and Agreement of Standardised Implementation Procedures 194
7.3 The Production of Manuals which Reflect Agreed Procedures 196
7.4 The Recruitment and Training of Personnel in Administration and Organisation: Especially the Test Administrator and the School Coordinator 197
7.5 The Quality Monitoring Processes: Recruiting and Training Quality Monitors to Visit National Centres and Schools 198
7.5.1 National Quality Monitors 198
7.5.2 School-Level Quality Monitors 199
7.6 Other Quality Monitoring Procedures 201
7.6.1 Test Administration Session Reports 201
7.6.2 Assessment Review Procedures 202
7.6.3 Checking Print Quality (Optical Check) 202
7.6.4 Post-final Optical Check 202
7.6.5 Data Adjudication Processes 202
7.7 Conclusion 204
Reference 204
8 Processing Responses to Open-Ended Survey Questions 205
Ross Turner
8.1 Introduction 205
8.2 The Fundamental Objective 207
8.3 Contextual Factors: Survey Respondents and Items 207
8.4 Administration of the Coding Process 214
8.4.1 Design and Management of a Coding Process 215
8.4.2 Handling Survey Materials 218
8.4.3 Management of Data 218
8.4.4 Recruitment and Training of Coding Personnel 219
8.5 Quality Assurance and Control: Ensuring Consistent and Reliable Coding 221
8.5.1 Achieving and Monitoring Between-Coder Consistency 223
8.5.2 Monitoring Consistency across Different Coding Operations 225
8.6 Conclusion 229
References 229
9 Computer-Based Delivery of Cognitive Assessment and Questionnaires 231
Maurice Walker
9.1 Introduction 231
9.2 Why Implement Computer-Based Assessments? 232
9.2.1 Assessment Framework Coverage 233
9.2.2 Student Motivation 233
9.2.3 Control of Workflow 234
9.2.4 Resource Efficiency 237
9.3 Implementation of International Comparative Computer-Based Assessments 238
9.3.1 Internet Delivery 238
9.3.2 Portable Application 241
9.3.3 Live System 242
9.4 Assessment Architecture 244
9.4.1 Test-Taker Registration 244
9.4.2 Navigation Architecture 245
9.4.3 Assessment Interface 245
9.4.4 Aspect Ratio 247
9.4.5 Accessibility Issues 247
9.5 Item Design Issues 247
9.5.1 Look and Feel 248
9.5.2 Digital Literacy 248
9.5.3 Translation 249
9.6 State-of-the-Art and Emerging Technologies 250
9.7 Summary and Conclusion 250
References 251
10 Data Management Procedures 253
Falk Brese, Mark Cockle
10.1 Introduction 253
10.2 Historical Review: From Data Entry and Data Cleaning to Integration into the Entire Study Process 254
10.3 The Life Cycle of a LSA Study 255
10.4 Standards for Data Management 256
10.5 The Data Management Process 258
10.5.1 Collection of Sampling Frame Information and Sampling Frames 260
10.5.2 School Sample Selection 261
10.5.3 Software or Web-Based Solutions for Student Listing and Tracking 262
10.5.4 Software or Web-Based Solutions for Within-School Listing and Sampling Procedures 263
10.5.5 Adaptation and Documentation of Deviations from International Instruments 265
10.5.6 The Translation Verification Process 266
10.5.7 Data Collection from Respondents 267
10.6 Outlook 272
References 274
11 Test Implementation in the Field: The Case of PASEC 276
Oswald Koussihouede, Antoine Marivin, Vanessa Sy
11.1 Introduction 276
11.2 Test Implementation 278
11.2.1 Human Resources 278
11.2.2 Sample Size and Sampling 278
11.2.3 PASEC's Instruments 279
11.2.4 Cultural Adaptation and Linguistic Transposition of the Instruments 289
11.2.5 Preparation of Administrative Documents 289
11.2.6 Document Printing and Supplies Purchase 289
11.2.7 Recruitment of Test Administrators 290
11.2.8 Training, Preparation and Implementation 290
11.2.9 Test Administration 292
11.2.10 Supervision of the Field Work 294
11.2.11 Data Collection Report 294
11.3 Data Entry 294
11.4 Data Cleaning 295
11.5 Data Analysis 295
11.6 Governance and Financial Management of the Assessments 295
Acknowledgments 296
References 297
12 Test Implementation in the Field: The Experience of Chile in International Large-Scale Assessments 298
Ema Lagos Campos
12.1 Introduction 298
12.2 International Studies in Chile 302
12.2.1 Human Resources Required in the National Centre 302
12.2.2 Country Input into Instruments and Tests Development 304
12.2.3 Sampling 305
12.2.4 Preparation of Test Materials 307
12.2.5 Preparation and Adaptation of Administrative Documents (Manuals) 309
12.2.6 Preparation of Field Work 310
12.2.7 Actual Field Work 312
12.2.8 Coding Paper and Computer-Based Test 315
12.2.9 Data Entry Process 318
12.2.10 Report Writing 318
12.2.11 Dissemination 320
12.2.12 Final Words 320
Annex A 321
References 321
13 Why Large-Scale Assessments Use Scaling and Item Response Theory 323
Alla Berezner, Raymond J. Adams
13.1 Introduction 323
13.2 Item Response Theory 325
13.2.1 Logits and Scales 327
13.2.2 Choosing an IRT Model 328
13.3 Test Development and Construct Validation 329
13.4 Rotated Test Booklets 345
13.5 Comparability of Scales Across Settings and Over Time 347
13.6 Construction of Performance Indicators 349
13.7 Conclusion 354
References 354
14 Describing Learning Growth 357
Ross Turner, Raymond J. Adams
14.1 Background 357
14.2 Terminology: The Elements of a Learning Metric 358
14.3 Example of a Learning Metric 360
14.4 Issues for Consideration 360
14.4.1 Number of Descriptions or Number of Levels 360
14.4.2 Mapping Domain Content onto the Scale 362
14.4.3 Alternative Approaches to Mapping Content to the Metric 363
14.5 PISA Described Proficiency Scales 365
14.5.1 Stage 1: Identifying Scales and Possible Subscales 366
14.5.2 Stage 2: Assigning Items to Subscales 369
14.5.3 Stage 3: Skills Audit 370
14.5.4 Stage 4: Analysing Preliminary Trial Data 371
14.5.5 Stage 5: Describing the Dimension 374
14.5.6 Stage 6: Revising and Refining with Final Survey Data 374
14.6 Defining and Interpreting Proficiency Levels 374
14.7 Use of Learning Metrics 379
Acknowledgement 380
References 381
15 Scaling of Questionnaire Data in International Large-Scale Assessments 384
Wolfram Schulz
15.1 Introduction 384
15.2 Methodologies for Construct Validation and Scaling 386
15.3 Classical Item Analysis 387
15.4 Exploratory Factor Analysis 388
15.5 Confirmatory Factor Analysis 389
15.6 IRT Scaling 392
15.7 Described IRT Questionnaire Scales 396
15.8 Deriving Composite Measures of Socio-economic Status 399
15.9 Conclusion and Future Perspectives 404
References 405
16 Database Production for Large-Scale Educational Assessments 411
Eveline Gebhardt, Alla Berezner
16.1 Introduction 411
16.2 Data Collection 412
16.3 Cleaning, Recoding and Scaling 416
16.4 Database Construction 418
16.5 Assistance 421
References 423
17 Dissemination and Reporting 424
John C. Cresswell
17.1 Introduction 424
17.2 Frameworks 425
17.2.1 Assessment Frameworks 425
17.2.2 Questionnaire Frameworks 426
17.3 Sample Items 426
17.4 Questionnaires 427
17.5 Video 427
17.6 Regional and International Reports 428
17.7 National Reports 428
17.8 Thematic Reports 429
17.9 Summary Reports 429
17.10 Analytical Services and Support 430
17.11 Policy Papers 430
17.12 Web-Based Interactive Display 431
17.13 Capacity-Building Workshops 432
17.14 Manuals 432
17.15 Technical Reports 432
17.16 Conclusion 433
References 433
Index 436
Edited by Petra Lietz and John C. Cresswell, Australian Council for Educational Research (ACER), Australia

Keith F. Rust, Westat and the University of Maryland at College Park, USA

Raymond J. Adams, Australian Council for Educational Research (ACER), Australia