Open Access. Powered by Scholars. Published by Universities.®

Educational Assessment, Evaluation, and Research Commons

Australian Council for Educational Research (ACER)

2021-2030 ACER Research Conferences

Measurement

Articles 1 - 4 of 4

Full-Text Articles in Educational Assessment, Evaluation, and Research

Rethinking Measurement For Accountable Assessment, Mark Wilson Aug 2021


Most formal educational measurement (e.g. standardised tests) rests on a very simple model: the student takes a test (possibly alongside other students). The complications of there being an instructional plan, actual instruction, interpretation of the outcome, and formulation of next steps are all bypassed in modelling the measurement process. There are some standard exceptions, of course: a pre-test/post-test context involves two measurements and attention to gain scores, or similar. However, if we wish to design measurement that holds to Lehrer’s (2021) definition of ‘accountable assessment’ – as ‘actionable information for …


Interpreting Learning Progress Using Assessment Scores: What Is There To Gain?, Nathan Zoanetti Aug 2021


Using assessment scores to quantify gains and growth trajectories for individuals and groups can provide a valuable lens on learning progress for all students. This paper summarises some commonly observed patterns of progress and illustrates them using data from ACER’s Progressive Achievement Test (PAT) assessments. While measuring growth trajectories requires scores for the same individuals on at least three (and preferably more) occasions, scores from only two occasions are naturally more readily available. The difference between two successive scores is usually referred to as gain. Some common approaches and pitfalls when interpreting individual student gain data are illustrated. It is …
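The gain described above is simply the difference between two successive scale scores for the same student. A minimal sketch of this calculation (the scores and student name below are illustrative, not actual PAT data):

```python
def gain(score_first: float, score_second: float) -> float:
    """Gain: the difference between two successive scale scores
    for the same individual (later score minus earlier score)."""
    return score_second - score_first

# Illustrative example: a student scores 110.0 on the first occasion
# and 123.5 on the second, giving a gain of 13.5 scale-score points.
print(gain(110.0, 123.5))  # 13.5
```

Note that a single gain figure like this is exactly the two-occasion case the paper cautions about: with only two scores, measurement error can dominate the observed difference.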


Accountable Assessment, Richard Lehrer Aug 2021


There is widespread agreement about the importance of accounting for the extent to which educational systems advance student learning. Yet, the forms and formats of accountable assessments often ill serve students and teachers; the summative judgements of student performance that are typically employed to indicate proficiencies on benchmarks of student learning commonly fail to capture student performance in ways that are specific and actionable for teachers. Timing is another key barrier to the utility of summative assessment. In the US, summative evaluations occur at the end of the school year and may serve future students, but do not help teachers …


Identifying And Monitoring Progress In Collaboration Skills, Claire Scoular Aug 2021


The nature of skills such as collaboration is complex, particularly because internal processes are at play. Inferences must be made to interpret the explicit behaviours observed in intentionally designed assessment tasks. This paper centres on an approach for developing hypotheses about skill development into validated learning progressions using assessment data. Understanding a skill from a growth perspective is essential for teaching and developing that skill effectively. The application of Item Response Theory (IRT) allows assessment data to be interpreted as levels of proficiency that can be used to map and monitor progress in collaboration skills.
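The abstract does not specify which IRT model underlies the learning progression, but the dichotomous Rasch model is a common starting point for this kind of proficiency scaling. A hedged sketch of its core formula (the ability and difficulty values below are hypothetical, not from the paper):

```python
import math

def rasch_probability(ability: float, difficulty: float) -> float:
    """Probability of a correct/observed response under the dichotomous
    Rasch model: P = exp(theta - b) / (1 + exp(theta - b)),
    where theta is person ability and b is item difficulty (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# When ability equals difficulty, the probability is 0.5;
# higher ability relative to difficulty raises the probability.
print(rasch_probability(0.0, 0.0))
print(rasch_probability(1.5, 0.0))
```

Placing persons and items on this shared logit scale is what allows assessment data to be read as levels of proficiency along a progression.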