Open Access. Powered by Scholars. Published by Universities.®

Education Commons

Articles 1 - 3 of 3

Full-Text Articles in Education

Extending An IRT Mixture Model To Detect Random Responders On Non-Cognitive Polytomously Scored Assessments, Mandalyn R. Swanson May 2015

Dissertations, 2014-2019

This study attempts to distinguish two classes of examinees, random responders and valid responders, on non-cognitive assessments in low-stakes testing. Most of the existing literature on detecting random responders in low-stakes settings concerns cognitive tests that are dichotomously scored. However, evidence suggests that random responding also occurs on non-cognitive assessments, and, as with cognitive measures, the data derived from such measures are used to inform practice. A threat to test score validity therefore exists if examinees’ response selections do not accurately reflect their underlying level on the construct being assessed. As with …
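The modeling idea, at its simplest, is a mixture of two response processes: valid responders answer in line with their standing on the latent trait (for example, via a graded response model), while random responders select categories essentially at random. The sketch below only illustrates that two-class data-generating idea, not the extended mixture model developed in the dissertation; the graded response model parameterization, the 5-point scale, the class sizes, and all parameter values are assumptions chosen for the example.

```python
# Illustrative sketch (not the dissertation's model): simulate polytomous
# responses from a two-class mixture of "valid" responders, who follow a
# graded response model (GRM), and "random" responders, who pick response
# categories uniformly at random. All parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_valid, n_random, n_items, n_cats = 400, 100, 10, 5

# Illustrative GRM item parameters: discriminations and ordered thresholds.
a = rng.uniform(0.8, 2.0, n_items)
b = np.sort(rng.normal(0.0, 1.0, (n_items, n_cats - 1)), axis=1)

def grm_response(theta, a_i, b_i, rng):
    """Draw one GRM response: P(X >= k) = logistic(a_i * (theta - b_ik))."""
    p_ge = 1.0 / (1.0 + np.exp(-a_i * (theta - b_i)))        # P(X >= 1..K-1)
    cat_probs = -np.diff(np.concatenate(([1.0], p_ge, [0.0])))  # P(X = 0..K-1)
    return rng.choice(len(cat_probs), p=cat_probs)

theta_valid = rng.normal(0.0, 1.0, n_valid)
valid = np.array([[grm_response(t, a[i], b[i], rng) for i in range(n_items)]
                  for t in theta_valid])
random_resp = rng.integers(0, n_cats, (n_random, n_items))    # uniform guessing

data = np.vstack([valid, random_resp])    # rows = examinees, columns = items
print(data.shape)                         # (500, 10)
```

A mixture IRT analysis of data generated this way would attempt to recover the two latent classes from the response patterns alone, without knowing which rows came from the random-responding process.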


Measuring The Outliers: An Introduction To Out-Of-Level Testing With High-Achieving Students, Karen Rambo-Hernandez, Russell Warne Feb 2015

Russell T Warne

Out-of-level testing is an underused strategy for addressing the needs of students who score in the extremes, and when used wisely, it can give educators a much more accurate picture of what students know. It has been shown to be an effective assessment strategy with high-achieving students, but it has not been shown to work well with low-achieving students. This article provides a brief history of out-of-level testing, along with guidelines for using it.


Improving IRT Parameter Estimates With Small Sample Sizes: Evaluating The Efficacy Of A New Data Augmentation Technique, Brett P. Foley Jul 2010

College of Education and Human Sciences: Dissertations, Theses, and Student Research

The three-parameter logistic (3PL) model is a flexible and widely used tool in assessment, but its usefulness is limited by the large sample sizes it requires for stable parameter estimation. This study introduces and evaluates, through a simulation study, the efficacy of a new sample size augmentation technique called Duplicate, Erase, and Replace (DupER) Augmentation. Data are augmented using several variations of DupER Augmentation (based on different imputation methodologies, deletion rates, and duplication rates), analyzed in BILOG-MG 3, and the results are compared to those obtained from analyzing the raw data. Additional manipulated variables include test length and sample size. Estimates are compared using seven different …
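For context, the 3PL item response function and a minimal sketch of the duplicate / erase / replace idea are shown below. The imputation rule used here (Bernoulli draws from each item's observed proportion correct), the deletion rate, and the number of duplicated copies are illustrative assumptions, not Foley's exact procedure.

```python
# Hedged sketch of a DupER-style augmentation step, plus the standard 3PL
# item response function for reference. The imputation rule and the rates
# below are assumptions chosen for illustration only.
import numpy as np

def p_3pl(theta, a, b, c):
    """3PL probability of a correct response: c + (1 - c) * logistic(a * (theta - b))."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def duper_augment(X, n_copies=2, erase_rate=0.2, rng=None):
    """Duplicate the 0/1 response matrix X, erase a fraction of entries in each
    copy, and replace them with Bernoulli draws from the item's observed p-value."""
    if rng is None:
        rng = np.random.default_rng()
    p_items = X.mean(axis=0)                    # observed proportion correct per item
    copies = []
    for _ in range(n_copies):
        copy = X.copy()
        erase = rng.random(copy.shape) < erase_rate
        replace = rng.random(copy.shape) < p_items   # Bernoulli(p_item) draws
        copy[erase] = replace[erase]
        copies.append(copy)
    return np.vstack([X] + copies)              # original plus augmented copies

# Tiny usage example with a simulated 0/1 response matrix.
rng = np.random.default_rng(1)
X = (rng.random((100, 20)) < 0.6).astype(int)
X_aug = duper_augment(X, n_copies=2, erase_rate=0.2, rng=rng)
print(X.shape, "->", X_aug.shape)               # (100, 20) -> (300, 20)
```

The augmented matrix would then be calibrated with 3PL software (BILOG-MG 3 in the study) and the resulting item parameter estimates compared against those from the raw data.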