Open Access. Powered by Scholars. Published by Universities.®

Education Commons

Item response theory

Articles 1 - 2 of 2

Full-Text Articles in Education

Examining The Performance Of The Metropolis-Hastings Robbins-Monro Algorithm In The Estimation Of Multilevel Multidimensional IRT Models, Bozhidar M. Bashkov May 2015

Dissertations, 2014-2019

The purpose of this study was to review the challenges that exist in the estimation of complex (multidimensional) models applied to complex (multilevel) data and to examine the performance of the recently developed Metropolis-Hastings Robbins-Monro (MH-RM) algorithm (Cai, 2010a, 2010b), which was designed to overcome these challenges and is implemented in both commercial and open-source software programs. Unlike other methods, which rely either on high-dimensional numerical integration or on approximation of the entire multidimensional response surface, MH-RM makes use of Fisher’s Identity to employ stochastic imputation (i.e., data augmentation) via the Metropolis-Hastings sampler and then apply the stochastic approximation method of Robbins and Monro …
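To make the estimation recipe concrete, here is a minimal sketch of one MH-RM cycle for a plain unidimensional 2PL model rather than the multilevel multidimensional models examined in the dissertation. It assumes a standard normal prior for the latent trait, uses a random-walk Metropolis-Hastings step for the imputation, and replaces the algorithm's recursive information-matrix approximation with simple 1/k Robbins-Monro gains; all function names (p_correct, mh_impute, mh_rm) are hypothetical, and the code is an illustration, not the dissertation's or any package's implementation.

import numpy as np

rng = np.random.default_rng(0)

def p_correct(theta, a, b):
    # 2PL response probability in slope-intercept form: P(y=1) = logistic(a_j*theta_i + b_j)
    return 1.0 / (1.0 + np.exp(-(np.outer(theta, a) + b)))

def mh_impute(theta, y, a, b, step=1.0):
    # One random-walk Metropolis-Hastings sweep over examinees' latent traits,
    # targeting p(theta | y, a, b) under a standard normal prior.
    prop = theta + step * rng.standard_normal(theta.shape)
    def log_post(t):
        p = p_correct(t, a, b)
        return (y * np.log(p) + (1 - y) * np.log(1 - p)).sum(axis=1) - 0.5 * t ** 2
    accept = np.log(rng.uniform(size=theta.shape)) < log_post(prop) - log_post(theta)
    return np.where(accept, prop, theta)

def complete_data_gradient(theta, y, a, b):
    # Complete-data score for (a_j, b_j); by Fisher's Identity its conditional
    # expectation given the observed data equals the observed-data score.
    resid = y - p_correct(theta, a, b)            # n_persons x n_items
    return resid.T @ theta, resid.sum(axis=0)     # d/da_j and d/db_j

def mh_rm(y, n_cycles=500):
    n_persons, n_items = y.shape
    a, b = np.ones(n_items), np.zeros(n_items)
    theta = rng.standard_normal(n_persons)
    for k in range(1, n_cycles + 1):
        theta = mh_impute(theta, y, a, b)          # 1. stochastic imputation
        g_a, g_b = complete_data_gradient(theta, y, a, b)
        gain = 1.0 / k                             # 2. Robbins-Monro gain sequence
        a += gain * g_a / n_persons                # 3. gain-weighted update (the full
        b += gain * g_b / n_persons                #    algorithm also rescales by a
    return a, b                                    #    recursive information matrix)

The point of the decreasing gains is that the imputation noise in the complete-data gradients averages out across cycles, so the parameter sequence can settle near the maximum likelihood estimates without ever evaluating a high-dimensional integral.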


Extending An IRT Mixture Model To Detect Random Responders On Non-Cognitive Polytomously Scored Assessments, Mandalyn R. Swanson May 2015

Dissertations, 2014-2019

This study represents an attempt to distinguish two classes of examinees – random responders and valid responders – on non-cognitive assessments in low-stakes testing. Most of the existing literature on detecting random responders in low-stakes settings concerns dichotomously scored cognitive tests. However, evidence suggests that random responding also occurs on non-cognitive assessments, and, as with cognitive measures, the data derived from such measures are used to inform practice. Thus, a threat to test score validity exists if examinees’ response selections do not accurately reflect their underlying level on the construct being assessed. As with …
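As a rough sketch of how a two-class IRT mixture can flag random responders on a polytomously scored instrument, the following assumes (this is illustrative, not the model extended in the dissertation) that valid responders follow a graded response model with known item parameters, that random responders choose among the K categories uniformly, and that the latent trait is integrated out on a fixed normal quadrature grid; each examinee's posterior probability of belonging to the random class then follows from Bayes' rule. All names are hypothetical.

import numpy as np
from scipy.special import expit
from scipy.stats import norm

def grm_category_probs(theta, a, b):
    # Graded response model category probabilities for one item.
    # theta: (Q,) quadrature nodes; a: slope; b: (K-1,) ordered thresholds.
    cum = expit(a * (theta[:, None] - b[None, :]))            # P(Y >= k), k = 1..K-1
    cum = np.hstack([np.ones((len(theta), 1)), cum, np.zeros((len(theta), 1))])
    return cum[:, :-1] - cum[:, 1:]                           # (Q, K)

def posterior_random_responder(y, a, b, pi_random=0.10, n_quad=41):
    # Posterior probability that each examinee is a random responder, mixing a
    # GRM "valid" class (trait integrated out on a normal quadrature grid) with
    # a "random" class that picks each of the K categories uniformly.
    nodes = np.linspace(-4.0, 4.0, n_quad)
    weights = norm.pdf(nodes)
    weights /= weights.sum()
    n_persons, n_items = y.shape
    K = b.shape[1] + 1
    lik_valid = np.ones((n_persons, n_quad))
    for j in range(n_items):
        probs = grm_category_probs(nodes, a[j], b[j])         # (Q, K)
        lik_valid *= probs[:, y[:, j]].T                      # pick observed category
    lik_valid = lik_valid @ weights                           # marginalize the trait
    lik_random = (1.0 / K) ** n_items                         # uniform responding
    num = pi_random * lik_random
    return num / (num + (1.0 - pi_random) * lik_valid)

Given an integer response matrix y (categories coded 0 to K-1), slopes a, and a threshold matrix b, posterior_random_responder(y, a, b) returns one probability per examinee; values near 1 mark response patterns better explained by uniform guessing than by any location on the latent trait.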