Open Access. Powered by Scholars. Published by Universities.®

Quantitative Psychology Commons

Articles 1 - 10 of 10

Full-Text Articles in Quantitative Psychology

Are All Cognitive Items Equally Prone To Position Effects? Exploring The Relationships Among Item Features And Position Effects, Thai Quang Ong May 2019

Dissertations, 2014-2019

One type of context effect is a position effect, in which an item's parameters are influenced by the item's position on the test. Researchers often discuss two types of position effects: negative position effects and positive position effects (e.g., Albano, 2013; Debeer & Janssen, 2013). Items exhibiting negative position effects become harder when placed later on the test, whereas items exhibiting positive position effects become easier when placed later on the test. Researchers have primarily examined the underlying causes of position effects from an item or person perspective (e.g., Bulut, 2015; Kingston & Dorans, 1984; Qian, 2014). …
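
As a rough illustration of the position effects described above (not drawn from the dissertation itself), a linear position term can be added to a Rasch-type item response model so that an item's effective difficulty shifts with its position; the function, parameter names, and values below are hypothetical.

```python
import math

def p_correct(theta, difficulty, position, drift, ref_position=1):
    """Rasch-type probability of a correct response when an item's difficulty
    shifts linearly with its position on the test.

    drift > 0 makes the item harder later (a negative position effect);
    drift < 0 makes the item easier later (a positive position effect)."""
    effective_difficulty = difficulty + drift * (position - ref_position)
    return 1.0 / (1.0 + math.exp(-(theta - effective_difficulty)))

# Hypothetical item (difficulty 0.0) and examinee (ability 0.5) at three positions.
for pos in (1, 20, 40):
    harder = p_correct(0.5, 0.0, pos, drift=+0.01)   # negative position effect
    easier = p_correct(0.5, 0.0, pos, drift=-0.01)   # positive position effect
    print(f"position {pos:2d}: P(correct) = {harder:.3f} (harder later), {easier:.3f} (easier later)")
```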


The Influence Of Covariate Measurement Error On Treatment Effect Estimates And Numeric Balance Diagnostics Following Several Common Methods Of Propensity Score Matching: A Simulation Study, Heather D. Harris May 2018

Dissertations, 2014-2019

In applied intervention studies, researchers frequently aim to make inferences about the impact of a treatment program on participants. However, applied researchers are often faced with threats to the internal validity of their studies, or the extent to which changes in participants’ outcomes can be attributed to the intervention. When researchers are unable to randomly assign study participants to treatment conditions, changes in the intervention outcome might be confounded with systematic differences in participants’ baseline characteristics. Propensity score matching is one technique that allows researchers to account for threats to the internal validity of a study. Specifically, using propensity score …
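
A minimal sketch of one common propensity score matching workflow (logistic-regression propensity scores, 1:1 nearest-neighbor matching, then a standardized-mean-difference balance check) is shown below; the simulated data, variable names, and matching rule are illustrative assumptions, not the simulation conditions examined in the dissertation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: two baseline covariates and a non-randomized treatment indicator.
n = 500
X = rng.normal(size=(n, 2))
treat = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1]))))

# Step 1: estimate propensity scores from the observed covariates.
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]

# Step 2: 1:1 nearest-neighbor matching on the propensity score, without replacement.
treated_idx = np.where(treat == 1)[0]
control_pool = list(np.where(treat == 0)[0])
pairs = []
for t in treated_idx:
    if not control_pool:
        break
    match = min(control_pool, key=lambda c: abs(ps[t] - ps[c]))
    pairs.append((t, match))
    control_pool.remove(match)

# Step 3: a numeric balance diagnostic, the standardized mean difference per covariate.
matched_t = np.array([t for t, _ in pairs])
matched_c = np.array([c for _, c in pairs])
for k in range(X.shape[1]):
    diff = X[matched_t, k].mean() - X[matched_c, k].mean()
    pooled_sd = np.sqrt((X[matched_t, k].var(ddof=1) + X[matched_c, k].var(ddof=1)) / 2)
    print(f"covariate {k}: standardized mean difference after matching = {diff / pooled_sd:.3f}")
```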


In Search Of Equality: Developing An Equal Interval Likert Response Scale, Elisabeth M. Spratto May 2018

Dissertations, 2014-2019

Attitude scales are an important component of educational and psychological research. One consideration when seeking to make valid inferences from attitudinal data is the degree to which response options can be assumed to have equal intervals. Many response options on attitudinal measures may produce ordinal-level rather than interval-level data. This poses a problem for the statistical tests that may be used, as many analyses assume interval-level data. It also poses an interpretational issue if the conceptual distance between response options is not the same – for example, if a researcher believes that someone who answered Agree differs …
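
The interval-versus-ordinal concern can be shown numerically. The sketch below (with invented spacing values, not results from the dissertation) compares a scale mean computed under the usual equal-interval scoring with one computed under a hypothetical unequal spacing of the same response options.

```python
import numpy as np

# Hypothetical responses on a 5-option agreement scale (1 = Strongly Disagree ... 5 = Strongly Agree).
responses = np.array([2, 4, 4, 5, 3, 4, 2, 5, 4, 3])

# Conventional scoring assumes the options are equally spaced.
equal_interval_scores = responses.astype(float)

# Suppose scaling work suggested unequal perceived spacing, e.g. a wider gap between
# "Agree" and "Strongly Agree" (these values are invented for illustration only).
unequal_spacing = {1: 1.0, 2: 2.1, 3: 2.9, 4: 3.6, 5: 5.4}
rescaled_scores = np.array([unequal_spacing[r] for r in responses])

print("mean assuming equal intervals:", equal_interval_scores.mean())
print("mean under unequal spacing   :", round(rescaled_scores.mean(), 2))
```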


Using Multiple Imputation To Mitigate The Effects Of Low Examinee Motivation On Estimates Of Student Learning, Kelly J. Foelber May 2017

Dissertations, 2014-2019

In higher education, we often collect data to make inferences about student learning and, ultimately, to make evidence-based changes intended to improve it. The validity of the inferences we make, however, depends on the quality of the data we collect. Low examinee motivation compromises these inferences; research suggests that low examinee motivation can lead to inaccurate estimates of examinees’ ability (e.g., Wise & DeMars, 2005). To obtain data that better represent what students know, think, and can do, practitioners must consider, and attempt to negate the effects of, low examinee motivation. The primary purpose …
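
The general strategy named in the title can be sketched as follows: responses flagged as non-effortful are treated as missing, several plausible completed data sets are generated, and the estimates are pooled. The flagging rule and the crude per-item imputation draw below are deliberately simplified assumptions, not the dissertation's procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical item scores (rows = examinees, columns = items) and a motivation flag.
scores = rng.binomial(1, 0.7, size=(200, 20)).astype(float)
unmotivated = rng.random(size=scores.shape) < 0.10   # pretend 10% of responses were non-effortful

# Step 1: treat flagged responses as missing rather than as valid scores.
scores[unmotivated] = np.nan

# Step 2: create several imputed data sets (here, a crude hot-deck style draw per item).
n_imputations = 5
estimates = []
for _ in range(n_imputations):
    imputed = scores.copy()
    for j in range(scores.shape[1]):
        observed = scores[~np.isnan(scores[:, j]), j]
        missing = np.isnan(imputed[:, j])
        imputed[missing, j] = rng.choice(observed, size=missing.sum(), replace=True)
    estimates.append(imputed.mean())   # estimate of interest: overall proportion correct

# Step 3: pool the point estimates across imputations.
print("pooled estimate of proportion correct:", round(float(np.mean(estimates)), 3))
```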


You Only Live Up To The Standards You Set: An Evaluation Of Different Approaches To Standard Setting, Scott N. Strickman May 2017

Dissertations, 2014-2019

Interpretation of performance in reference to a standard can provide nuanced, finely-tuned information about examinee abilities beyond what a total score alone conveys. However, there are many ways to set performance standards, yet little guidance exists regarding which method operates best and under what circumstances. Traditional methods are the most common approach adopted in practice and heavily involve subject matter experts (SMEs). Two other approaches have been suggested in the literature as alternative ways to set performance standards, although they have yet to be implemented in practice. Data-driven approaches do not involve SMEs but rather rely solely upon statistical …
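
As one hypothetical example of a data-driven approach (not necessarily one of the methods evaluated in the dissertation), a two-cluster partition of examinee total scores can yield a provisional cut score without SME judgments; the simulated scores below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical total scores drawn from two latent proficiency groups.
scores = np.concatenate([rng.normal(45, 6, 300), rng.normal(65, 6, 200)])

# A simple two-cluster (k-means style) partition of the score scale.
centers = np.array([scores.min(), scores.max()], dtype=float)
for _ in range(50):
    labels = np.abs(scores[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([scores[labels == k].mean() for k in (0, 1)])

# Provisional cut score: the midpoint between the two cluster centers.
cut_score = centers.mean()
print("cluster centers:", np.round(np.sort(centers), 1))
print("data-driven provisional cut score:", round(float(cut_score), 1))
```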


Applying Solution Behavior Thresholds To A Noncognitive Measure To Identify Rapid Responders: An Empirical Investigation, Mary M. Johnston May 2016

Dissertations, 2014-2019

Noncognitive measures are increasingly being used for accountability purposes in higher education (e.g., O. L. Liu, Frankel, & Roohr, 2014). Because these measures are often collected under low-stakes conditions, there is a concern that students do not put forth their best effort when responding, which is problematic given that previous research has found noneffortful responding can negatively impact the validity of results (e.g., Barry & Finney, 2009; Meade & Craig, 2012; Swerdzewski, Harmes, & Finney, 2011). Consequently, there is a need to identify students displaying low effort on low-stakes noncognitive measures. One method, which is based on response time and can discreetly …
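
A minimal sketch of a response-time-based flagging rule is given below: item responses faster than an item-level threshold are treated as rapid (non-solution) behavior. The particular threshold rule (10% of each item's median time) and the simulated times are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical response times in seconds (rows = respondents, columns = items).
times = rng.gamma(shape=4.0, scale=2.0, size=(100, 10))
times[:5] *= 0.05   # pretend the first five respondents rushed through the measure

# One possible threshold rule: 10% of each item's median response time.
thresholds = 0.10 * np.median(times, axis=0)

# Flag responses given faster than the item's threshold as rapid (non-effortful) responses.
rapid = times < thresholds
rapid_rate = rapid.mean(axis=1)

print("respondents flagged on at least one item:", int((rapid_rate > 0).sum()))
print("highest individual rapid-response rate  :", round(float(rapid_rate.max()), 2))
```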


The Effects Of A Planned Missingness Design On Examinee Motivation And Psychometric Quality, Matthew S. Swain May 2015

Dissertations, 2014-2019

Assessment practitioners in higher education face increasing demands to collect assessment and accountability data to make important inferences about student learning and institutional quality. The validity of these high-stakes decisions is jeopardized, particularly in low-stakes testing contexts, when examinees do not expend sufficient effort to perform well on the test. This study introduced planned missingness as a potential solution. In planned missingness designs, data on all items are collected across the sample, but each examinee completes only a subset of items, thus increasing data collection efficiency, reducing examinee burden, and potentially increasing data quality. The current scientific reasoning test served as the Long …
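
The design can be pictured with a standard three-form layout: items are split into a common block plus three rotating blocks, each form omits one block, and the unadministered items are missing by design rather than by examinee choice. The block sizes, form assignment, and response probabilities below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

n_items = 12
items = np.arange(n_items)

# Common block X plus three rotating blocks A, B, C (a classic three-form design).
common, block_a, block_b, block_c = items[:3], items[3:6], items[6:9], items[9:12]
forms = {
    1: np.concatenate([common, block_a, block_b]),   # form 1 omits block C
    2: np.concatenate([common, block_a, block_c]),   # form 2 omits block B
    3: np.concatenate([common, block_b, block_c]),   # form 3 omits block A
}

# Randomly assign examinees to forms; unadministered items stay missing by design.
n_examinees = 9
assignment = rng.integers(1, 4, size=n_examinees)
data = np.full((n_examinees, n_items), np.nan)
for person, form in enumerate(assignment):
    administered = forms[form]
    data[person, administered] = rng.binomial(1, 0.6, size=administered.size)

print(data)   # NaN entries are planned-missing, not examinee omissions
```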


Addressing Serial-Order And Negative-Keying Effects: A Mixed-Methods Study, Jerusha J. Gerstner May 2015

Dissertations, 2014-2019

Researchers have studied item serial-order effects on attitudinal instruments by considering how item-total correlations differ based on the item’s placement within a scale (e.g., Hamilton & Shuminsky, 1990). In addition, other researchers have focused on item negative-keying effects on attitudinal instruments (e.g., Marsh, 1996). Researchers have consistently found that negatively-keyed items relate to one another above and beyond their relationship to the construct intended to be measured. However, only one study (i.e., Bandalos & Coleman, 2012) investigated the combined effects of serial-order and negative-keying on attitudinal instruments. Their brief study found some improvements in fit when attitudinal items were presented …
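
The quantities such comparisons rest on can be computed directly: corrected item-total correlations, with negatively keyed items reverse-scored before scoring. The simulated scale below is a hypothetical illustration and does not reproduce the instruments or placements studied in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(5)

def corrected_item_total(data, item):
    """Correlation of one item with the total of the remaining items."""
    rest = np.delete(data, item, axis=1).sum(axis=1)
    return np.corrcoef(data[:, item], rest)[0, 1]

# Hypothetical 6-item attitudinal scale: five positively keyed items, one negatively keyed item.
n = 300
trait = rng.normal(size=n)
pos_items = [np.clip(np.round(3 + trait + rng.normal(scale=1.0, size=n)), 1, 5) for _ in range(5)]
neg_item = np.clip(np.round(3 - trait + rng.normal(scale=1.0, size=n)), 1, 5)
data = np.column_stack(pos_items + [neg_item])
data[:, 5] = 6 - data[:, 5]   # reverse-score the negatively keyed item before scoring

for item in (0, 5):
    print(f"item {item}: corrected item-total r = {corrected_item_total(data, item):.2f}")
```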


Examining The Performance Of The Metropolis-Hastings Robbins-Monro Algorithm In The Estimation Of Multilevel Multidimensional IRT Models, Bozhidar M. Bashkov May 2015

Dissertations, 2014-2019

The purpose of this study was to review the challenges that exist in the estimation of complex (multidimensional) models applied to complex (multilevel) data and to examine the performance of the recently developed Metropolis-Hastings Robbins-Monro (MH-RM) algorithm (Cai, 2010a, 2010b), designed to overcome these challenges and implemented in both commercial and open-source software programs. Unlike other methods, which either rely on high-dimensional numerical integration or approximation of the entire multidimensional response surface, MH-RM makes use of Fisher’s Identity to employ stochastic imputation (i.e., data augmentation) via the Metropolis-Hastings sampler and then apply the stochastic approximation method of Robbins and Monro …
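
A heavily simplified, unidimensional toy version of the two alternating steps described above is sketched below for a Rasch model: a single Metropolis-Hastings step imputes each examinee's latent trait, and a Robbins-Monro step with a decreasing gain follows the complete-data gradient for the item difficulties. It is a stand-in illustration only, not the multilevel multidimensional setting or software implementations examined in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(6)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Simulate toy Rasch data (a stand-in for the far richer multilevel MIRT setting).
n_persons, n_items = 500, 8
true_b = np.linspace(-1.5, 1.5, n_items)
true_theta = rng.normal(size=n_persons)
y = (rng.random((n_persons, n_items)) < sigmoid(true_theta[:, None] - true_b)).astype(float)

b = np.zeros(n_items)         # item difficulty estimates
theta = np.zeros(n_persons)   # current imputed latent traits

def complete_loglik(theta_vec, b_vec):
    """Complete-data log-likelihood per person, with a standard normal prior on theta."""
    logits = theta_vec[:, None] - b_vec
    return (y * logits - np.log1p(np.exp(logits))).sum(axis=1) - 0.5 * theta_vec ** 2

for cycle in range(1, 201):
    # Stochastic imputation: one Metropolis-Hastings step on each person's latent trait.
    proposal = theta + rng.normal(scale=1.0, size=n_persons)
    log_ratio = complete_loglik(proposal, b) - complete_loglik(theta, b)
    accept = np.log(rng.random(n_persons)) < log_ratio
    theta = np.where(accept, proposal, theta)

    # Robbins-Monro step: follow the complete-data gradient with a decreasing gain.
    p = sigmoid(theta[:, None] - b)
    gradient = (p - y).mean(axis=0)   # d(loglik)/d(b_j), averaged over persons
    gain = cycle ** -0.75
    b = b + gain * gradient

print("true difficulties     :", np.round(true_b, 2))
print("MH-RM style estimates :", np.round(b, 2))
```

Because the gain sequence shrinks toward zero, the simulation noise introduced by the Metropolis-Hastings imputations averages out over cycles, which is the basic idea behind the stochastic approximation step.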


Extending An IRT Mixture Model To Detect Random Responders On Non-Cognitive Polytomously Scored Assessments, Mandalyn R. Swanson May 2015

Dissertations, 2014-2019

This study represents an attempt to distinguish two classes of examinees – random responders and valid responders – on non-cognitive assessments in low-stakes testing. Most of the existing literature on detecting random responders in low-stakes settings concerns cognitive tests that are dichotomously scored. However, evidence suggests that random responding also occurs on non-cognitive assessments, and as with cognitive measures, the data derived from such measures are used to inform practice. Thus, a threat to test score validity exists if examinees’ response selections do not accurately reflect their underlying level on the construct being assessed. As with …
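
The mixture idea can be sketched at the level of a single response pattern: valid responders are assumed to follow a polytomous IRT model's category probabilities, random responders to respond uniformly over categories, and Bayes' rule converts the two likelihoods into a posterior probability of class membership. The category probabilities, prior class proportion, and response patterns below are invented for illustration.

```python
import numpy as np

# Hypothetical category probabilities for a *valid* responder on three 5-category items
# (in a real application these would come from a fitted polytomous IRT model).
valid_probs = np.array([
    [0.05, 0.10, 0.20, 0.40, 0.25],
    [0.10, 0.15, 0.30, 0.30, 0.15],
    [0.02, 0.08, 0.20, 0.45, 0.25],
])
k_categories = valid_probs.shape[1]
uniform_probs = np.full_like(valid_probs, 1.0 / k_categories)   # random-responder class

prior_random = 0.10   # assumed prior proportion of random responders

def posterior_random(responses):
    """Posterior probability that a response pattern comes from the random-responder class."""
    idx = np.arange(len(responses))
    lik_valid = np.prod(valid_probs[idx, responses])
    lik_random = np.prod(uniform_probs[idx, responses])
    num = prior_random * lik_random
    return num / (num + (1 - prior_random) * lik_valid)

print(posterior_random([3, 3, 3]))   # modal, self-consistent pattern -> likely a valid responder
print(posterior_random([0, 4, 0]))   # erratic pattern -> higher chance of random responding
```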