Open Access. Powered by Scholars. Published by Universities.®

Quantitative Psychology Commons

Articles 1 - 4 of 4

Full-Text Articles in Quantitative Psychology

A Novel Examination Of None-Of-The-Above As It Influences Examinee Item Responses, Kathryn N. Thompson May 2023

Dissertations, 2020-current

It is imperative to collect validity evidence prior to interpreting and using test scores. During the process of collecting validity evidence, test developers should consider whether test scores are contaminated by sources of extraneous information. This is referred to as construct-irrelevant variance, or the “degree to which test scores are affected by processes that are extraneous to the test’s intended purpose” (AERA et al., 2014, p. 12). One possible source of construct-irrelevant variance is the violation of item-writing guidelines, such as the guideline to “avoid the use of none-of-the-above” in multiple-choice items (Rodriguez, 2016, p. 268).

Numerous studies have been conducted with …


The Use Of Complex-Structure Items In Multistage Testing, Paulius Satkus May 2022

Dissertations, 2020-current

When developing tests, measurement experts may prefer simple-structure items because they measure a single trait, which simplifies scoring and score interpretation. Conversely, complex-structure items may be preferred because they reflect the complexity of multidimensional constructs. The current study sought to address a gap in the multistage testing literature by conducting a simulation study with a hypothetical two-stage adaptive test, with the purpose of comparing the performance of simple- and complex-structure items. The findings suggest that with a longer test (60 items), the two types of items performed similarly with respect to bias and RMSE of the trait estimates. For the …
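The bias and RMSE criteria mentioned in the abstract are standard simulation-study summaries of how trait estimates recover a known true value. As a minimal sketch (the variable names and example values are hypothetical, not from the dissertation), they can be computed as:

```python
import math

def bias(estimates, true_value):
    """Mean signed difference between the trait estimates and the true value.

    Positive bias means the estimates are systematically too high.
    """
    return sum(e - true_value for e in estimates) / len(estimates)

def rmse(estimates, true_value):
    """Root mean squared error: combines bias and sampling variability."""
    return math.sqrt(sum((e - true_value) ** 2 for e in estimates) / len(estimates))

# Hypothetical trait estimates from replications of a simulated examinee
# whose true trait value is 0.0.
theta_hat = [0.10, -0.20, 0.05, 0.15]
print(bias(theta_hat, 0.0))   # 0.025
print(round(rmse(theta_hat, 0.0), 4))
```

In a simulation like the one described, these summaries would typically be computed per condition (test length, item structure) and compared across conditions.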


Getting Caught-Up In The Process: Does It Really Matter?, Nikole Gregg May 2021

Dissertations, 2020-current

Likert items are the most commonly used item type for measuring attitudes and beliefs. However, responses from Likert items are often plagued with construct-irrelevant variance due to response style behavior. In other words, variability in Likert-item scores can be partitioned into: 1) variance pertinent to the construct or trait of interest, and 2) variance irrelevant to the construct or trait of interest. Multidimensional Item Response Theory (MIRT) is an increasingly common modeling approach for parsing out information about response style traits from the trait of interest. These MIRT approaches are categorized into threshold-based approaches and response process approaches. An increasingly …


Does Coding Method Matter? An Examination Of Propensity Score Methods When The Treatment Group Is Larger Than The Comparison Group, Beth A. Perkins May 2021

Dissertations, 2020-current

In educational contexts, students often self-select into specific interventions (e.g., courses, majors, extracurricular programming). When students self-select into an intervention, systematic group differences may threaten the validity of inferences made about the effect of the intervention. Propensity score methods are commonly used to reduce selection bias in estimates of treatment effects. In educational settings, however, the treatment group is often larger than the comparison group, and recommendations for applying propensity score methods in that situation have not been empirically examined. The current study examined the recommendation to recode the treatment and …
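Propensity score methods, as referenced above, adjust treatment-effect estimates by weighting each student by the inverse of their probability of receiving the treatment they actually received. The sketch below is illustrative only, assuming propensity scores have already been estimated (e.g., via logistic regression); the function name and example data are hypothetical, not from the dissertation:

```python
def ipw_ate(outcomes, treated, pscores):
    """Inverse-propensity-weighted estimate of the average treatment effect.

    outcomes: observed outcome for each student
    treated:  1 if the student received the intervention, else 0
    pscores:  estimated probability of treatment for each student

    Treated students are weighted by 1/p, comparison students by 1/(1-p),
    so each group is reweighted to resemble the full sample.
    """
    n = len(outcomes)
    treated_mean = sum(y * t / p for y, t, p in zip(outcomes, treated, pscores)) / n
    control_mean = sum(y * (1 - t) / (1 - p) for y, t, p in zip(outcomes, treated, pscores)) / n
    return treated_mean - control_mean

# Hypothetical data: four students, two treated, two comparison.
ate = ipw_ate(outcomes=[3.0, 5.0, 2.0, 4.0],
              treated=[1, 1, 0, 0],
              pscores=[0.6, 0.7, 0.4, 0.3])
print(round(ate, 3))
```

Note that when the treatment group is much larger than the comparison group, the comparison students' weights 1/(1-p) can become large and unstable, which is one reason the coding (which group counts as "treatment") can matter in practice.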