Open Access. Powered by Scholars. Published by Universities.®

Social and Behavioral Sciences Commons

Articles 1 - 11 of 11

Full-Text Articles in Social and Behavioral Sciences

A Latent Class Analysis Of Personality Traits With Educational Attainment, Tyler Minter Aug 2022

College of Education and Human Sciences: Dissertations, Theses, and Student Research

The five-factor model of personality (extraversion, agreeableness, conscientiousness, neuroticism, openness to experience) is an empirically based personality model that has been used in many psychological assessments. Recent work has recovered Block & Block’s (1980) three personality profiles (resilient, overcontrolled, undercontrolled) within the context of the five-factor model. This study performed a latent class analysis using a short FFM assessment from the SAPA project, a free online personality test, with the intention of replicating the three personality profiles within the five-factor model. Four latent classes were included in the final solution. Two of the three personality profiles emerged …
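The class-assignment step of a latent class analysis with continuous indicators can be sketched as the E-step of a Gaussian mixture: given class profiles, each person receives a posterior probability of belonging to each class. The profile means, standard deviation, and priors below are hypothetical illustrations, not the study's estimates.

```python
import numpy as np

# Hypothetical standardized trait profiles (E, A, C, N, O) for the three
# Block & Block classes; these values are illustrative only.
profiles = {
    "resilient":       np.array([ 0.5,  0.5,  0.5, -0.5,  0.3]),
    "overcontrolled":  np.array([-0.6,  0.2,  0.4,  0.6, -0.2]),
    "undercontrolled": np.array([ 0.2, -0.5, -0.6,  0.3,  0.1]),
}
sd = 1.0                              # assumed common indicator SD
priors = {k: 1 / 3 for k in profiles}  # assumed equal class sizes

def class_posteriors(scores):
    """Posterior class probabilities for one person's five trait scores,
    assuming independent normal indicators within each class."""
    logliks = {k: -np.sum((scores - mu) ** 2) / (2 * sd ** 2)
               for k, mu in profiles.items()}
    weights = {k: priors[k] * np.exp(v) for k, v in logliks.items()}
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

person = np.array([0.4, 0.6, 0.5, -0.4, 0.2])  # near the "resilient" profile
post = class_posteriors(person)
```

A full LCA iterates this E-step with an M-step that re-estimates the profiles until the likelihood converges.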


Statistical Mediation Analysis In Regression Discontinuity Design For Causal Inference, Donna Chen Dec 2021

College of Education and Human Sciences: Dissertations, Theses, and Student Research

Regression discontinuity designs (RDDs) are among the most robust quasi-experimental designs, but current statistical models are limited to estimating the simple causal relationship between only two variables: the independent and dependent variables. In practice, intervening variables (or mediators) are often observed as part of the causal chain. Mediators explain why and how a treatment or intervention works. Therefore, combining mediation analysis with RDD can be a useful tool for identifying the key components or processes that make intervention programs effective while making causal inferences for improving student achievement, despite natural constraints, limitations, and ethical considerations. Without an integrated framework of …
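The core of a statistical mediation analysis is the product-of-coefficients indirect effect: the a-path (treatment to mediator) times the b-path (mediator to outcome, controlling for treatment). A minimal sketch on synthetic data, which illustrates the mediation component only, not the RDD integration the thesis develops:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic causal chain: treatment X -> mediator M -> outcome Y.
x = rng.binomial(1, 0.5, n).astype(float)
m = 0.6 * x + rng.normal(0, 1, n)             # a-path: X affects M
y = 0.4 * m + 0.2 * x + rng.normal(0, 1, n)   # b-path plus a direct effect

def ols(outcome, *preds):
    """Least-squares coefficients (intercept first, then slopes)."""
    X = np.column_stack([np.ones_like(outcome), *preds])
    coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return coef

a = ols(m, x)[1]       # effect of X on M
b = ols(y, x, m)[2]    # effect of M on Y, controlling for X
indirect = a * b       # product-of-coefficients mediation estimate
```

With the generating values above, the indirect effect should recover roughly 0.6 × 0.4 up to sampling error.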


Investigating The Fit Of The Generalized Graded Unfolding Model (GGUM) When Calibrated To IRT Generated Data From Dominance And Ideal Point Models, Abdulla Alzarouni Jul 2021

College of Education and Human Sciences: Dissertations, Theses, and Student Research

The assessment of model fit in latent trait modelling, better known as item response theory (IRT), is an integral part of model testing if one is to make valid inferences about the estimated parameters and their properties based on the selected IRT model. Though important, the assessment of model fit has been less utilized in IRT research than it should be. For example, there has been less research investigating fit for polytomous dominance models such as the Graded Response Model (GRM), and to a lesser extent for ideal point models such as the Generalized Graded Unfolding Model (GGUM), both in its dichotomous and …
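The dominance/ideal-point distinction can be seen in the shape of the item response function: dominance models are monotone in the latent trait, while ideal-point (unfolding) models peak at the item's location. A stylized sketch, using a 2PL for the dominance case and a simple squared-distance unfolding form as a stand-in for the full GGUM likelihood:

```python
import math

def p_dominance(theta, a=1.0, b=0.0):
    """2PL dominance model: endorsement probability rises
    monotonically with the latent trait theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def p_ideal_point(theta, delta=0.0, a=1.0):
    """Stylized ideal-point (unfolding) response function: endorsement
    peaks when theta is near the item location delta and falls off on
    both sides (a simplification, not the full GGUM)."""
    return math.exp(-a * (theta - delta) ** 2)

# Dominance: higher theta always means higher endorsement probability.
# Ideal point: respondents far above OR far below delta both disagree.
```

Fitting a dominance model to data generated from an unfolding process (or vice versa) is exactly the mismatch whose fit consequences the thesis investigates.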


Evaluation Of Modern Missing Data Handling Methods For Coefficient Alpha, Katerina Matysova Dec 2019

College of Education and Human Sciences: Dissertations, Theses, and Student Research

When assessing a certain characteristic or trait using a multiple-item measure, the quality of that measure can be assessed by examining its reliability. To avoid multiple time points, reliability can be represented by internal consistency, which is most commonly calculated using Cronbach’s coefficient alpha. Almost every time human participants are involved in research, there is missing data: even though complete data were expected to be collected, some data are absent. Missing data can follow different patterns as well as result from different mechanisms. One traditional way to deal with missing data is listwise …
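Coefficient alpha is computed from the item variances and the variance of the total score: alpha = k/(k-1) * (1 - sum of item variances / variance of the sum). A minimal complete-data sketch (the item scores below are made up):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's coefficient alpha for an (n_persons, k_items) matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / var(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Five respondents on four highly inter-correlated Likert items
# (hypothetical data), so alpha should come out high.
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])
alpha = cronbach_alpha(scores)
```

This formula assumes complete data; under listwise deletion it is applied only to the rows with no missing responses, which is precisely where the modern methods the thesis evaluates (e.g., multiple imputation, full-information approaches) differ.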


Using Bayesian Multilevel Models To Control For Multiplicity Among Means, Michael J. Zweifel Nov 2018

College of Education and Human Sciences: Dissertations, Theses, and Student Research

It is well known that the Type I error rate will exceed α when multiple hypothesis tests are conducted simultaneously. This is known as Type I error inflation. The probability of committing a Type I error grows monotonically as the number of hypotheses being tested increases. A class of methods, known as multiple comparison procedures, has been developed to combat this issue. However, in return for maintaining the Type I error rate below α, multiple comparison procedures sacrifice power to correctly reject false hypotheses. The loss of power is exacerbated when variance heterogeneity is present.

In …


A Comparison Of Alternative Bias-Corrections In The Bias-Corrected Bootstrap Test Of Mediation, Donna Chen Jul 2018

College of Education and Human Sciences: Dissertations, Theses, and Student Research

Although the bias-corrected (BC) bootstrap is an oft-recommended method for obtaining more powerful confidence intervals in mediation analysis, it has also been found to have elevated Type I error rates in conditions with small sample sizes. Given that the BC bootstrap is used most often in studies with low power due to small sample size, the focus of this study is to consider alternative measures of bias that reduce the elevated Type I error rate without reducing power. The alternatives examined fall into two categories: bias correction and transformation. Although the bias correction methods did not significantly decrease …
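The standard BC correction estimates a bias constant z0 from the share of bootstrap replicates falling below the full-sample statistic, then shifts the percentile endpoints by 2*z0. A stdlib-only sketch of that standard procedure (shown here for a sample mean; the alternative bias measures the thesis compares would replace the z0 step):

```python
import random
from statistics import NormalDist

def bc_bootstrap_ci(data, stat, n_boot=2000, level=0.95, seed=1):
    """Bias-corrected (BC) percentile bootstrap confidence interval.
    z0 = Phi^{-1}(share of bootstrap estimates below the full-sample
    statistic); the percentile endpoints are then shifted by 2*z0."""
    rng = random.Random(seed)
    theta_hat = stat(data)
    boots = sorted(
        stat([rng.choice(data) for _ in range(len(data))])
        for _ in range(n_boot)
    )
    nd = NormalDist()
    prop_below = sum(b < theta_hat for b in boots) / n_boot
    # Clamp so the inverse CDF stays finite in degenerate cases.
    z0 = nd.inv_cdf(min(max(prop_below, 1 / n_boot), 1 - 1 / n_boot))
    z_alpha = nd.inv_cdf((1 - level) / 2)          # e.g. -1.96 at 95%
    lo_p = nd.cdf(2 * z0 + z_alpha)
    hi_p = nd.cdf(2 * z0 - z_alpha)
    lo = boots[int(lo_p * (n_boot - 1))]
    hi = boots[int(hi_p * (n_boot - 1))]
    return lo, hi

data = [2.1, 2.5, 1.9, 3.2, 2.8, 2.4, 3.0, 2.2, 2.6, 2.9]  # toy sample
lo, hi = bc_bootstrap_ci(data, lambda xs: sum(xs) / len(xs))
```

In a mediation context, `stat` would return the indirect effect a*b computed on each resample.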


An Evaluation And Revision Of The Children’s Behavior Questionnaire Effortful Control Scales, Scott R. Frohn Jun 2017

College of Education and Human Sciences: Dissertations, Theses, and Student Research

The Children’s Behavior Questionnaire (CBQ; Rothbart, Ahadi, Hershey, & Fisher, 2001) is a popular parent report measure of children’s temperament. Effortful control, which refers to processes involved in regulating reactivity to internal and external stimuli, is one factor of temperament measured by the CBQ using five scales tapping multiple dimensions. Numerous studies examining the psychometric properties of the CBQ have shown some problems with the scales, including inconsistent factor structures and measurement noninvariance. Furthermore, the way effortful control is typically defined in the literature, and even according to the CBQ’s authors, is inconsistent with how it is actually measured with …


The Effects Of Scaling On Trends Of Development: Classical Test Theory And Item Response Theory, Weldon Z. Smith Apr 2016

College of Education and Human Sciences: Dissertations, Theses, and Student Research

The scale metrics used in educational testing are often arbitrary, and this can affect the interpretation of scores on measurements. Both classical test theory sum scores and item response theory estimates measure the same underlying dimension, but differences between the two scales may make one preferable to the other for interpreting data. Mismatch between individual ability and test difficulty can further lead to difficulties in correctly interpreting trends of development in longitudinal data. A previous limited simulation by Embretson (2007) demonstrated that classical test theory sum scores result in misinterpretation of linear trends of development, and that item …
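The mismatch arises because the mapping from the IRT latent trait to the expected sum score is S-shaped: an equal gain in theta produces a smaller sum-score gain near a test's floor or ceiling than in its middle. A sketch under a Rasch model with hypothetical item difficulties:

```python
import math

item_difficulties = [-1.5, -0.5, 0.0, 0.5, 1.5]  # hypothetical Rasch b-values

def expected_sum_score(theta):
    """Expected classical sum score under a Rasch model: the sum of the
    item endorsement probabilities at ability theta."""
    return sum(1 / (1 + math.exp(-(theta - b))) for b in item_difficulties)

# The same one-unit theta gain yields very different sum-score gains
# depending on where the examinee sits relative to the item difficulties,
# which is how equal latent growth can look like decelerating growth
# on the sum-score metric.
middle_gain = expected_sum_score(0.5) - expected_sum_score(-0.5)
ceiling_gain = expected_sum_score(3.5) - expected_sum_score(2.5)
```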


A Comparison Of Population-Averaged And Cluster-Specific Approaches In The Context Of Unequal Probabilities Of Selection, Natalie A. Koziol May 2015

College of Education and Human Sciences: Dissertations, Theses, and Student Research

Sampling designs of large-scale, federally funded studies are typically complex, involving multiple design features (e.g., clustering, unequal probabilities of selection). Researchers must account for these features in order to obtain unbiased point estimators and make valid inferences about population parameters. Single-level (i.e., population-averaged) and multilevel (i.e., cluster-specific) methods provide two alternatives for modeling clustered data. Single-level methods rely on the use of adjusted variance estimators to account for dependency due to clustering, whereas multilevel methods incorporate the dependency into the specification of the model.
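The cost of ignoring that dependency is often summarized by the design effect for equal-sized clusters, DEFF = 1 + (m - 1) * ICC, which deflates the nominal sample size. A short sketch (cluster size, ICC, and n are hypothetical):

```python
def design_effect(cluster_size, icc):
    """Design effect for equal-sized clusters of size m with intraclass
    correlation ICC: DEFF = 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(n, cluster_size, icc):
    """Nominal sample size deflated by the design effect."""
    return n / design_effect(cluster_size, icc)

# 100 clusters of 20 students with a modest ICC of .10: clustering
# costs well over half of the nominal sample size.
n_eff = effective_sample_size(2000, 20, 0.10)
```

Single-level (population-averaged) analyses typically fold this dependency into adjusted variance estimators, while multilevel (cluster-specific) models represent the ICC directly, which is the contrast the abstract describes.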

Although the literature comparing single-level and multilevel approaches is vast, comparisons have been limited to the …


A Micro-Level Analysis Of Behavioral Dynamics In Parent-Child Synchrony, Kadie L. Ausherman Aug 2014

College of Education and Human Sciences: Dissertations, Theses, and Student Research

This study investigates parent-child synchrony, a multilevel construct that has not been operationalized in a precise or standardized way. Synchrony is frequently discussed theoretically, yet a clear means of measuring it is still lacking, even at the behavioral level. When parent-child synchrony is operationalized in a study, it is rarely analyzed in a way that reflects the dyadic dynamics that unfold as the parent and child interact. The aim of this study is to operationalize parent-child synchrony in terms of dyadic behavior patterns. An overview of the current literature with regard to synchrony as a multilevel construct …
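One common micro-level way to quantify dyadic coordination in behavioral time series is lagged cross-correlation: correlating the parent's series with the child's series shifted in time, so that leader-follower structure shows up as an off-zero peak. This is an illustrative operationalization, not necessarily the coding scheme the thesis uses:

```python
import numpy as np

def lagged_correlation(parent, child, lag):
    """Pearson correlation between the parent series and the child
    series shifted by `lag` steps (positive lag: child follows parent)."""
    parent = np.asarray(parent, dtype=float)
    child = np.asarray(child, dtype=float)
    if lag > 0:
        a, b = parent[:-lag], child[lag:]
    elif lag < 0:
        a, b = parent[-lag:], child[:lag]
    else:
        a, b = parent, child
    return np.corrcoef(a, b)[0, 1]

# Toy engagement ratings in which the child tracks the parent with a
# one-step delay, so the lag-1 correlation exceeds the lag-0 one.
parent = np.array([0, 1, 2, 3, 2, 1, 0, 1, 2, 3, 2, 1], dtype=float)
child = np.roll(parent, 1)  # child mirrors the parent one step later
r_lag1 = lagged_correlation(parent, child, 1)
r_lag0 = lagged_correlation(parent, child, 0)
```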


Improving IRT Parameter Estimates With Small Sample Sizes: Evaluating The Efficacy Of A New Data Augmentation Technique, Brett P. Foley Jul 2010

College of Education and Human Sciences: Dissertations, Theses, and Student Research

The 3PL model is a flexible and widely used tool in assessment. However, it suffers from limitations due to its need for large sample sizes. This study introduces and evaluates the efficacy of a new sample size augmentation technique called Duplicate, Erase, and Replace (DupER) Augmentation through a simulation study. Data are augmented using several variations of DupER Augmentation (based on different imputation methodologies, deletion rates, and duplication rates), analyzed in BILOG-MG 3, and results are compared to those obtained from analyzing the raw data. Additional manipulated variables include test length and sample size. Estimates are compared using seven different …
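For context, the 3PL item response function adds a lower asymptote (guessing) parameter c to the 2PL: P(theta) = c + (1 - c) / (1 + exp(-a(theta - b))). It is this extra parameter that makes the model data-hungry. A minimal sketch with hypothetical item parameters:

```python
import math

def p_3pl(theta, a, b, c):
    """Three-parameter logistic (3PL) probability of a correct response:
    a = discrimination, b = difficulty, c = lower asymptote (guessing)."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# The guessing parameter floors the curve: even very low-ability
# examinees answer correctly at a rate near c.
low = p_3pl(theta=-4.0, a=1.2, b=0.0, c=0.2)   # close to c = 0.2
high = p_3pl(theta=4.0, a=1.2, b=0.0, c=0.2)   # close to 1.0
```

Estimating a, b, and c jointly for every item is what drives the large-sample requirement that the DupER augmentation technique is designed to relax.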