Full-Text Articles in Other Psychology
Criterion-Related Validity of Forced-Choice Personality Measures: A Cautionary Note Regarding Thurstonian IRT Versus Classical Test Theory Scoring, Peter A. Fisher, Chet Robie, Neil D. Christiansen, Andrew B. Speer, Leann Schneider
Personnel Assessment and Decisions
This study examined criterion-related validity for job-related composites of forced-choice personality scores against job performance using both Thurstonian Item Response Theory (TIRT) and Classical Test Theory (CTT) scoring methods. Correlations were computed across 11 different samples that differed in job or role within a job. A meta-analysis of the correlations (k = 11 and N = 613) found a higher average corrected correlation for CTT (mean ρ = .38) than for TIRT (mean ρ = .00). Implications and directions for future research are discussed.
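The averaging step described above can be illustrated with a minimal sketch. This is not the study's actual procedure or data (only k = 11, N = 613, and the mean corrected correlations are reported); the correlation and sample-size values below are hypothetical placeholders, and the function shows only the simplest form of meta-analytic averaging, a sample-size-weighted mean correlation:

```python
# Hedged sketch of meta-analytic averaging: a sample-size-weighted
# mean of validity correlations across k samples. The data below are
# illustrative placeholders, NOT values from the study.

def weighted_mean_r(correlations, sample_sizes):
    """Sample-size-weighted mean correlation across k samples."""
    total_n = sum(sample_sizes)
    return sum(r * n for r, n in zip(correlations, sample_sizes)) / total_n

# Hypothetical example with k = 3 samples
rs = [0.30, 0.40, 0.35]   # observed validity correlations
ns = [50, 100, 60]        # sample sizes
mean_r = weighted_mean_r(rs, ns)  # weighted toward the larger samples
```

A full psychometric meta-analysis would additionally correct each correlation for unreliability and range restriction before averaging, which this sketch omits.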
Special Issue - Call for Papers: Applications of Judgment and Decision Making to Problems in Personnel Assessment, Edgar E. Kausel, Alexander T. Jackson
Personnel Assessment and Decisions
No abstract provided.
Creating Test Score Bands for Assessments Involving Ratings Using a Generalizability Theory Approach to Reliability Estimation, Charles Scherbaum, Marcus Dickson, Elliott Larson, Brian Bellenger, Kenneth Yusko, Harold Goldstein
Personnel Assessment and Decisions
The selection of a method for estimating the reliability of ratings has considerable implications for the use of assessments in personnel selection. In particular, the accuracy of corrections to validity coefficients for unreliability, and of test score bands, depends entirely on correct estimation of the reliability. In this paper, we discuss how generalizability theory can be used to estimate reliability for test score bands with assessments involving ratings. Using selection data from a municipal entity, we demonstrate generalizability theory-based reliability estimation and compare the implications of its use in test score banding with those of the traditional approach.
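The dependence of score bands on the reliability estimate can be seen in a minimal sketch. This is not the authors' exact banding procedure; it uses one common convention in which the band half-width is a multiple of the standard error of measurement, SEM = SD * sqrt(1 - reliability), and all numeric values below are hypothetical:

```python
import math

# Hedged sketch: how a test score band follows from a reliability
# estimate. One common convention (not necessarily the paper's exact
# procedure) sets the band half-width to c standard errors of
# measurement. All values below are illustrative, not from the paper.

def score_band(score, sd, reliability, c=1.96):
    """Return a (low, high) band of +/- c SEMs around an observed score."""
    sem = sd * math.sqrt(1.0 - reliability)
    return (score - c * sem, score + c * sem)

# A higher reliability estimate yields a narrower band, so the choice
# of estimation method (e.g., G-theory vs. classical) changes which
# candidates fall within the same band.
low, high = score_band(score=75.0, sd=10.0, reliability=0.85)
```

With these placeholder inputs, raising the reliability estimate shrinks the SEM and hence the band, which is why the paper argues the estimation method matters for banding decisions.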