Social and Behavioral Sciences Commons

Bowling Green State University · Journal · 2021 · Validity

Articles 1 - 2 of 2

Full-Text Articles in Social and Behavioral Sciences

Applicant Faking On Personality Tests: Good Or Bad And Why Should We Care?, Robert P. Tett, Daniel V. Simonet May 2021

Personnel Assessment and Decisions

The unitarian understanding of construct validity holds that deliberate response distortion in completing self-report personality tests (i.e., faking) threatens trait-based inferences drawn from test scores. This “faking-is-bad” (FIB) perspective is being challenged by an emerging “faking-is-good” (FIG) position that condones or favors faking and its underlying attributes (e.g., social skill, ability to identify criteria [ATIC]) to the degree they contribute to predictor–criterion correlations and are job relevant. Based on the unitarian model of validity and relevant empirical evidence, we argue the FIG perspective is psychometrically flawed and counterproductive to personality-based selection targeting trait-based fit. Carrying forward both positions leads to variously dark futures for …


Faking And The Validity Of Personality Tests: An Experimental Investigation Using Modern Forced Choice Measures, Christopher R. Huber, Nathan R. Kuncel, Katie B. Huber, Anthony S. Boyce May 2021

Personnel Assessment and Decisions

Despite the established validity of personality measures for personnel selection, their susceptibility to faking has been a persistent concern. However, the lack of studies that combine generalizability with experimental control makes it difficult to determine the effects of applicant faking. This study addressed this deficit in two ways. First, we compared a subtle incentive to fake with the explicit “fake-good” instructions used in most faking experiments. Second, we compared standard Likert scales to multidimensional forced choice (MFC) scales designed to resist deception, including more and less fakable versions of the same MFC inventory. MFC scales substantially reduced motivated score elevation …