Meta-analyses have overestimated both the primacy of cognitive ability and the validity of a wide range of predictors within the personnel selection arena, according to groundbreaking research led by Paul R. Sackett, Ph.D., chair of the HumRRO Board of Trustees and the Beverly and Richard Fink Distinguished Professor of Psychology and Liberal Arts at the University of Minnesota.

In a world embracing simplicity and certainty, researchers often take great pains to emphasize the tentative nature of their conclusions—well-captured by the phrase, “Statistics means never having to say you’re certain,” and the shopworn joke about psychologists responding to all questions with, “It depends.”

Even so, there must be some things researchers assert confidently, right? Some unassailable, unimpeachable principles strong enough to build decades of research on?

Within the field of industrial-organizational (I-O) psychology, there has been at least one such fundamental truth: cognitive ability is the best predictor of work performance. Rooted in numerous meta-analyses and confidently proclaimed for over half a century, this principle has anchored decades of research as well as hiring and promotion practices.

Thus, it would take a giant in I-O psychology like Sackett to thoughtfully and rigorously revisit the statistical corrections that lie at the heart of meta-analytic methods and challenge 50 years of research. In doing so, Sackett’s work, among other intriguing findings, revealed that structured interviews may in fact be the strongest predictor of job performance—not cognitive ability.

“I view this as the most important paper of my career,” Sackett said, noting that it offers a “course correction” to the I-O field’s cumulative knowledge about the validity of personnel selection assessments. This consequential paper, “Revisiting meta-analytic estimates of validity in personnel selection: Addressing systematic overcorrection for restriction of range,” recently released as an advance online publication by the Journal of Applied Psychology, is co-authored by Charlene Zhang, Ph.D., from the University of Minnesota, Christopher Berry, Ph.D., from Indiana University, and Filip Lievens, Ph.D., from Singapore Management University. Berry received HumRRO’s Meredith P. Crawford Fellowship in 2006.

Correcting the Corrections

The critique levied by Sackett and his co-authors aims directly at the “nuts and bolts” of meta-analytic methodology, so a brief review of those methods helps one fully appreciate the nature and importance of their contributions. As the most common approach to synthesizing research findings across studies, meta-analyses typically involve the following steps:

  • Step 1: Gather the primary studies that report a correlation between a given predictor (e.g., a cognitive ability test or a structured interview) and job performance.
  • Step 2: Record each study’s observed validity coefficient and sample size.
  • Step 3: Correct the observed correlations for statistical artifacts, most notably unreliability in the performance criterion and restriction of range on the predictor.
  • Step 4: Compute a sample-size-weighted average of the corrected correlations and examine how much validity varies across studies.
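For readers who prefer a concrete illustration, the short Python sketch below walks through the weighting logic behind Steps 2 and 4 with made-up numbers. The study values and the function name are invented for this example rather than drawn from the paper, and real meta-analyses layer artifact corrections and variance estimates on top of this bare-bones calculation.

```python
# Minimal sketch of Steps 2 and 4: a sample-size-weighted average of
# observed validity coefficients. All study values below are invented.

studies = [
    {"r": 0.22, "n": 150},  # observed predictor-performance correlation and sample size
    {"r": 0.31, "n": 420},
    {"r": 0.18, "n": 95},
    {"r": 0.27, "n": 260},
]

def weighted_mean_validity(studies):
    """Sample-size-weighted mean of observed correlations (Step 4)."""
    total_n = sum(s["n"] for s in studies)
    return sum(s["r"] * s["n"] for s in studies) / total_n

print(f"Mean observed validity: {weighted_mean_validity(studies):.2f}")
```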

Focusing on Step 3, Sackett and his colleagues argue that commonly used corrections systematically inflate the estimated relations between personnel selection assessments and job performance. They are particularly critical of one widespread practice: applying range restriction estimates generated from predictive validation studies to the full set of studies in a meta-analysis, even though many of those studies used concurrent designs.

The two shouldn’t be treated the same way. Predictive validation designs involve actual job applicants who are hired on the basis of the assessment, whereas concurrent validation designs administer the same assessment to current employees. Because those employees were never screened on the assessment, their scores are far less range restricted than an applicant pool’s, so applying applicant-based correction values to concurrent studies overcorrects. Sackett and his colleagues convincingly argue that such “across the board” corrections inflate validity estimates, sometimes to a substantial degree.
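To see why the distinction matters, consider the standard Thorndike Case II correction for direct range restriction, sketched below. The observed validity of .25 and the two restriction ratios are illustrative values of our own choosing, not figures from the paper; the point is simply that plugging an applicant-pool restriction ratio into a concurrent sample’s correlation yields a much larger “corrected” validity than a ratio that reflects the milder restriction actually present among incumbents.

```python
import math

def correct_range_restriction(r_obs, u):
    """Thorndike Case II correction for direct range restriction.

    r_obs : correlation observed in the (possibly restricted) study sample
    u     : ratio of the restricted SD to the unrestricted (applicant-pool) SD
            on the predictor; smaller u means more severe restriction
    """
    return r_obs / math.sqrt(u**2 + r_obs**2 * (1 - u**2))

r_obs = 0.25  # illustrative observed validity, not a figure from the paper

# Restriction ratio typical of predictive studies, where applicants were hired on the test
print(f"{correct_range_restriction(r_obs, u=0.6):.2f}")  # 0.40

# Milder ratio for incumbents who were never screened on this assessment
print(f"{correct_range_restriction(r_obs, u=0.9):.2f}")  # 0.28
```

In this toy example, the same observed correlation becomes roughly .40 under the applicant-based ratio but only about .28 under the milder incumbent-based ratio; that gap illustrates the kind of overcorrection the authors argue has inflated published meta-analytic estimates.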

Future Research Implications

The nuanced and thoughtful critiques of meta-analytic corrections shared by Sackett and his colleagues extend beyond the example noted above, yet they all reflect a common set of guiding principles that future meta-analytic work would be wise to follow, chief among them that corrections should be based on the designs and conditions of the studies actually being corrected rather than applied across the board.

Practical Take-Aways

Using these principles as a guide, Sackett and his colleagues re-analyzed studies included in earlier meta-analyses along with more recently conducted research, and the outcomes of their work hold many lessons for I-O researchers and practitioners alike:

  • Structured interviews emerged as the strongest predictors of job performance. Sackett and his colleagues offer that this finding “suggests a reframing: while Schmidt and Hunter (1998) positioned cognitive ability as the focal predictor, with others evaluated in terms of their incremental validity over cognitive ability, one might propose structured interviews as the focal predictor against which others are evaluated.”
  • Structured interview validities are somewhat variable. While structured interviews had the highest mean operational validity (r = .42), they also showed a relatively high degree of spread around that mean. Particularly given the wide range of constructs targeted by structured interviews, not to mention the advent of digital interviewing and AI-based interview scoring, this finding is a compelling call for researchers to identify the factors responsible for this variation and the approaches to developing, administering, and scoring structured interviews that foster strong validities.
  • Job-specific assessments fared quite well. Along with structured interviews, several other job-specific assessments—including job knowledge tests, empirically-keyed biodata, and work sample tests—appeared among the top five strongest predictors of job performance (with validities of .40, .38, and .33, respectively). Cognitive ability rounded out this list with a validity estimate of .31.
  • Interests should be measured as the fit between personal interests and the interest profile of a specific job. Compared to earlier work, the operational validity of interests increased from .10 to .24, a boost that reflects Sackett and his colleagues defining interests in a fit-based way (i.e., the match between personal interests and the demands of a particular job) rather than a general way (i.e., the relation between a broad interest type, such as artistic or investigative, and overall job performance).
  • Tailoring personality items to the job context increases their predictive validity. In fact, the validities were so much stronger for contextualized personality assessments (i.e., adding “at work” to each item or asking applicants to respond in terms of how they behave at work) that Sackett and his colleagues suggest viewing them as essentially a different type of assessment relative to more general personality inventories.

“This work is another example of the rigor, thoughtfulness, and impact that always characterizes Paul’s work, not to mention the direct relevance for applied practice,” said Cheryl Paullin, Ph.D., Vice President of Operations at HumRRO. “I have no doubt it will have a substantial impact on the I-O field in the coming years. The findings are also consistent with my experience that well-crafted structured interviews, grounded in detailed job analytic data and conducted by well-trained interviewers, are one of the best personnel selection tools we have to offer our clients.”

Gavan O’Shea - Manager, Business Development

For more information, contact:

Gavan O’Shea, Ph.D.

Manager, Business Development