Human Capital Consulting Blog


Nerdy Post Alert: Using Multidimensional IRT to Assess Ability



Jun 14, 2017 | by Kyle Morgan


Have you ever taken a test before getting a job?

Have you ever taken one that feels like it’s trying to figure out how smart you are?

That’s probably because it was.

Cognitive ability tests are popular assessments for a wide range of professional positions and are among the best predictors of on-the-job performance across jobs. This is because these tests measure an individual's ability to learn, retain, and use information, and it's easy to see why those skills would help a person succeed in a wide range of occupations. The difficulty with using tests of cognitive ability, however, is that they often demonstrate adverse impact; that is, minority applicants consistently score lower on the tests than majority applicants, so when the tests are used to make selection decisions, fewer minority applicants end up being selected for jobs, which can in turn result in lawsuits.

One promising avenue of research for eliminating adverse impact is to look individually at the specific cognitive abilities that make up a cognitive ability test, rather than lumping them all together into one score (which is typically thought of as “intelligence”). One way to do this is to use a technique known as Multidimensional Item Response Theory (MIRT) to analyze and score these tests.

Most of us are familiar with the traditional method of scoring a test: the number of correct items divided by the total number of items (e.g., 8 out of 10 = 8/10 = 80%). This is certainly the easiest and most well-known method of scoring a test. However, there is another technique known as Item Response Theory (IRT) that is used in many educational and commercial environments, including high-stakes tests such as the SAT and GRE. IRT uses the difficulty level of each item in its calculation, so you essentially get more credit for a harder item than you would for an easy one. Incorporating this information into the scoring allows us to design shorter, more accurate, and more secure tests.
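
For the truly nerdy, here is a rough sketch in Python of how this kind of scoring could work under a two-parameter logistic (2PL) IRT model. The five items and their difficulty and discrimination values are made up purely for illustration (and in this toy example the harder items also happen to be more discriminating); it is not how any particular assessment is actually scored.

```python
import numpy as np

# Hypothetical 5-item test. Parameter values are made up for illustration;
# here the harder items (larger b) also happen to discriminate more (larger a).
b = np.array([-1.5, -0.5, 0.0, 1.0, 2.0])   # item difficulty
a = np.array([ 0.8,  1.0, 1.2, 1.5, 1.7])   # item discrimination

def p_correct(theta, a, b):
    """2PL probability of answering each item correctly at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def estimate_theta(responses, a, b):
    """Maximum-likelihood ability estimate via a simple grid search."""
    grid = np.linspace(-4.0, 4.0, 801)
    log_liks = []
    for theta in grid:
        p = p_correct(theta, a, b)
        log_liks.append(np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p)))
    return grid[int(np.argmax(log_liks))]

# Two applicants each answer 3 of 5 items correctly -- an identical 60% score.
mostly_easy = np.array([1, 1, 1, 0, 0])   # correct on the three easiest items
mostly_hard = np.array([1, 0, 0, 1, 1])   # correct on the two hardest items

print(estimate_theta(mostly_easy, a, b))  # lower ability estimate
print(estimate_theta(mostly_hard, a, b))  # higher estimate, same percent correct
```

Real IRT scoring uses more efficient estimation methods than a grid search, but the point is the same: instead of every question counting for one point, each response is weighted by the item's parameters, so two applicants with the same percent correct can receive different ability estimates.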

MIRT works the same way as IRT, except it assumes that several distinct abilities underlie performance on these tests rather than a single ability (the aforementioned “intelligence”). Each ability is then given its own score under this framework, and each of those scores can be compared to measures of job performance to see which dimension or dimensions of ability are predictive of performance.
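
To make that concrete, the same toy sketch can be extended to two hypothetical dimensions, say verbal and quantitative, using a compensatory multidimensional 2PL model. Again, every parameter below is invented for illustration, and the scoring here uses a simple MAP estimate with a standard normal prior on the abilities rather than any particular vendor's method.

```python
import numpy as np
from itertools import product

# Compensatory MIRT (a multidimensional 2PL): each item loads on two
# hypothetical dimensions, say verbal and quantitative. Values are made up.
A = np.array([[1.2, 0.1],    # items 1-2: mostly verbal
              [1.0, 0.2],
              [0.1, 1.1],    # items 3-4: mostly quantitative
              [0.2, 1.3],
              [0.7, 0.7]])   # item 5: a mix of both
d = np.array([0.5, 0.0, -0.3, -1.0, 0.2])   # item intercepts (easiness)

def p_correct(theta, A, d):
    """Probability of a correct response given a vector of abilities."""
    return 1.0 / (1.0 + np.exp(-(A @ theta + d)))

def estimate_abilities(responses, A, d):
    """MAP estimate of (verbal, quantitative) ability with a standard normal
    prior, found by a coarse grid search."""
    grid = np.linspace(-3.0, 3.0, 121)
    best, best_post = None, -np.inf
    for t_v, t_q in product(grid, grid):
        theta = np.array([t_v, t_q])
        p = p_correct(theta, A, d)
        log_lik = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
        log_post = log_lik - 0.5 * theta @ theta   # add the prior
        if log_post > best_post:
            best, best_post = theta, log_post
    return best

# An applicant who missed the verbal items but answered the quantitative ones
responses = np.array([0, 0, 1, 1, 1])
print(estimate_abilities(responses, A, d))   # one score per dimension
```

In practice, MIRT scoring is done with dedicated psychometric software, but the output looks like this: a separate estimate for each ability dimension, each of which can then be validated against job performance on its own.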

One study by an Aon Hewitt colleague demonstrated that by looking at these specific abilities, the predictive validity of cognitive tests was maintained and, in some circumstances, improved. More importantly, however, by assessing each specific ability rather than using one score for total ability, adverse impact was virtually eliminated. Since concern about adverse impact is one of the greatest barriers to the implementation of cognitive ability measures, this finding could prove beneficial to the development and use of such tests in the future.

Aon Hewitt recently released a new assessment, built upon the principles of IRT, that exhibits little to no adverse impact. To learn more about the Global Adaptive Memory Evaluation (G.A.M.E.), visit our website.

 

About the Author