Companies use tests as part of their pre-employment selection process for a number of reasons. One is to efficiently narrow a large pool of candidates down to a more manageable number. Another is to accurately and fairly identify the individuals who are most likely to be successful on the job. Using accurate tools in the selection process can significantly improve a company’s chances of selecting the right people.
Most studies that look at the return on investment (ROI) of improved selection processes show that the cost of more accurate screening tools is a trivial expense compared to the return from hiring better people who are less likely to turn over. For instance, if a more accurate test would help you hire a salesperson who sold $100,000 more every year than another candidate, wouldn’t you be willing to spend $1,000 for that test? Put that way, the answer is obvious. The problem is that the situation is never that simple, or at least it doesn’t seem that simple. Two things work against us when we compare tests and selection systems.
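Put in numbers, the salesperson tradeoff above reduces to a simple break-even calculation. A minimal sketch, using only the illustrative figures from the example (not real data):

```python
# Illustrative ROI sketch using the hypothetical figures above:
# a $1,000 test that helps you hire a salesperson who sells
# $100,000 more per year than the alternative hire.

test_cost = 1_000               # one-time cost of the more accurate test
extra_sales_per_year = 100_000  # incremental annual revenue from the better hire

# Simple first-year return on investment
roi = (extra_sales_per_year - test_cost) / test_cost
print(f"First-year ROI: {roi:.0%}")  # 9900%

# Payback period in days, assuming sales accrue evenly over the year
payback_days = test_cost * 365 / extra_sales_per_year
print(f"Payback period: {payback_days:.2f} days")  # 3.65
```

Even if the incremental sales were a tenth of the illustration, the test would still pay for itself many times over in the first year, which is why the "free vs. $1,000" framing is so misleading.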
Test Comparison Roadblocks
The first is that it’s difficult to know whether one test is better than another. A personality test is a personality test, right? Not really. Some personality tests are better constructed, more reliable, and more accurate than others. Some are designed for diagnosing clinical disorders and are normed on clinical samples; others are focused on business-related uses and measure factors more closely tied to work. Nor are all cognitive ability tests the same, even though there tends to be a good deal of overlap in general intelligence (i.e., smart people tend to do better on all cognitive ability tests). However, it’s probably more important to know whether an executive is good at critical thinking than whether he or she remembers how right triangles work. It’s difficult to know which tests to use in a particular situation. That’s why there are professionals, specifically industrial/organizational psychologists, who specialize in evaluating tests, analyzing the job, and linking the two.
The second challenge we face is that we are fundamentally built to maximize immediate returns and reduce risks. As neurobiologists consistently show, when offered $100 now or $120 in a month, most people will take the $100 now. It’s essentially the old adage of “a bird in the hand is worth two in the bush.”
Thus, if someone offers you a test for hiring salespeople for free vs. spending $1,000, it’s human nature to lean toward the free option. Even if you were told that the $1,000 test would lead to an additional $100K in sales starting in year two, you would have a hard time (a) believing that to be true and (b) paying an extra $1K right now for a potential return, even a large one, in the future.
The problem, of course, is that this instinct doesn’t necessarily lead to the best decision. For instance, consider the case of using a free test to select hourly employees. Such tests are often offered to employers at no charge through state and local agencies. A free test is not necessarily a bad test, per se. At the same time, free tests tend to have higher levels of adverse impact against protected groups than alternatives that are not “free.” Consider a recent OFCCP case involving an organization’s use of a free test. Investigators found that otherwise qualified minority candidates had a 49% pass rate, compared with 72% for non-minority candidates, resulting in a comparative pass ratio of 68%. The EEOC sets 80% as the standard for adverse impact, meaning that anything below 80% is considered prima facie evidence of adverse impact. This gives affected groups a green light to file a class action lawsuit, which requires the company to defend the job-relatedness of its selection process.
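The four-fifths (80%) rule described above is just a ratio of pass rates. A minimal sketch of the calculation, using the pass rates from the OFCCP case cited in the text:

```python
# Four-fifths (80%) rule check, using the pass rates cited above.
minority_pass_rate = 0.49      # otherwise qualified minority candidates
non_minority_pass_rate = 0.72  # non-minority candidates

# Comparative pass ratio: selection rate of the protected group
# divided by the selection rate of the highest-passing group.
comparative_pass_ratio = minority_pass_rate / non_minority_pass_rate
print(f"Comparative pass ratio: {comparative_pass_ratio:.0%}")  # 68%

# Anything below the EEOC's 80% threshold is considered
# prima facie evidence of adverse impact.
adverse_impact = comparative_pass_ratio < 0.80
print(f"Prima facie adverse impact: {adverse_impact}")  # True
```

Note that the ratio compares selection *rates*, not raw counts, so a test can show adverse impact even when more minority candidates pass in absolute numbers.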
Remove Adverse Impact
Compare this to the Select Assessment® for Manufacturing (SAM), which is also used for hourly selection. Based on a review of six different applicant samples covering over 38,000 candidates, SAM had a comparative pass ratio that was consistently above 80% (the lowest in those samples was 82%). In short, even though SAM demonstrates higher predictive validity (i.e., accuracy) than some free tests, it does so without adverse impact.
What does this mean from a monetary standpoint? Based on a review of recent court decisions, the average out-of-court settlement was $590,266 for EEOC cases and $668,785 for OFCCP cases. Those numbers pale in comparison to the $13 million average for cases that went to trial and were decided in favor of the plaintiff. Free can be very expensive indeed.