No test is 100% accurate. There, I said it. But before we throw the baby out with the bathwater and abandon testing altogether, let’s look at some realistic assumptions about tests and their value in making hiring and placement decisions. Tests, by their very nature, should be considered samples of performance and signs of future performance. A well-designed test, used for the purpose it was intended, provides a lot of valuable information about a job candidate. Tests are designed to increase your odds of making a good decision and avoiding a bad one. If a test tells you that a candidate has a high propensity for taking risks, does that mean he or she will definitely get into an accident on the job? No. Remember my first sentence. But do you want to put a person who is likely to take risks in a dangerous job? Wouldn’t you want to know about that likelihood before you send them out with blasting caps and flammable liquids? I would hope so.
I bring this up because too often people have unrealistic assumptions about what tests are supposed to do. The conversation usually goes something like this: “We had someone take the test and they didn’t do well on it, but they’re a good performer, so I guess the test doesn’t work.” That may in fact be the case. Perhaps the test isn’t a good predictor of performance in that particular job. That’s one reasonable conclusion. Here are a few more. Maybe the measure of performance isn’t that great. When we compare “hard” criteria, like a salesperson’s actual sales, with “soft” criteria, like their supervisor’s rating of job performance, you’d be surprised how often the test predicts both the hard and the soft criteria more accurately than the two criteria predict each other. Outside of applying the wrong test to the job, the most likely reason a test didn’t “work” for a given individual is that tests are designed to improve your odds over time, across lots of people.
Here’s an analogy. You watch two hitters for one at-bat each in the same baseball game. One batter hits a double and the other strikes out. Which one is the better hitter? Which one do you want on your team? Ridiculous question, right? You’d never make a decision based on that limited bit of information. Wouldn’t it be helpful to know that the batter who struck out is a perennial All-Star with a .315 batting average, and the one who got the hit is a journeyman who has never hit above .230 in his career? Feel more comfortable about your decision knowing that? That “behind the scenes” information is exactly what a good test can provide. It’s never good to make decisions on very small samples. Trying out a test on one or two people is just like evaluating a hitter on one at-bat. You need the power of Big Data (i.e., more objective data) to bolster your decision making.
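If you want to see the small-sample problem in numbers, here is a quick Monte Carlo sketch. It uses the batting averages from the analogy (.315 and .230 are illustrative figures, not real players) and asks: if you judge the two hitters by comparing hit totals, how often does the genuinely better hitter come out ahead?

```python
import random

random.seed(42)

def simulate_hits(avg, at_bats):
    """Count hits for a hitter with true batting average `avg` over `at_bats` at-bats."""
    return sum(1 for _ in range(at_bats) if random.random() < avg)

def better_hitter_wins(avg_a, avg_b, at_bats, trials=5_000):
    """Fraction of trials in which hitter A (the better hitter) out-hits hitter B."""
    wins = 0
    for _ in range(trials):
        if simulate_hits(avg_a, at_bats) > simulate_hits(avg_b, at_bats):
            wins += 1
    return wins / trials

# One at-bat each: the .315 hitter only "wins" when he hits AND the .230 hitter misses,
# so a single observation picks the better hitter well under half the time.
print(better_hitter_wins(0.315, 0.230, at_bats=1))

# A full season (~550 at-bats each): the larger sample almost always reveals him.
print(better_hitter_wins(0.315, 0.230, at_bats=550))
```

With one at-bat the better hitter strictly out-hits the weaker one only about a quarter of the time; over a season's worth of at-bats he comes out ahead nearly every time. The same logic is why a test validated on hundreds of people beats a gut check on one or two.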
“Perfection is not attainable,
but if we chase perfection
we can catch excellence.”
– Vince Lombardi
So, if a test, an admittedly imperfect one, is able to increase your odds of making a good decision and avoiding a bad one by, say, 30%, is it worth using? If you’re hiring a salesperson, an executive, or a factory worker, or almost anyone for that matter, the answer is almost always yes. It won’t always work and it will sometimes be wrong. But the goal is to improve accuracy and reduce risk, not to be perfect.
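The “improve your odds by 30%” idea can be sketched with a simple simulation. The numbers here are assumptions for illustration: a test whose scores correlate 0.5 with true job performance (a moderate validity, not a claim about any particular instrument), and “good hire” defined as an above-median performer. Hiring at random gets you a good hire about half the time; hiring only the top 20% of test scorers does considerably better, even though the test is far from perfect.

```python
import math
import random

random.seed(7)

VALIDITY = 0.5   # assumed correlation between test score and performance (hypothetical)
N = 100_000      # simulated applicant pool

def simulate_candidate():
    """Return (test_score, true_performance), correlated at VALIDITY."""
    perf = random.gauss(0, 1)
    noise = random.gauss(0, 1)
    score = VALIDITY * perf + math.sqrt(1 - VALIDITY**2) * noise
    return score, perf

candidates = [simulate_candidate() for _ in range(N)]

# "Good hire" = above-median performer. With no test, you're at roughly 50%.
base_rate = sum(1 for _, p in candidates if p > 0) / N

# With the test: hire only the top 20% of scorers.
cutoff = sorted(s for s, _ in candidates)[int(0.8 * N)]
hired = [(s, p) for s, p in candidates if s >= cutoff]
hit_rate = sum(1 for _, p in hired if p > 0) / len(hired)

print(f"good-hire rate without the test: {base_rate:.2f}")
print(f"good-hire rate, top 20% scorers: {hit_rate:.2f}")
```

Under these assumptions the good-hire rate climbs from about one in two to roughly three in four. The test is still wrong for plenty of individuals, exactly as the paragraph above says, but across many hires the odds shift meaningfully in your favor.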