As an assessment consultant, I could go on and on about the value of using rigorous assessment tools for selecting better talent into an organization, as well as for developing employees as part of a talent management strategy. There are plenty of data to suggest that well-designed assessment tools deliver a competitive edge and provide companies a very significant return on investment.
However, what often gets overlooked when implementing assessment tools is making hiring managers aware of some of the common situations in which they can be used improperly. Below are three common errors made when using assessment tools that highlight such situations, each followed by a discussion with specific examples.
- Judging the effectiveness of a test with bad data
- Using a test for selection that was not validated
- Using a test to make a decision for you, rather than owning the decision yourself
Judging the effectiveness of a test with bad data
Let’s take the first error. A very common example of judging a test based on poor data occurs when a handful of employees who are deemed top performers are asked to take an assessment as a litmus test of the tool. The expectation is that they will do very well on the test, and if they do not, it is assumed the test is not a good one. When some, or even all, of them do poorly, the decision is made either not to roll the test out more broadly, or to discontinue using it if it was already implemented.
The most salient problem with this scenario is that the effectiveness of a test can only be accurately evaluated with a validation study, which requires a much larger and more representative sample. Typically, a sample of 100 or more candidates or employees is preferred for a selection test used to screen job candidates. If the sample is of current employees, it is important to include individuals at all levels of performance and avoid hand-picking only the strong performers; a more representative sample will produce more accurate results from the analysis.
In addition, collecting objective performance data such as manager ratings or performance reviews will be needed to conduct a proper validation study so that the assessment scores can be correlated with the performance data. Informal stakeholder perceptions do not provide good criteria on which to evaluate a test, much less allow for an analysis to indicate with certainty how well it is working.
Using a test for selection that was not validated
The second error has to do with the fact that in a selection setting, there is much more at stake. Companies are at much greater legal risk when an assessment is used to inform a hiring decision than when one is used to inform an employee’s development plan. This is because candidates who had to take a test and are subsequently not hired could legally challenge the test.
As a result, a test used for selection needs to be more rigorous, and a job analysis must be conducted to gather the needed documentation indicating that the test is job-related, or measures the most critical capabilities needed for success on the job. A follow-up validation study, as described above, is also highly recommended for any test used in a selection setting once enough data become available.
Too often, assessments are implemented without any such documentation, often because the test has face validity, or simply appears to measure the right skills. Face validity is certainly not a bad thing, and in fact is often very effective in getting stakeholders, including job candidates, to view the testing process favorably. However, by itself it is not enough to justify using a test for selection.
Using a test to make a decision for you
The third error has to do with understanding the limits of the information that assessment tools provide: assessment results are data points that help inform decision making, not “the answer” that makes the decision for you.
A typical example of this error is when a hiring manager decides they want to use an assessment tool (say, a 360 survey) to evaluate one of their direct reports. When questioned about the goals of assessing this employee, the manager indicates he actually wants to exit the employee from the organization and use the assessment results as justification for doing so.
Besides the fact that this would be an inappropriate use of a 360 survey, the larger concern is that the manager has not been doing his job – managing the performance of this employee. Chances are this employee has not been put on a performance improvement plan nor been given any coaching or feedback on the areas in need of improvement. The bottom line here is that assessments can’t take the place of performance management.
Even in the case of using an assessment to screen external job candidates, the hiring decision should always be owned by the hiring manager. Candidates who pass a screening assessment should be evaluated further by the staffing team, perhaps with a structured interview that could include probes informed by the assessment results.
The idea here is that candidates who pass a screening assessment can be deemed basically qualified for the position, but the staffing team, and the hiring manager in particular, should integrate the assessment results with the other data points gathered throughout the staffing process, both before and after the test, to make the most informed hiring decision possible. Again, it is people who make these decisions, not the assessment tools.
When leaders in organizations become interested in implementing assessment tools, it is often because they already have some idea of the value these tools can provide to the business. However, being aware of the three common errors described above can not only help them avoid these kinds of situations, but also get the most return on investment from any assessment tool.