Test developers have a lot to worry about when building an effective assessment tool. They need to make sure the test measures the right capabilities or characteristics, the ones that determine success in the target job. They need to make sure the test measures those capabilities consistently. And ultimately, they want to be able to demonstrate that the test actually predicts success by showing that higher scores on the test are associated with better job performance.
These are issues related to the validity and reliability of a test, and from a psychometric perspective, they are the best metrics for determining how effectively tests are performing. However, when considering whether to begin using a particular test in their organizations, there are a number of other, less scientific considerations that are really important to many stakeholders. For these stakeholders, a test with good psychometric properties may just be the price of entry.
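To make the predictive-validity idea concrete: criterion-related validity is commonly estimated as the correlation between test scores and later job-performance ratings. The sketch below uses invented numbers purely for illustration; it is not drawn from any real validation study.

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    # Sample covariance divided by the product of sample standard deviations
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    return cov / (statistics.stdev(xs) * statistics.stdev(ys))

# Hypothetical data: pre-hire assessment scores and subsequent
# supervisor performance ratings for the same eight hires.
test_scores = [62, 71, 55, 80, 67, 74, 59, 88]
performance = [3.4, 2.9, 3.1, 4.0, 3.8, 3.2, 2.7, 4.1]

validity_estimate = pearson_r(test_scores, performance)
print(round(validity_estimate, 2))
```

A positive coefficient here would support the claim that higher test scores go with better performance; in practice, validation studies use far larger samples and correct for issues such as range restriction.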
One such consideration is how candidates or employees react to the test questions. Many organizational leaders are highly concerned with a candidate’s or employee’s experience while taking the assessment because they realize it can shape perceptions of the organization. This is why face validity – or whether a test appears to be measuring things related to the target job – can sometimes be a deal breaker.
From a psychometric perspective, the most predictive test questions are not always the ones that appear most relevant to the target job. Many personality tests, for example, contain items that may seem personal in nature and often describe behaviors that occur outside of the workplace. Nevertheless, these items may be measuring traits, such as dependability or a positive attitude, that are actually quite predictive of success in certain jobs. So the challenge for test developers, if they want the test to appeal to a broader audience, is to create questions that are both predictive of job success and apparently relevant to the job. This is not always easy to accomplish.
Another issue that affects the marketability of an assessment is test length. Organizations vary considerably in the maximum test length they will tolerate. Science-based organizations that pride themselves on hiring the smartest people, for example, will often be fine with longer tests because they understand the need for a very rigorous candidate evaluation process. Many other organizations, however, while understanding the value of assessments, may still be less comfortable using them. They may want to keep testing time to a minimum and ensure that any new test has minimal impact on their existing staffing process.
Ease of administration
This is yet another area that assessment stakeholders are finding more and more important. Not only are internet-delivered assessments ubiquitous today, but more organizations are using unproctored assessments as well. Innovations in test security, together with a generally higher comfort level with unproctored testing, are resulting in a significant increase in organizations adopting this format. Given the reduced cost and administrative burden of unproctored internet testing, not to mention instantly available test results, this trend is likely to continue.
Finally, alignment of test results to an organization’s competency model is a demand coming from more and more organizations. With so many companies wanting to build an end-to-end talent management process encompassing components such as pre-hire screening, onboarding, career and succession planning, and performance management, they are looking to use a single competency model to connect and align these various talent management practices.
For test developers, this means they need to be flexible in how they label the capabilities measured in their assessment tools. A competency mapping may need to take place when introducing a new test into a company with an existing competency model, and modifications to the reports may need to be made in order to ensure the results are framed and communicated in a way that is aligned to the company’s language and culture.
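As a sketch of what such a mapping might look like in a reporting pipeline, the minimal Python example below re-keys a vendor's trait scales under a client's competency labels. The scale names, competency labels, and function are all invented for illustration.

```python
# Hypothetical mapping from a test vendor's trait scales to a client
# company's competency model (all names invented for illustration).
SCALE_TO_COMPETENCY = {
    "Dependability": "Delivers Results",
    "Positive Attitude": "Builds Collaborative Relationships",
    "Learning Agility": "Adapts and Grows",
}

def relabel_results(scale_scores):
    """Re-key raw scale scores under the client's competency names.

    Scales with no mapping keep their original label so no
    information is silently dropped from the report.
    """
    return {
        SCALE_TO_COMPETENCY.get(scale, scale): score
        for scale, score in scale_scores.items()
    }

report = relabel_results({"Dependability": 82, "Positive Attitude": 67})
print(report)
```

In practice the mapping itself is the hard part: it typically comes from a content-alignment exercise with the client's HR team, and the report templates are then adjusted to use the client's language.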
In sum, reliability and predictive validity of assessments are not always the only factors that determine whether organizations will ultimately adopt them. Even when assessments have good psychometric properties, stakeholders will often look for other, less scientific factors that they believe will make or break the success of a new assessment program. Four such factors discussed above are face validity, test length, ease of administration, and alignment to a company’s competency model.