We often receive questions about the science behind our employee assessments. How do we develop them? What research supports their validity? I recently had a Q&A with a client to explain the process we use to validate each of our assessments:
I work in human resources. We just received a validation report from our employee assessment vendor presenting a validity coefficient for a new leadership assessment for our supervisors. What does this number represent? How do we know if the coefficient is good or bad?
To understand the validation report and what it means for your assessment, let’s look at validity coefficients and correlation coefficients, and then walk through how validation studies work. We’ll start with validity coefficients.
Validity coefficients can be very useful pieces of information. A validity coefficient is a statistical correlation showing the relationship between two variables. In the case of an assessment, the two variables are usually (1) assessment score and (2) an important organizational outcome, like job performance.
A correlation coefficient has a direction (positive or negative) and a magnitude (0 to 1.0). When two variables move in the same direction (as one increases, the other increases), they have a positive correlation. A common example of a positive correlation is outside temperature and ice cream sales: as it gets warmer outside, stores tend to sell more ice cream. If one variable increases while the other decreases, the variables have a negative correlation. An example of this is outside temperature and heavy coat sales: as it gets warmer outside, stores tend to sell fewer coats. The magnitude of the correlation indicates the strength of the relationship between the variables. A magnitude of zero means the two variables have no relationship, while a correlation of 1.0 means the two variables are perfectly related to one another. You can have a perfect positive correlation or a perfect negative correlation; both are equally strong in magnitude.
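The ice cream and coat examples above can be sketched in a few lines of Python. The numbers below are made up purely for illustration, and the `pearson_r` helper is just the standard Pearson correlation formula:

```python
from statistics import mean

def pearson_r(xs, ys):
    # Pearson correlation: covariance of x and y divided by the
    # product of their standard deviations.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up numbers for illustration only.
temps     = [55, 60, 68, 75, 82, 90]        # outside temperature (F)
ice_cream = [120, 135, 160, 180, 210, 240]  # cones sold
coats     = [40, 35, 28, 20, 12, 5]         # heavy coats sold

print(pearson_r(temps, ice_cream))  # strong positive (close to +1)
print(pearson_r(temps, coats))      # strong negative (close to -1)
```

Real assessment data is far noisier than these tidy toy numbers, which is why the coefficients discussed below top out well short of 1.0.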
So, how do you know if the coefficient you have is good or bad?
When it comes to psychological assessments and predicting human behavior, you will never see correlations close to 1.0. If you see a correlation around .30, that shows a moderately strong relationship between the assessment and the outcome variable. You can be pretty happy with that range. Correlations approaching .50 or .60 are very strong predictors of performance. If the validity coefficient in your report is in that range, you should be very confident that the assessment is helping to identify your best leaders.
At the risk of presenting too much technical information (don’t worry: if you love this stuff, we have plenty of studies that you can peruse, here), there are some other factors that can affect the confidence you can place in the validity coefficient you are seeing. A reliable and accurate validity coefficient should come from a well-designed validation study. Typically, larger samples (greater than 100 people) are needed to conduct a validation study, and the individuals in the sample should represent the group about which you are trying to draw conclusions. For example, if the validation report you are reviewing contains a sample of 100 or more front-line supervisors from your organization with both assessment scores and performance ratings, then you can feel comfortable that you have a large enough sample of individuals who represent the position of interest. There are other design issues that can affect validity coefficients (e.g., range restriction, unreliability in the measures), but I won’t go into detail on those here.
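To see why sample size matters, here is a small simulation sketch in plain Python. It assumes a made-up population in which assessment score and job performance truly correlate at .30 (the moderately strong range discussed above), then checks how much the observed coefficient bounces around from sample to sample at two sample sizes:

```python
import random
from statistics import mean, stdev

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(42)  # fixed seed so the sketch is repeatable

def observed_r_spread(n, true_r=0.30, trials=1000):
    # Repeatedly draw samples of size n from a population where assessment
    # score and performance correlate at true_r, and measure how much the
    # observed validity coefficient varies across samples.
    rs = []
    for _ in range(trials):
        xs = [rng.gauss(0, 1) for _ in range(n)]
        ys = [true_r * x + (1 - true_r ** 2) ** 0.5 * rng.gauss(0, 1)
              for x in xs]
        rs.append(pearson_r(xs, ys))
    return stdev(rs)

print(observed_r_spread(20))    # wide spread with a small sample
print(observed_r_spread(150))   # much tighter with 100+ people
```

With only 20 people, a true .30 relationship can easily show up as near zero (or as .50+) by chance alone; with 150 people the observed coefficient stays much closer to the truth, which is the intuition behind the 100-plus guideline.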
When you have adequate validity for an assessment with a specific target position within your organization, it means you have established a statistical relationship between the assessment and an important organizational outcome (e.g., job performance). That relationship provides strong evidence to support the assessment should it ever be challenged in court. All assessments should be backed by a job analysis, and a validation study is one extra step to support their use. Validation studies can also help organizations maximize the value of an assessment by providing data to inform decisions around scoring and cut scores.
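As one hypothetical illustration of how validation data can inform a cut score (all numbers below are invented), a simple expectancy-table question is: of the people who score at or above a candidate cut score, what share were rated successful on the job?

```python
# Hypothetical validation data: assessment scores paired with whether
# each person met the performance standard (1 = successful).
scores     = [12, 15, 18, 20, 22, 25, 27, 30, 33, 35]
successful = [ 0,  0,  1,  0,  1,  1,  0,  1,  1,  1]

def success_rate_at_or_above(cut):
    # Share of people scoring >= cut who were rated successful.
    hits = [ok for sc, ok in zip(scores, successful) if sc >= cut]
    return sum(hits) / len(hits) if hits else None

for cut in (15, 20, 25, 30):
    print(cut, success_rate_at_or_above(cut))
```

In this made-up data, raising the cut score increases the share of successful hires above the cut, at the cost of screening out more candidates; a real study weighs that trade-off with far larger samples and adverse-impact analyses.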
An important point to keep in mind about validity is that it exists only for that particular assessment and the position(s) for which the study was conducted. Just because the assessment predicts performance for your first-line leaders doesn’t mean you can start using the same assessment for another position, unless you can show that the two positions are similar in terms of job requirements and success factors. (This is known as transporting validity, and it’s a topic for another day.)
How do I conduct a validation study?
Validation analyses should be conducted by trained statisticians who are familiar with psychological measurement. Typical analysts are Industrial/Organizational psychologists or others with advanced training in the social sciences. If you have questions about the accuracy of the results in your report, ask who conducted the study.
I hope your validation report describes a well-designed study and shows a moderate to strong relationship between the assessment and your organizational outcomes. If so, you’re in good shape to continue using the assessment for that position. If not, you may want to ask about other assessment options or consider other assessment vendors who might be able to do a better job of prediction.