
Threats to Validity and How They Can Impact Your Assessment Process

October 27, 2020

When one hears the word "valid," several meanings could be intended. In everyday conversation, "valid" can describe an argument that is sound (e.g., "She has a valid point."). It can also be a legal term (e.g., "Your driver's license is valid for 30 days."). In the context of employee selection, there are typically three types of validity we refer to:

  1. Content
  2. Construct
  3. Criterion-Related

Understanding these selection-focused forms of validity, as well as the potential threats to the validity of an assessment, is critical to a sound hiring and assessment process. A great way to remember these threats is to use the acronym, Mr. Smith:

  1. Maturation
  2. Regression to the mean
  3. Selection of subjects
  4. Mortality
  5. Instrumentation
  6. Testing
  7. History


Maturation.

The passage of time allows candidates to change. For simulation-based assessments, this could mean a loss of coordination as an individual gets older. Conversely, as candidates age, they tend to gain more advanced knowledge in specific areas, meaning their scores on a knowledge assessment could be slightly affected.

Regression to the mean.

When candidates take an assessment multiple times, their scores tend to regress, or fall back, toward their own personal average. In other words, with repeated administrations of an assessment (not too close in time), the average of the observed scores gets closer to the candidate's true score. For assessments that are already built, you simply want to ensure the assessment has high reliability; then this will not be a concern.
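As a rough illustration of this idea, here is a small simulation (all numbers are hypothetical, not from any real assessment) showing that any single administration can stray from a candidate's true score, while the average of repeated administrations settles toward it:

```python
import random

random.seed(42)

TRUE_SCORE = 75   # hypothetical candidate's true ability
NOISE_SD = 10     # measurement error on any single administration

def administer():
    """One test administration: true score plus random measurement error."""
    return TRUE_SCORE + random.gauss(0, NOISE_SD)

# The mean of repeated administrations converges toward the true score,
# even though any single observed score may be far from it.
for n in (1, 5, 50):
    scores = [administer() for _ in range(n)]
    avg = sum(scores) / n
    print(f"{n:2d} administration(s): mean observed score = {avg:.1f}")
```

A highly reliable assessment is one where the measurement error (the `NOISE_SD` above) is small, so a single administration already sits close to the true score.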

Selection of subjects.

Bias in the selection of candidates when building an assessment can affect the results. For example, suppose you are trying to determine whether a test is valid and produces a good range of scores for a machine operator role, but you give the assessment only to high-performing machine operators. The test will appear not to have a wide range of scores because the subjects already know the answers. When researching the appropriate assessments for your organization, you can combat this bias by asking the vendor whether they have conducted meta-analyses on their assessments. Essentially, this means multiple studies have been conducted, which can decrease the impact of any bias in the selection of subjects.
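This shrinking of score range (often called range restriction) can be sketched with a quick simulation using made-up numbers, not data from any actual validation study:

```python
import random
import statistics

random.seed(0)

# Hypothetical assessment scores for a full applicant pool (0-100 scale)
pool = [random.gauss(70, 12) for _ in range(1000)]

# "Selection of subjects" bias: validating only on current high performers
high_performers = [s for s in pool if s >= 80]

full_sd = statistics.stdev(pool)
restricted_sd = statistics.stdev(high_performers)

print(f"Full pool score spread (SD):       {full_sd:.1f}")
print(f"High performers score spread (SD): {restricted_sd:.1f}")
# The restricted group shows much less score spread, which can make a
# perfectly good assessment look like it fails to differentiate candidates.
```

The narrower the group you validate on, the harder it is to see the assessment's true range, which is why meta-analytic evidence across many samples is more trustworthy.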


Mortality.

Participants drop out of the process, resulting in differences between groups that may be unrelated to their assessment scores. This can impact selection-related processes such as validation studies. For example, if you are trying to determine whether assessment scores predict performance, but all the top performers drop out of the study, it could look as if your assessment is not valid.


Instrumentation.

Changes in measurement can result in score differences that are due to the measurement procedures rather than true changes in individuals' performance ability. This is why it is so important to keep your assessment process consistent. The goal is to compare individuals in the hiring process, and if you use different measures for different individuals, you cannot make a fair comparison.


Testing.

With repeated administrations of a test, results can change due to things like practice effects or knowledge gains. This is one reason we always recommend that candidates not retake a selection assessment soon after taking it for the first time. They may do better simply because they have had practice, which gives them an unfair advantage over other candidates.


History.

Events occurring outside of the study (e.g., a major societal event) can impact its results. In a selection context, an example could be a fire alarm going off in the assessment facility, negatively impacting a candidate's score.

It is important to keep the "Mr. Smith" threats to validity in mind to help you ask the right questions. However, if you choose the right selection vendor, your concerns will likely be alleviated quickly, as they know Mr. Smith quite well! 


Christa Bupp, PhD is a Consultant based in the Pittsburgh office of PSI. She provides client support across many different industries with a primary focus on manufacturing. Christa is passionate about making clients happy through the implementation of various selection tools. She has expertise in the areas of research and development, assessment design, data analysis, and validation.