In an ever-advancing, technology-savvy business world, one thing is certain: the role of mobile devices is growing. Ericsson recently reported that 91% of the world’s population has a mobile subscription – no fewer than 4.5 billion people have mobile device plans. That number has grown by more than 100 million in the last six months, and the growth shows no sign of slowing.
Whether we like it or not, the reality is that if you use un-proctored testing, chances are candidates are already taking your assessments on mobile devices (or, at least, trying to). Are you ready for this? Can your assessment technology handle the influx of mobile devices? Even if your assessments “run” on tablets, is the measurement equivalent? There are many important considerations to think through as our candidate pools move increasingly toward testing on mobile devices. To prepare for this (and to protect your business), here are three things your human resources team needs to consider to determine whether your pre-employment assessments are “mobile-ready.”
Assessment results should correlate with the individual, not the device. This is a reliability issue. The results of your assessment should reflect the person taking it, not the device on which it is taken. For example, tablets generally respond more slowly than PCs (it takes a split second longer to tap a radio button on a tablet than to click one on a PC). If an assessment includes a measure of processing speed that records data to the millisecond, the results will correlate with the device – in other words, performance on the test is influenced by the device used to take it rather than by the individual candidate’s behavior.
The ultimate goal is to find methods of measurement that are device agnostic – that is, they capture data about candidate attributes reliably across all devices.
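One simple way to check for device agnosticism is to look for a correlation between scores and the device used. The sketch below is purely illustrative – the scores, group sizes, and the point-biserial approach are assumptions, not PSI’s actual method – but it shows the basic idea: for a device-agnostic measure, the correlation between a binary device indicator and the scores should be near zero.

```python
from statistics import mean, pstdev

# Hypothetical processing-speed scores (items per minute) by device.
# These numbers are made up to illustrate a device-driven gap.
pc_scores = [48, 52, 50, 47, 53, 51]
tablet_scores = [41, 44, 43, 40, 45, 42]

scores = pc_scores + tablet_scores
device = [0] * len(pc_scores) + [1] * len(tablet_scores)  # 0 = PC, 1 = tablet

def point_biserial(scores, groups):
    """Correlation between a continuous score and a binary group label."""
    g0 = [s for s, g in zip(scores, groups) if g == 0]
    g1 = [s for s, g in zip(scores, groups) if g == 1]
    n0, n1, n = len(g0), len(g1), len(scores)
    s = pstdev(scores)  # population standard deviation of all scores
    return (mean(g1) - mean(g0)) / s * ((n0 * n1) / (n * n)) ** 0.5

r = point_biserial(scores, device)
print(f"score-device correlation: {r:.2f}")
```

With these made-up numbers the correlation is strongly negative, signaling that the device, not the candidate, is driving the scores; a measure close to device agnostic would yield a value near zero.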
Your assessment should have equivalent “construct validity” across devices. In other words, are you actually measuring the exact same competency when candidates take the assessment on different devices? For instance, consider again an assessment that measures processing speed. Taken on a PC, it may require candidates to use a mouse to complete tasks. The same assessment administered on a tablet may not measure processing speed so much as the candidate’s ability to use a tablet effectively – to figure out how to “make it work.” That candidate’s score may be high or low, but it reflects his or her familiarity with the tablet more than his or her processing speed.
Make sure the assessment is fair across devices. Recruiting for diversity is a best practice, and research suggests that conducting assessments on mobile devices can help organizations process a more diverse candidate pool: candidates in some protected classes appear more likely than others to complete assessments on mobile devices. At face value, this is a good thing – it allows HR to reach more qualified, diverse candidates. But what if assessments delivered on one device fail a disproportionate number of people from a protected class relative to the same assessments delivered on other devices? This is an important concern to think through as your candidate pool becomes “more mobile.” If you recruit a more diverse candidate pool by supporting a broader range of devices, but then fail protected-class members at a greater rate because of the device, the outcome could be disastrous for both the organization and the candidates.
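A common way to monitor this is to apply the four-fifths rule separately for each device: compare each group’s pass rate with the highest group’s pass rate, and flag any ratio below 80%. The counts and group names below are hypothetical placeholders, not real data, and this is only a sketch of the check, not legal guidance.

```python
# Hypothetical (passed, tested) counts by device and group.
results = {
    "PC":     {"group_a": (90, 100), "group_b": (85, 100)},
    "tablet": {"group_a": (88, 100), "group_b": (60, 100)},
}

def adverse_impact(device_results):
    """Return each group's pass rate as a ratio of the highest group's rate."""
    rates = {g: passed / tested for g, (passed, tested) in device_results.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

for device, res in results.items():
    for group, ratio in adverse_impact(res).items():
        flag = " (below 4/5 threshold)" if ratio < 0.8 else ""
        print(f"{device} {group}: impact ratio {ratio:.2f}{flag}")
```

In this illustration the same assessment passes the four-fifths check on PCs but fails it on tablets – exactly the device-driven disparity the paragraph above warns about.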
While research on taking assessments via mobile devices is still in its early stages, these are possibilities that need to be explored. Fortunately, PSI has this data, and we are actively pursuing these and other related research questions about mobile devices. There is a lot to learn, and it is an exciting area of research. Stay tuned to this blog, as we will certainly continue to post about mobile device testing in the future.