An article was recently published in the Journal of Applied Social Psychology. The article, "Social Networking Websites, Personality Ratings, and the Organizational Context: More Than Meets the Eye?" (Kluemper, Rosen, & Mossholder, 2012), is gaining some interest in the popular press and in other blogs. The researchers took an interesting look at whether raters could accurately assess personality based on social media (e.g., Facebook pages). The authors reported relationships between these personality attributions from social media sources, self-report personality measures, cognitive ability measures, and supervisor ratings of performance. It is a well-done piece with strong methods – all the things I look for in an academic paper.
As a selection researcher and practitioner, I spend my energy every day working toward one primary goal: predicting job performance. Consequently, when a new method of doing so is suggested, I am open and listening to see what I can learn. Based on my reading of this article, I'm not sure I am going to start recommending this practice to my clients just yet. In particular, there are three considerations that would make me recommend a Select International assessment over the social networking website personality rating approach:
1. Consistency
- Social Media – Not everybody posts the same amount of content, or the same degree of detail, on their social networking sites. For example, I have a LinkedIn profile, but I rarely use it. Peers of mine use their LinkedIn pages every day and have far more content about themselves posted online than I do. If someone were to make attributions about my personality based on my social network information, they would have far less data than they would have when evaluating my peers. How would a recruiter handle cases where adequate data is not available?
- Assessments – Every candidate has the same experience. They are asked the same questions. Items are presented in the same way. Everything is the same. When two people have different scores we can feel confident that those people were measured against the same yardstick.
Assessments are the clear winner here.
2. Reliability
- Social Media – This new approach relies on imperfect human judgment. Research has established that raters do not typically agree about the people they are rating. Further, research on application reviews (which collect information in a much more consistent manner than social networking sites do) shows that recruiters tend to disagree considerably about who is a good candidate. The likelihood of recruiters agreeing about personality attributions made this way is very small.
- Assessments – No such problem exists. Job-related inferences are drawn from data using empirically derived algorithms, so there are no rater biases. Well-developed assessments consistently measure what they were intended to measure. The same cannot be said for many situations involving ratings from people.
Again, assessments are the clear winner.
3. Predictive Validity
- Social Media – The jury is still out on this topic. What we do know is that one study found a moderate relationship between these ratings and supervisory performance ratings. We also know that these relationships were based on a small sample and that some of the other findings were not consistent with past research, which could call into question the stability of these results.
- Assessments – Bottom line: a good assessment outpredicts the results observed by these researchers. In fact, across hundreds of validation studies, our research at Select International shows considerably stronger relationships with supervisor ratings than the social media ratings showed in this study. The social media relationship is interesting, but it simply does not hold a candle to the relationships with performance that we see on a daily basis with good assessments.
When it comes to prediction, assessments take home the prize again.
In the end, when I think about the three most important considerations in predicting future behavior (consistency, reliability, and validity), assessments win on all three. It is an interesting study, and I would love to see more research; but I recommend we keep using our assessments – it isn't time to jump ship just yet from your battle-tested assessment programs.
Planning to do large volume hiring soon? Check out our Start-Ups and Expansions Survival Guide!