As a consultant with a PhD in Industrial/Organizational Psychology, I am no stranger to personality assessments or to the bad rap they receive in popular culture. Misunderstandings and gross oversimplifications abound, and even the most respected news sources often get the details badly wrong. A recent New York Times article, “Personality Tests Are the Astrology of the Office,” is a perfect example. So, I want to highlight exactly what the article got wrong — and why these persistent pop-culture errors are so problematic.
Here are the top three false statements that I want to correct from the article.
The Myers-Briggs test is repeatedly presented as the gold standard for selection assessments. It was a huge red flag when this article described the Myers-Briggs as a hallmark of selection testing. No trained psychometrician or I/O psychologist has ever recommended this test for selection; it should only ever be used for development. In fact, even its original creators did not recommend it for selection. The Myers-Briggs has been criticized for its reliability and its origins, and it’s fair to scrutinize how a test was created, but it’s unfair to attack it as a poor selection tool when it has never claimed to be one. Consider this strike one: an article about personality testing in the workplace that does not differentiate between testing for selection and testing for development.
Personality testing is just “H.R. for the Buzzfeed generation.” The article enjoyed highlighting the sillier-sounding outcomes of personality testing, like categorizing people as color types (“She’s being such a yellow right now…”). Disappointingly, it never mentioned the flip side: the many well-researched, respectable tests that have endured for generations because of their strong predictive power. The existence of bad tests does not make all personality tests akin to astrology. It is unbalanced to leave out entirely the rigorous statistical methodology behind the tests actually used for selection. Yes, some scoring terminology is simplified to appeal to a lay audience, but that doesn’t make the science behind the conclusions any less sound. Implying that personality testing is simply a new fad for millennials is strike two.
Results from personality testing are a “black box” with no evidence base. This is perhaps the most frustrating takeaway from the article. The author seems to conclude that because they themselves do not understand how personality tests work, there must not be a strong evidence base. Strike three, you’re out. A plethora of academic journals is dedicated to the psychometrics of personality testing, and these tests have consistently been shown to be significant predictors of future workplace performance without the adverse impact associated with cognitive ability tests. It’s easy to brush off the science of something you don’t understand, but had the author done more research, they would have learned about validation studies, which demonstrate real, measurable relationships between test results and outcomes such as performance, employee satisfaction, and safety incidents.
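At its core, a criterion-related validation study is not mysterious: it checks how strongly assessment scores correlate with a later outcome, such as supervisor performance ratings. Here is a minimal sketch in Python of computing that validity coefficient as a Pearson correlation. Every number and variable name below is invented for illustration; real validation studies use far larger samples and more sophisticated corrections.

```python
# Toy illustration of a criterion-related validation study:
# correlate assessment scores with a later performance measure.
# All data here are hypothetical, invented purely for illustration.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical scores for eight candidates on a personality scale,
# paired with supervisor performance ratings collected a year later.
test_scores = [52, 61, 70, 45, 66, 58, 73, 49]
performance = [3.1, 3.6, 4.2, 2.8, 3.9, 3.4, 4.4, 3.0]

r = pearson_r(test_scores, performance)
print(f"validity coefficient r = {r:.2f}")
```

A coefficient near zero would mean the test tells you nothing about later performance; the larger the correlation, the more useful the test is as a predictor. This is exactly the kind of measurable relationship that validation studies report, which is why “black box” is the wrong metaphor.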
The New York Times article made me realize that even though my profession seems straightforward to me, the concepts and topics that I work with are actually quite complicated and difficult to understand from an outsider’s perspective. These days, though, it is more important than ever to defend science in the face of skeptics or the uninformed.