
A Deep Dive into Cognitive Levels and a Case for Simplification

June 25, 2020

If this is your first time hearing about the concept of human cognition as it relates to certification examination items, you’re not alone. It’s not a particularly high-priority subject among psychometricians because, after all, we cannot calculate an index for it.

Nevertheless, it is not a subject we can ignore either. The concept of an item’s cognitive level — the thought process required to arrive at the correct answer — is an integral piece of developing a high-quality certification exam.



The notion of item cognitive complexity is derived from a 1956 publication by educational psychologist Benjamin Bloom, The Taxonomy of Educational Objectives: The Classification of Educational Goals. Dr. Bloom was discouraged by the state of educational tests, in particular the preponderance of items testing lower-level thinking skills. His intent was to persuade writers of educational objectives to write items that engage different levels of cognition, especially higher-order thinking skills. His publication outlined a rubric of six levels of increasing cognitive complexity: knowledge, comprehension, application, analysis, synthesis, and evaluation. In other words, the main intent of the rubric was to ensure that not all items tied to educational objectives are written to elicit recall of facts; rather, that they require some additional amount of thought in order to respond.

Notice that in the paragraph above, the word “educational” is heavily emphasized. Suffice it to say, the educational and certification landscapes differ in scope and outcome. Whereas in the educational landscape a psychometrician is interested in classifying students into multiple tiers of proficiency (e.g., basic, intermediate, advanced), psychometricians in the certification landscape are interested in whether a candidate fits into one of two classification levels: competent or not. So how does a rubric developed with educational objectives in mind translate to the world of certification exams?

The point of cognitive levels in credentialing is to support exam content validity. An exam that reflects the complexity of real job situations provides one piece of content validity evidence. That said, not all items on an exam need to be complex, because many important tasks in any job are not complex. So, as professionals working with credentialing exams, how prescriptive do we need to be about cognitive levels? Not all certification programs are built the same, and thus there is wide variation in how cognitive levels are addressed and adopted within certification programs. Some programs use Bloom’s taxonomy verbatim, others rework (and often condense) the levels into different groupings inspired by Bloom (e.g., recall, application, and analysis), and some don’t address cognitive levels at all.


All that said, in my experience, almost every item falls into one of two categories: recall and not recall. It doesn’t matter to me how “not recall” is termed (e.g., application, analysis, evaluation, synthesis). Whatever it is called, we can be comfortable knowing that the item is measuring something other than information memorized from a textbook. “Not recall” items go beyond whether an examinee knows something; instead, these items venture into the territory of whether an examinee is able to do something.

It is my experience that when exam committees use a three-tiered cognitive classification scheme, they often face some discomfort when making a final determination of an item’s cognitive complexity. When asked to verify the cognitive complexity of a “not recall” exam item in a three-tier rubric, I often hear, “It’s on the fence. Where do we need it more?” Theoretically, this is because there is considerable overlap among the levels, particularly the two higher ones. In these moments of unease, exam committees force items into a level for superficial reasons. This situation highlights the subjective nature of the rubric and works against our goal of objective measurement.

Whether or not cognitive levels are part of your exam blueprint, it is the responsibility of the psychometrician to educate item writers and reviewers about different levels of cognition. After all, this training prevents us from developing an exam that consists exclusively of recall-type items — just as Dr. Bloom intended. After training, however, the level of complexity on an exam should rest in the hands of the exam committee, with guidance from the psychometrician. Requiring specific cognitive levels for exam items in the blueprint should not place an undue burden on developing the exam, and exam committees should not be forced to classify items based on need. Alleviating these restrictions with a simplified two-tiered classification (or no cognitive classification system at all) may be the best approach for certification exams to ensure that the measurement is sound and job-related.


Chris Traynor is a Senior Psychometrician at PSI. Chris came to PSI via the acquisition of Applied Measurement Professionals, and he is the longest-tenured psychometric staff person in the PSI certification department. In his current role, he applies his technical expertise in psychometrics, statistics, and credentialing to the design, development, and evaluation of certification assessments. He also manages a portfolio of client examination programs, providing psychometric consultation and guidance to clients. He received his B.A. in Psychology from Wayne State University and his M.S. in Industrial/Organizational Psychology from Missouri State University.