How compatible are learning sciences and psychometrics? Both fields make inferences about learner cognition based on manifested behaviors. However, they diverge in how they conceptualize the design and use of assessments. What are opportunities and challenges in bridging these two paradigms, and what does this mean for the future of assessment?
We are presenting this topic at the 2020 ATP Conference via an ePoster session.

Psychometricians seek reliable ways of measuring knowledge in well-defined domains. Accordingly, assessments should be decontextualized to minimize confounding variables and maximize measurement reliability. This gives candidates equitable opportunities to demonstrate their knowledge and skills, regardless of context or test form.
In learning sciences, a central challenge has been designing assessments in line with evolving theories of learning. For example, constructivist theory posits learning as an active, context-dependent process of constructing knowledge based on prior understandings. Learning scientists are therefore concerned about the degree to which candidates transfer knowledge to new situations by leveraging prior knowledge, context, tools, and other people in their learning environments. In this view, assessments should measure deep conceptual understanding and complex constructs that are predictive of success in the workplace, like capacity for scientific inquiry and collaboration. Ideally, assessments would be a part of the learning process, and the evidence would be knowledge traces extracted from video, audio, and log data.
The differences between the learning sciences' and psychometrics' stances toward assessment can be boiled down to four issues: what to measure, what counts as evidence, what inferences we can make based on this evidence, and what to do with this evidence. Underpinning these issues lies a fundamental divergence in how to view validity: learning scientists value complexity and ecological validity, whereas psychometricians value reliability and construct relevance.
Advances in computing power and artificial intelligence have the potential to bridge the two paradigms. Moreover, the digital world now affords a medium where we can observe many behaviors in a realistic setting. For example, researchers are using log files to measure more complex constructs such as collaborative problem solving and persistence. Such constructs are indicative of success in the workplace but have traditionally been difficult to measure. More data can help address issues of reliability and create a more nuanced understanding of learner knowledge.
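To make the log-file idea concrete, here is a minimal Python sketch of how a behavioral proxy for persistence might be computed from clickstream events. The event schema and the scoring rule (mean attempts per task) are hypothetical illustrations, not any specific research group's method; real measurement models for constructs like persistence are far more sophisticated.

```python
from dataclasses import dataclass

@dataclass
class LogEvent:
    """One hypothetical clickstream record from a digital learning environment."""
    learner_id: str
    task_id: str
    action: str       # e.g. "attempt", "hint", "success"
    timestamp: float  # seconds since session start

def persistence_score(events):
    """Crude persistence proxy: a learner's mean number of attempts per task.

    Repeatedly re-attempting tasks (rather than abandoning them) is treated
    here as weak evidence of persistence. This is a toy scoring rule for
    illustration only.
    """
    attempts = {}
    for e in events:
        if e.action == "attempt":
            key = (e.learner_id, e.task_id)
            attempts[key] = attempts.get(key, 0) + 1

    per_learner = {}
    for (learner_id, _task_id), n in attempts.items():
        per_learner.setdefault(learner_id, []).append(n)

    return {lid: sum(ns) / len(ns) for lid, ns in per_learner.items()}

# Example: learner "a" attempts task t1 twice and task t2 once.
events = [
    LogEvent("a", "t1", "attempt", 0.0),
    LogEvent("a", "t1", "attempt", 30.0),
    LogEvent("a", "t2", "attempt", 60.0),
    LogEvent("a", "t2", "success", 90.0),
]
print(persistence_score(events))  # {'a': 1.5}
```

Even a toy example like this highlights the validity questions above: the raw evidence is rich and ecologically grounded, but the inference from "many attempts" to "persistent learner" is exactly the kind of claim psychometricians would want to see backed by reliability and construct-relevance evidence.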
Rethinking traditional notions of validity for more innovative assessments may seem pie in the sky and not worth the effort. However, we need to be constantly adapting assessments to reflect contemporary practices and theories of learning. The landscape of learning has already changed dramatically over the last few years, and it's difficult to say what game-changing technologies will emerge in the next five years, or how they will change the way we live, work, and learn. As a psychometrician and learning scientist, what I can say is that it's an exciting time to be part of such a dynamic field and to see the changes in store for assessments in the near future.