
ATP ePoster Session: Applying Learning Sciences to Assessment Design

September 16, 2020

How compatible are learning sciences and psychometrics? Both fields make inferences about learner cognition based on manifested behaviors. However, they diverge in how they conceptualize the design and use of assessments. What are the opportunities and challenges in bridging these two paradigms, and what does this mean for the future of assessment?

We are presenting this topic at the 2020 ATP Conference via an ePoster session. Psychometricians seek reliable ways of measuring knowledge in well-defined domains. Accordingly, assessments should be decontextualized to minimize confounding variables and maximize measurement reliability. This gives candidates equitable opportunities to demonstrate their knowledge and skills, regardless of context or test form.
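To make the reliability focus concrete, here is a minimal, purely illustrative Python sketch of Cronbach's alpha, a standard internal-consistency estimate used in psychometrics. The response matrix is made up, and this code is not part of the poster itself.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Estimate internal-consistency reliability (Cronbach's alpha).

    scores: 2-D array, rows = candidates, columns = items.
    """
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 5 candidates x 4 dichotomously scored items
responses = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

Higher alpha indicates that items covary strongly with the total score, which is one reason decontextualized, tightly focused items are attractive from a measurement standpoint.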

In the learning sciences, a central challenge has been designing assessments in line with evolving theories of learning. For example, constructivist theory posits learning as an active, context-dependent process of constructing knowledge based on prior understandings. Learning scientists are therefore concerned with the degree to which candidates transfer knowledge to new situations by leveraging prior knowledge, context, tools, and other people in their learning environments. In this view, assessments should measure deep conceptual understanding and complex constructs that are predictive of success in the workplace, like capacity for scientific inquiry and collaboration. Ideally, assessments would be part of the learning process, and the evidence would be knowledge traces extracted from video, audio, and log data.


The differences between the learning sciences' and psychometrics' stances toward assessment can be boiled down to four issues: what to measure, what counts as evidence, what inferences we can make based on that evidence, and what to do with it. Underlying these issues is a fundamental divergence in how to view validity: learning scientists value complexity and ecological validity, whereas psychometricians value reliability and construct relevance.

Advances in computing power and artificial intelligence have the potential to bridge the two paradigms. Moreover, the digital world now affords a medium where we can observe many behaviors in realistic settings. For example, researchers are using log files to measure more complex constructs such as collaborative problem solving and persistence. Such constructs are indicative of success in the workplace but have traditionally been difficult to measure. More data can help address issues of reliability and create a more nuanced understanding of learner knowledge.
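As a sketch of how a log file might yield a behavioral signal like persistence, consider the hypothetical Python example below. The event schema (learner, task, action) and the "attempts before succeeding or quitting" proxy are illustrative assumptions, not a description of any particular study's pipeline.

```python
from collections import defaultdict

# Hypothetical event stream from a digital task environment.
events = [
    {"learner": "A", "task": "t1", "action": "attempt"},
    {"learner": "A", "task": "t1", "action": "attempt"},
    {"learner": "A", "task": "t1", "action": "success"},
    {"learner": "B", "task": "t1", "action": "attempt"},
    {"learner": "B", "task": "t1", "action": "quit"},
]

attempts = defaultdict(int)  # (learner, task) -> attempt count
outcomes = {}                # (learner, task) -> final action

for event in events:
    key = (event["learner"], event["task"])
    if event["action"] == "attempt":
        attempts[key] += 1
    else:
        outcomes[key] = event["action"]

# A crude persistence indicator: how many attempts before the outcome.
for (learner, task), n in attempts.items():
    outcome = outcomes.get((learner, task), "incomplete")
    print(f"{learner}/{task}: {n} attempt(s), outcome={outcome}")
```

Real analyses would of course aggregate such features across many tasks and validate them against external criteria, but even this toy example shows how process data can carry evidence that a final score alone cannot.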

Rethinking traditional notions of validity for more innovative assessments may seem pie in the sky and not worth the effort. However, we need to constantly adapt assessment to reflect contemporary practices and theories of learning. The landscape of learning has already changed dramatically over the last few years, and it’s difficult to say which game-changing technologies will reshape the way we live, work, and learn five years from now. As a psychometrician and learning scientist, what I can say is that it’s an exciting time to be part of such a dynamic field and to see the changes in store for assessments in the near future.

Natalie Jorion, PhD, is a psychometrician at PSI Services interested in innovative item types, text mining, and learning analytics. She has developed Shiny apps for DIF panels and standard settings. She received her PhD in learning sciences from the University of Illinois at Chicago with a specialization in measurement and assessment, and a master’s degree from Northwestern University. In 2016, she was awarded an NSF data science fellowship to create visualizations of learner interactions in informal multi-modal STEM games.