
From Education to Certification Assessment: Reflections from Practice

October 24, 2019

Most psychometricians specialize either in educational assessment or in certification and licensure assessment. As a new psychometrician in PSI’s Certification division, I am one of the few who has crossed over from one into the other. After more than six years in K-12 educational assessment and fewer than 12 months in certification assessment, I can speak with cautious confidence on the areas of overlap and the distinctive qualities of each. Rather than touch on every micro-level characteristic, I’ll speak to one of my first impressions upon my foray into certification testing: how each field could stand to benefit from more frequent discussions with those on the other side.


Both educational and certification testing are governed by standards that define best practices for assessment development and assessment use. For example, practitioners in both fields use the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 2014) as the guide for best practice, as those standards apply to each. Separate, more specific guidelines also exist for each group: state departments of education must adhere to federal peer review guidelines, whereas certification programs that seek accreditation must adhere to the Standards for the Accreditation of Certification Programs set forth by the National Commission for Certifying Agencies (NCCA, 2014), the International Standard ISO/IEC 17024 (2012), or, for nursing certifications, the accreditation standards defined by the Accreditation Board for Specialty Nursing Certifications (ABSNC, 2013).

One of the main ways that educational and certification testing diverge is in the intended interpretation and use of scores. Certification testing operates under a pass/fail model: if the interpretation of scores holds true, then those who pass have earned the credential, and this decision can be higher or lower stakes depending on the practice area. In education, scores are often used as an indicator of performance level achievement or as a determination for continuation to the next grade, the latter of which is certainly higher stakes. Performance levels are also sometimes used to place students in courses or even for teacher evaluations, despite neither of these being intended (or appropriate) uses of scores. In this way, the intended use of assessment scores in certification testing can be clearer, though both fields would do well to systematically ensure that test scores are used only as intended.


Which brings me to my next observation: certification assessment appears more nimble and adaptable compared with the typically rigid nature of K-12 educational testing. Although state departments of education are bound by federal guidelines, educational assessment could benefit from some of the flexibility embraced by certification programs, especially for unique assessments or assessments for students with significant cognitive disabilities. At the same time, certification assessment could learn a thing or two from educational assessment’s validity documentation practices. This is especially true for programs that may not be NCCA accredited: shouldn’t we still expect documentation on when, how, and why decisions were made, even if a program is not tethered to the requirements of an accrediting body?

As a self-proclaimed validity evaluation nerd, I continually see room for improvement in assessment, documentation, and measurement practices. I see room for improvement in myself as well. Coming from an educational measurement focus, I know that my colleagues in I/O psychology provide a different measurement perspective from which I often benefit. I think practitioners from both perspectives could benefit from more frequent a) conversations across all aspects of our measurement fields (inclusive of certification, licensure, psychological, and educational testing), b) conversations among leaders of certification and state/federal/private educational programs, and c) collaboration in actual practice. Conversation and collaboration are how we learn novel ways of dealing with problems that others may have encountered and how we discover new methodologies in our own work.


 


References 

Accreditation Board for Specialty Nursing Certifications. (2013). Accreditation standards. New Jersey: Accreditation Board for Specialty Nursing Certifications. 

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.  

ISO Committee on Conformity Assessment. (2012). International standard ISO/IEC 17024: Conformity assessment – General requirements for bodies operating certification of persons. Geneva, Switzerland: International Organization for Standardization. 

National Commission for Certifying Agencies. (2014). Standards for the accreditation of certification programs. Washington, DC: Institute for Credentialing Excellence. 

Lauren Deters, PhD is a psychometrician at PSI, providing psychometric expertise and guidance to PSI’s certification clients, advising them on test development and committee selection as well as leading them through the test development process. Lauren is a life-long learner who is passionate about validity evaluation, advanced statistical analyses, and technical reporting. She earned her PhD in Educational Research, Measurement, and Evaluation at the University of North Carolina at Greensboro.