Applying Kane’s Validity Framework to Online OSCEs
The methods used to assess healthcare students' clinical ability, and to justify the resulting evaluations, have come to the fore in the context of competency-based healthcare education. The case for examining the validity of student assessments more robustly is particularly strong given the rapid transition to novel online examinations prompted by campus closures during COVID-19, and the repeated calls from students and educators for online assessments to continue in the long term. This paper describes the design and development of two online Objective Structured Clinical Examinations (OSCEs) and the application of Kane's (2013) established validity framework to the online OSCE for an undergraduate speech and language therapy programme. Assessment claims were produced, and evidence was gathered to support these claims and generate a validity argument, which identified strengths as well as gaps to be addressed in future online OSCEs. The process described here provides a theoretical and practical template for producing a validity argument for online OSCEs across a range of healthcare disciplines, and indeed for other common student assessment methods. Traditionally, a frequently reported method for judging the value of an assessment is to capture only the perceptions of students and educators, which overlooks many other theoretical aspects of validity. An in-depth focus on producing a validity argument can instead enable educators to make a more objective, structured, holistic, and critical decision about whether the intended uses of the grades produced by their chosen assessment can be defended.
All articles published in AISHE-J are released under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 licence.