REPLACING A STRUCTURED ORAL EXAMINATION BY A COMPUTER-BASED ASSESSMENT FOR MEDICAL MASTER STUDENTS: FROM ELOQUENCE-BASED TO EVIDENCE-BASED EXAMINATION?
1 Faculty of Medicine Geneva (SWITZERLAND)
2 University Hospital of Geneva (SWITZERLAND)
3 Institute for Simulation & Interprofessional Studies, Seattle, WA (UNITED STATES)
About this paper:
Appears in: INTED2011 Proceedings
Publication year: 2011
Pages: 5686-5690
ISBN: 978-84-614-7423-3
ISSN: 2340-1079
Conference name: 5th International Technology, Education and Development Conference
Dates: 7-9 March, 2011
Location: Valencia, Spain
Abstract:
Background
The last three years (master) of the six-year medical curriculum at the University of Geneva are dedicated to the acquisition of clinical knowledge and competences in clinical settings. In areas such as internal medicine or paediatrics, formal evaluation is usually divided into two phases: an objective structured clinical examination (OSCE) with standardised patients, and a structured oral examination (SOE) based on the resolution of paper-based clinical scenarios, assessed by a principal examiner (a faculty member) and a co-examiner (a chief resident). In both formats, the examiners fill in a grid from which a score is computed. Since 2008, the SOE has been progressively replaced by a computer-based assessment (CBA) that evaluates the students’ ability to solve several clinical scenarios. Each scenario is divided into sections of one or more questions, and these sections are presented sequentially during the test, so as to mimic the SOE.
Methods
Within the internal medicine setting, we quantified the human resources required for the SOE and for the CBA. We compared the main results of the assessments from the last two years of the SOE and the first two years of the CBA. Finally, we investigated how these results correlated with the continuous bedside assessments made throughout the year in clinical settings.
Results
Compared to a 40-minute oral examination per student in a class of 120 students, administering the CBA saved manpower (9.8 man-days for faculty members, 9.3 for chief residents, 2.5 for external clinical experts, and 4.0 for office assistants), while demand for computer scientists increased by 2.9 man-days. The manpower saved on a single examination easily covered the annual licence fees for the online examination program.
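Summing the reported figures gives the net balance per examination session, a back-of-the-envelope total derived here from the numbers above rather than stated separately in the study:

(9.8 + 9.3 + 2.5 + 4.0) - 2.9 = 22.7 \text{ man-days saved}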
The internal consistency (Cronbach alpha coefficients) was similar for both types of examination (between 0.8 and 0.9). The CBA contained about half as many items (12.7 vs. 25.9; p<.001), but these items were usually more complex than the dichotomous ones of the SOE grid. A larger proportion of CBA items was removed before computing the final ranking (7.3% vs. 1.0%; p=.001). Across the clinical scenarios, the average score was lower for the CBA (-3.0%; p<.001); intra-student variability was similar (p=.467), but between-student variability was larger for the SOE (p<.001).
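For reference, Cronbach's alpha is the standard internal-consistency statistic for a test of k items; writing \sigma^2_{Y_i} for the variance of item i and \sigma^2_X for the variance of the total score, it is

\alpha = \frac{k}{k-1} \left( 1 - \frac{\sum_{i=1}^{k} \sigma^2_{Y_i}}{\sigma^2_X} \right)

Values between 0.8 and 0.9, as observed for both examination formats, are conventionally regarded as good reliability.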
The CBA correlated slightly better than the SOE with the continuous bedside assessments made in the clinical settings (R2=0.181 vs. R2=0.105), especially for the items concerning clinical documentation and patient care management.
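Assuming R2 here denotes the squared Pearson correlation, as is usual for a simple linear fit, these values correspond to correlations of roughly r = \sqrt{0.181} \approx 0.43 for the CBA and r = \sqrt{0.105} \approx 0.32 for the SOE.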
Conclusion
The shift from oral to computer-based assessment has been broadly accepted by both the students and their examiners. The lack of major changes in reliability and the improved correlation with the continuous assessment in clinical settings support our intent to extend the CBA to other clinical learning units of our curriculum. We emphasize the importance of the quality of the CBA test items, since, unlike the SOE, this format does not allow the examiners to clarify ambiguous questions.
Keywords:
Medicine, computer-based assessment.