DO WE EVALUATE OUR MEDICAL SCHOOL STUDENTS PROPERLY?
A. Pino Vázquez1, H. González García1, M.B. Coco Martín2, A. Mayo Iscar1, R. Cuadrado Asensio2, C. Villa Francisco1, A. López Miguel2, E. Urbaneja Rodríguez1, C. Medina Pérez3, M.J. Maldonado López2, S. Rellán Rodríguez1, M.J. Martínez Sopena1, F.J. Alvarez Guisasola1
1Faculty of Medicine/University of Valladolid (SPAIN)
2IOBA/University of Valladolid (SPAIN)
3IBIOMED/University of León (SPAIN)
Introduction:
Health education requires the compilation of the knowledge, skills, attitudes and values needed to train future professionals. It is necessary to establish an appropriate, competency-based educational process, with strategies for a subsequent comprehensive assessment of the skills acquired. The Objective Structured Clinical Examination (OSCE) is a test designed to assess clinical competencies. The value of adopting this evaluation method is that it reflects the learning that takes place during the assessment itself, not only what students know will be evaluated. The strength of the test lies in its mix of assessment methods, which allows it to explore three of the four levels of Miller's pyramid: knows, knows how, and shows how. The teacher can observe students interacting with patients and assess their clinical skills, reasoning, problem solving, diagnostic integration, and communication and interpersonal skills, i.e., a comprehensive assessment that demands professionalism.
Objectives:
To introduce a new competency evaluation methodology, the Objective Structured Clinical Examination (OSCE), and to compare the results with those obtained by the same students in a traditional evaluation (a short-answer test and a multiple-choice questionnaire).
Material and Methods:
After carrying out a program of educational innovation in the subject Pediatrics "Clinical Practice" (structured clinical rotation, practical content in a virtual classroom, and simulation seminars), a systematic competency assessment (OSCE) was performed. The test was designed as a series of ten simulated clinical scenarios reflecting actual practice, presented serially at physical locations called stations. In addition, a literature search exercise and a knowledge test were administered in a multimedia classroom. The marks obtained by 6th-year students of the Faculty of Medicine of Valladolid in the Pediatrics course, both in the competency assessment (OSCE) and in the traditional examination (with classical teaching methodology and assessment), were analyzed. We calculated the intraclass correlation coefficient (ICC) to measure the agreement between the two test results.
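As an illustration of the agreement analysis described above, the following sketch computes a two-way, single-measure ICC for absolute agreement, ICC(2,1), together with Pearson's r for paired scores. The score lists are invented for demonstration only; the abstract does not specify which ICC variant or software was used, so this is an assumption about the analysis, not a reproduction of it.

```python
# Hypothetical sketch of the agreement analysis: ICC(2,1) for absolute
# agreement and Pearson's r between paired per-student scores.
# All data below are invented for illustration.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def icc_2_1(x, y):
    """Two-way random-effects, single-measure ICC for absolute agreement,
    for two measurements (e.g. OSCE and traditional exam) per subject."""
    n, k = len(x), 2
    grand = (sum(x) + sum(y)) / (n * k)
    row_means = [(a + b) / k for a, b in zip(x, y)]   # per-student means
    col_means = [sum(x) / n, sum(y) / n]              # per-test means
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((v - grand) ** 2 for v in x + y)
    ss_error = ss_total - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)                          # between-subjects MS
    ms_c = ss_cols / (k - 1)                          # between-tests MS
    ms_e = ss_error / ((n - 1) * (k - 1))             # residual MS
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Invented example: OSCE vs. traditional exam scores for five students.
osce = [7.0, 6.5, 8.0, 5.5, 7.5]
exam = [8.0, 7.0, 9.0, 6.5, 7.0]
print(f"ICC(2,1) = {icc_2_1(osce, exam):.2f}, r = {pearson(osce, exam):.2f}")
```

A low ICC alongside a low Pearson r, as reported in the Results, indicates that the two tests neither agree in absolute terms nor rank students consistently.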
Results:
The mean total score obtained in the OSCE was 7.03 (SD ±1.17; range: 4.5 to 8.8). The mean grade obtained in the classical examination was 7.55 points out of 10 (SD ±1.53; range: 3.79 to 9.26). The ICC for agreement between the two tests was 0.24, and the Pearson correlation coefficient was very low at 0.34. This level of agreement can be considered negligible given that the two tests should, at least theoretically, give the same results. Each station's score was also correlated with the remaining stations, with the total score, and with the grades obtained in the traditional examination.
Conclusions:
The use of the objective structured clinical examination has been a rewarding experience for both teachers and students. There is no strong relationship between the final marks obtained in the traditional part of the teaching and the final grades of the competency assessment carried out after a program of educational innovation, which leads us to take a more comprehensive view when designing evaluation tests for our students at different skill levels.