BALANCING THE SCALES – REVISITING THE RELIABILITY OF A GLOBAL RATING SCORE OF ORAL LANGUAGE ASSESSMENT WITHIN A CLINICAL CONTEXT
University of Cape Town (SOUTH AFRICA)
About this paper:
Appears in: ICERI2016 Proceedings
Publication year: 2016
Pages: 137-145
ISBN: 978-84-617-5895-1
ISSN: 2340-1095
doi: 10.21125/iceri.2016.1029
Conference name: 9th annual International Conference of Education, Research and Innovation
Dates: 14-16 November, 2016
Location: Seville, Spain
Abstract:
At the University of Cape Town (UCT), an English-medium tertiary institution, considerable effort has been made to promote multilingualism. Since 2003 the Faculty of Health Sciences in particular has included the learning of Afrikaans and isiXhosa as an integral part of the Bachelor of Medicine and Bachelor of Surgery (M.B.Ch.B.) curriculum. In years two and three of the six-year curriculum, Afrikaans and isiXhosa are assessed as individual stations in an integrated Objective Structured Clinical Examination (OSCE). Since the inception of the language courses, the tool for assessing oral language competence has evolved systematically to meet the needs of an authentic clinical context whilst simultaneously pursuing validity and reliability. In its current form, the assessment tool records a test taker’s score as a percentage and combines a criterion-based assessment with a performance-based index of oral competence. A key feature of the tool is that it enables detailed feedback on competency in the target language to be communicated to test takers after an assessment event. It is this feature in particular that has aided the migration of oral language competency assessment at UCT away from assessment by means of a global rating scale alone. After more than a decade of refining the oral language assessment tool, it is worthwhile to revisit scoring by global rating and to establish how it compares to the scoring component of the current tool.
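To make the hybrid scoring approach concrete, the sketch below combines a criterion checklist with a single performance-based index into a percentage. This is a minimal Python illustration only; the weighting, the checklist structure and the function name are assumptions made for exposition and do not reproduce the paper's actual instrument.

    def hybrid_score(criterion_items, performance_index, weight_criterion=0.7):
        """Combine a criterion checklist and a performance index into a percentage.

        criterion_items: list of booleans, True where a criterion was met.
        performance_index: float in [0, 1], the examiner's overall rating.
        weight_criterion: assumed weighting; the actual instrument may differ.
        """
        criterion_score = sum(criterion_items) / len(criterion_items)
        combined = (weight_criterion * criterion_score
                    + (1 - weight_criterion) * performance_index)
        return round(100 * combined, 1)

    # Example: 8 of 10 criteria met, performance index of 0.75.
    print(hybrid_score([True] * 8 + [False] * 2, 0.75))  # 78.5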

This paper principally investigates the reliability of a global rating (general impression) score allocated by an examiner in an Afrikaans station during an OSCE, relative to the score otherwise obtained from the existing hybrid criterion-based assessment tool. The data informing this study were recorded chronologically and reflect the scoring patterns of participating examiners over a two-day assessment event.

The data allow three critical points to be explored. Firstly, the average variance in scores per examiner is presented in order to determine the range of variance across examiners and whether the variance in scores is significant relative to the nature and purpose of the assessment. Secondly, factors contributing to the variance between the two scores are discussed. These variables include the level of experience of the examiner, the timing of the assessment, a change in the topic of assessment, examiner fatigue, and the effect of scheduled rest periods. Lastly, the paper considers whether assessment of oral language competence within specified contexts by general impression alone has any merit.
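As a sketch of the first analysis point, the following Python fragment computes the per-examiner difference between the global rating score and the hybrid tool's score, then summarises its mean, variance and range across examiners. The file name and column names (scores.csv, examiner, global_score, criterion_score) are hypothetical; the abstract does not specify the paper's actual data layout.

    import pandas as pd

    # Hypothetical input: one row per test taker, with the examiner's
    # global rating score and the hybrid criterion-based score (both 0-100).
    scores = pd.read_csv("scores.csv")

    # Signed difference between the two scoring methods for each candidate.
    scores["diff"] = scores["global_score"] - scores["criterion_score"]

    # Per-examiner summary: mean difference (bias), variance of the
    # difference, and the number of candidates examined.
    summary = scores.groupby("examiner")["diff"].agg(["mean", "var", "count"])
    print(summary)

    # Spread of the mean difference across examiners, i.e. how far apart
    # the most lenient and strictest global raters sit relative to the tool.
    print("Range of mean difference:", summary["mean"].max() - summary["mean"].min())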
Keywords:
Oral language assessment within a clinical context, oral language competency testing in Health Sciences, reliability of global rating scores.