RUBRICS FOR ASSESSMENT: IMPROVING THE ROLE OF THE RATER
University of the Basque Country (SPAIN)
About this paper:
Appears in: EDULEARN23 Proceedings
Publication year: 2023
Pages: 8454-8460
ISBN: 978-84-09-52151-7
ISSN: 2340-1117
doi: 10.21125/edulearn.2023.2215
Conference name: 15th International Conference on Education and New Learning Technologies
Dates: 3-5 July, 2023
Location: Palma, Spain
Abstract:
Rubrics are instruments for learning and assessment with a solid tradition in education that can improve the quality of the teaching-learning process (Elosua, 2022a). Especially suited for the evaluation of production tasks, rubrics help to specify the expected general performance and the levels of competence that a person can reach in the development of a skill. As Brookhart (2013, p. 4) says, a rubric is "a coherent set of criteria for students' work that includes descriptions of levels of performance quality on the criteria". A well-constructed rubric operationalizes learning objectives by listing criteria and, for each criterion, describing levels of quality.
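To make the criteria-by-levels structure concrete, an analytic rubric can be represented as a mapping from criteria to ordered level descriptors, with one rating awarded per criterion. The Python sketch below is purely illustrative; the criterion names and descriptors are invented for the example and do not come from the paper.

    # Illustrative analytic rubric: criteria x ordered levels of quality.
    # Criterion names and descriptors are hypothetical, not from the paper.
    rubric = {
        "coherence": {1: "Ideas are disconnected",
                      2: "Ideas are partially ordered",
                      3: "Clear, logical progression of ideas"},
        "accuracy":  {1: "Frequent errors impede meaning",
                      2: "Occasional errors, meaning preserved",
                      3: "Consistently accurate language use"},
    }

    def analytic_score(ratings: dict) -> int:
        """Sum the level awarded on each criterion (analytic scoring)."""
        assert set(ratings) == set(rubric), "exactly one rating per criterion"
        return sum(ratings.values())

    print(analytic_score({"coherence": 3, "accuracy": 2}))  # -> 5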

Those levels of quality are assessed by judges, so scoring is a key step in guaranteeing the validity of the assessment process. While in many testing contexts scoring is almost undisputed, in rater-mediated assessment it is still a matter of discussion (Elosua, 2022b). The systematic biases and/or measurement errors that raters can introduce (for example, severity, halo effect, central tendency, range restriction, lack of precision, inconsistency, or the very interpretation of the evaluation criteria and scales) can undermine the reliability and validity of the scores and therefore call into question the quality and equity of the assessment.
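Before fitting formal models such as many-facet Rasch analysis, several of these effects can be screened with simple descriptive statistics. The Python sketch below, run on synthetic data, is one plausible screening approach, not the procedure used in this study: severity as a rater's mean deviation from examinee means, central tendency or range restriction as an unusually small score spread, and halo as a high mean correlation between criteria within a rater.

    # Hedged sketch: descriptive screening for rater effects on synthetic data.
    # Column names, scale, and statistics are illustrative assumptions.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    # Long format: one row per (rater, examinee, criterion) score on a 1-5 scale.
    df = pd.DataFrame({
        "rater": rng.integers(0, 20, 3000),
        "examinee": rng.integers(0, 100, 3000),
        "criterion": rng.integers(0, 5, 3000),
        "score": rng.integers(1, 6, 3000),
    })

    # Severity/leniency: rater's mean deviation from each examinee's overall mean.
    examinee_mean = df.groupby("examinee")["score"].transform("mean")
    df["dev"] = df["score"] - examinee_mean
    severity = df.groupby("rater")["dev"].mean()

    # Central tendency / range restriction: unusually small spread of awarded scores.
    spread = df.groupby("rater")["score"].std()

    # Halo: high mean correlation between criteria within a rater suggests
    # the criteria are not being distinguished from a global impression.
    wide = df.pivot_table(index=["rater", "examinee"], columns="criterion", values="score")
    halo = wide.groupby("rater").apply(
        lambda g: np.nanmean(g.corr().values[np.triu_indices(5, 1)]))

    print(pd.DataFrame({"severity": severity, "spread": spread, "halo": halo}).head())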
Given that scoring is highly susceptible to the influence of raters, it is important to recognize that biases can arise in the application of rubrics. In the field of education, where student performance is constantly being measured, any operational procedure for assigning scores based on rubrics should be improved by analyzing the way in which raters interpret and apply those rubrics.

The aim of the present study is to identify types of biases or sources of error associated with rater-mediated assessment in a written production task. It is an empirical study with a sample of 1,000 examinees and 300 raters, who independently scored each task using an analytic approach based on five criteria. The results show the presence of distinct rater profiles. The paper offers some recommendations to control the bias associated with rater-mediated assessment.
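One generic way to surface rater profiles of this kind is to cluster raters on effect statistics such as severity, spread, and halo. The sketch below, using scikit-learn on synthetic statistics for 300 raters, is an assumed illustration of that idea, not the analysis reported in the paper.

    # Hedged sketch: grouping raters into candidate profiles by clustering
    # per-rater effect statistics. All values here are synthetic.
    import numpy as np
    import pandas as pd
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    # Synthetic per-rater statistics (300 raters, matching the study's sample size).
    stats = pd.DataFrame({
        "severity": rng.normal(0.0, 0.5, 300),   # mean deviation from examinee means
        "spread":   rng.uniform(0.4, 1.5, 300),  # SD of the scores a rater awards
        "halo":     rng.uniform(0.1, 0.9, 300),  # mean inter-criterion correlation
    })

    # Standardize, then cluster raters into a small number of candidate profiles.
    X = StandardScaler().fit_transform(stats)
    stats["profile"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print(stats.groupby("profile").mean())  # e.g., a severe, range-restricted profile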

References:
[1] Brookhart, S. M. (2013). How to create and use rubrics for formative assessment and grading. Association for Supervision & Curriculum Development.
[2] Elosua, P. (2022a). Validity evidences for scoring procedures of a writing assessment task: A case study on consistency, reliability, unidimensionality and prediction accuracy. Assessing Writing, 54, 100669. https://doi.org/10.1016/j.asw.2022.100669
[3] Elosua, P. (2022b). El desafío de la evaluación formativa en la educación superior [The challenge of formative assessment in higher education]. In N. Esteban, C. Cáceres, D. Becerra, O. Borrás, I. Ros, & J. L. López (Eds.), El reto de la evaluación en la enseñanza universitaria y otras experiencias educativas [The challenge of assessment in university teaching and other educational experiences] (pp. 31-38). Dykinson, S. L.
Keywords:
Rubrics, assessment, rater, biases, systematic biases.