About this paper

Appears in:
Pages: 6031-6037
Publication year: 2017
ISBN: 978-84-697-3777-4
ISSN: 2340-1117
doi: 10.21125/edulearn.2017.2367

Conference name: 9th International Conference on Education and New Learning Technologies
Dates: 3-5 July, 2017
Location: Barcelona, Spain

RELIABILITY ANALYSIS IN SCORING RUBRICS FOR ASSESSING PROBLEM SOLVING

Problem solving is among the most widely used methodologies for competency-based education in engineering programs. The ability to find the most appropriate solution to a complex problem constitutes a core competence, frequently assessed using rubrics. Rubrics are detailed scoring guides that list the assessment criteria and the expected levels of quality for each criterion. This evaluation instrument allows for reliable assessment of multidimensional performances, while supporting formative assessment through information about the expected progress of students. Although rubrics are frequently used at the school level, there is still some disagreement concerning reliability issues in higher education. The internal consistency of the scores can be influenced by variation across different raters (inter-rater reliability) and across occasions within a single rater (intra-rater reliability). Researchers have shown that the latter source of variability might not be a major concern, provided that raters are supported by a rubric. In fact, previous work has mainly focused on proposing rubrics aimed at assessing problem solving, using different reliability measures. However, there has been little discussion of the criteria for choosing the best method for performing the reliability analysis according to the psychometric properties of each rubric. This paper offers practical guidelines for examining the consistency of rubric scores, through the analysis of different methodologies for assessing inter-rater variability (percentage of agreement, Cohen’s kappa, correlation coefficients) and internal consistency (Cronbach’s alpha, composite reliability, average variance extracted). To this end, we collected the scores obtained by a group of 61 students enrolled in a Bachelor’s Degree in Energy Engineering. Students were asked to solve an introductory statistics problem, and their solutions were assessed by five professors in this field of study.
Findings showed notable differences between the conclusions reached through each methodology for assessing reliability, in good agreement with previous research. We are confident that our research will be valuable for all professors interested in performing exhaustive and useful reliability analyses based on rubric scores in higher education. Thus, this research may help professors demonstrate that their rubric-based assessments are trustworthy, grounded in evidence, and free of biased judgement.
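The inter-rater and internal-consistency measures named in the abstract can be computed directly from their textbook definitions. The sketch below (not the authors' actual analysis; the score vectors are hypothetical rubric scores on a 1–4 scale) shows percentage of agreement and Cohen's kappa for a pair of raters, and Cronbach's alpha across rubric criteria:

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Fraction of items on which two raters assign the same score."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(r1)
    po = percent_agreement(r1, r2)                      # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    # expected agreement under independent marginal distributions
    pe = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / (n * n)
    return (po - pe) / (1 - pe)

def cronbachs_alpha(items):
    """Cronbach's alpha; `items` is one score list per rubric criterion."""
    k = len(items)
    def var(x):  # sample variance
        m = sum(x) / len(x)
        return sum((v - m) ** 2 for v in x) / (len(x) - 1)
    totals = [sum(student) for student in zip(*items)]  # total score per student
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical scores from two raters over five student solutions
rater1 = [3, 2, 4, 3, 1]
rater2 = [3, 2, 3, 3, 1]
print(percent_agreement(rater1, rater2))  # 0.8
print(round(cohens_kappa(rater1, rater2), 3))
```

A high percentage of agreement with a low kappa typically signals that agreement is driven by a skewed score distribution rather than genuine rater consistency, which is one reason the paper compares several measures side by side.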
@InProceedings{CONCHADO2017REL,
author = {Conchado, A. and Mart{\'{o}}n, I. and Alcover Arandiga, R. and Villanueva L{\'{o}}pez, J.F. and B{\'{a}}s Cerd{\'{a}}, M.C. and V{\'{a}}zquez Barrachina, E. and S{\'{a}}nchez Gald{\'{o}}n, A.I. and Carri{\'{o}}n Garc{\'{i}}a, A. and Carot Sierra, J.M.},
title = {RELIABILITY ANALYSIS IN SCORING RUBRICS FOR ASSESSING PROBLEM SOLVING},
series = {9th International Conference on Education and New Learning Technologies},
booktitle = {EDULEARN17 Proceedings},
isbn = {978-84-697-3777-4},
issn = {2340-1117},
doi = {10.21125/edulearn.2017.2367},
url = {http://dx.doi.org/10.21125/edulearn.2017.2367},
publisher = {IATED},
location = {Barcelona, Spain},
month = {3-5 July, 2017},
year = {2017},
pages = {6031-6037}}
TY - CONF
AU - A. Conchado
AU - I. Martón
AU - R. Alcover Arandiga
AU - J.F. Villanueva López
AU - M.C. Bás Cerdá
AU - E. Vázquez Barrachina
AU - A.I. Sánchez Galdón
AU - A. Carrión García
AU - J.M. Carot Sierra
TI - RELIABILITY ANALYSIS IN SCORING RUBRICS FOR ASSESSING PROBLEM SOLVING
SN - 978-84-697-3777-4/2340-1117
DO - 10.21125/edulearn.2017.2367
PY - 2017
Y1 - 3-5 July, 2017
CI - Barcelona, Spain
JO - 9th International Conference on Education and New Learning Technologies
JA - EDULEARN17 Proceedings
SP - 6031
EP - 6037
ER -
A. Conchado, I. Martón, R. Alcover Arandiga, J.F. Villanueva López, M.C. Bás Cerdá, E. Vázquez Barrachina, A.I. Sánchez Galdón, A. Carrión García, J.M. Carot Sierra (2017) RELIABILITY ANALYSIS IN SCORING RUBRICS FOR ASSESSING PROBLEM SOLVING, EDULEARN17 Proceedings, pp. 6031-6037.