Universitat Politècnica de València (SPAIN)
About this paper:
Appears in: EDULEARN17 Proceedings
Publication year: 2017
Pages: 6031-6037
ISBN: 978-84-697-3777-4
ISSN: 2340-1117
doi: 10.21125/edulearn.2017.2367
Conference name: 9th International Conference on Education and New Learning Technologies
Dates: 3-5 July, 2017
Location: Barcelona, Spain
Problem solving is among the most widely used methodologies for competency-based education in engineering programs. The ability to find the most appropriate solution to a complex problem constitutes a core competence, frequently assessed using rubrics. Rubrics are detailed scoring guides that list the assessment criteria and the expected levels of quality for each criterion. This evaluation instrument allows for reliable assessment of multidimensional performances, while supporting formative assessment through information about the expected progress of students. Although rubrics are frequently used at the school level, there is still some disagreement concerning reliability issues in higher education. The internal consistency of the scores can be influenced by variation across different raters (inter-rater reliability) and across occasions within a single rater (intra-rater reliability). Researchers have shown that this latter source of variability might not be a major concern, provided that raters are supported by a rubric. In fact, previous work has mainly focused on proposing rubrics for assessing problem solving, using different reliability measures. However, there has been little discussion of the criteria for choosing the best method for performing the reliability analysis according to the psychometric properties of each rubric. This paper offers practical guidelines for examining the consistency of rubric scores, through the analysis of different methodologies for assessing inter-rater variability (percentage of agreement, Cohen's kappa, correlation coefficients) and internal consistency (Cronbach's alpha, composite reliability, average variance extracted). We collected the scores obtained by a group of 61 students enrolled in a Bachelor's Degree in Energy Engineering. Students were asked to solve an introductory statistics problem, and their solutions were assessed by five professors in this field of study.
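To make the reliability measures named above concrete, the following is a minimal sketch of how percentage of agreement, Cohen's kappa, and Cronbach's alpha can be computed. The rater and criterion scores here are hypothetical toy data for illustration only, not the study's data, and the implementation uses the standard textbook formulas rather than any particular statistics package.

```python
# Hypothetical toy data: two raters score 8 student solutions on a 1-4 rubric scale.
rater_a = [3, 4, 2, 3, 1, 4, 2, 3]
rater_b = [3, 4, 3, 3, 1, 4, 2, 2]

# Percentage of exact agreement between the two raters.
n = len(rater_a)
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Cohen's kappa: agreement corrected for the agreement expected by chance.
categories = set(rater_a) | set(rater_b)
p_o = agreement
p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories)
kappa = (p_o - p_e) / (1 - p_e)

# Cronbach's alpha: internal consistency across rubric criteria.
# Rows = rubric criteria (items), columns = students (hypothetical scores).
items = [
    [3, 4, 2, 3, 1, 4, 2, 3],
    [3, 3, 2, 4, 2, 4, 1, 3],
    [4, 4, 3, 3, 1, 3, 2, 2],
]

def variance(xs):
    """Sample variance (n - 1 in the denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

k = len(items)
totals = [sum(col) for col in zip(*items)]  # each student's total score
alpha = k / (k - 1) * (1 - sum(variance(it) for it in items) / variance(totals))

print(f"agreement={agreement:.2f}  kappa={kappa:.2f}  alpha={alpha:.2f}")
# prints: agreement=0.75  kappa=0.65  alpha=0.84
```

In practice kappa is preferred over raw agreement for categorical rubric levels because it discounts chance agreement, while alpha summarizes how consistently the rubric's criteria measure the same underlying construct.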
Findings showed interesting differences between the conclusions obtained through each reliability methodology, in good agreement with previous research. We are confident that our research will be valuable for all professors interested in performing exhaustive and useful reliability analyses based on rubric scores in higher education. Thus, this research may help professors demonstrate that their rubric-based assessments are trustworthy, as well as grounded in evidence and free from biased judgement.
Keywords: Rubrics, problem solving, reliability.