“SCHEMING” TO OPTIMISE MARKING IN COMPUTER PROGRAMMING: FROM MEMOS TO RUBRICS
Independent Institute of Education (Pty) Ltd. (SOUTH AFRICA)
About this paper:
Appears in: ICERI2012 Proceedings
Publication year: 2012
Pages: 869-877
ISBN: 978-84-616-0763-1
ISSN: 2340-1095
Conference name: 5th International Conference of Education, Research and Innovation
Dates: 19-21 November, 2012
Location: Madrid, Spain
Abstract:
This paper reports on a study that examined the reliability of students’ assessment results in the field of computer programming. Traditionally, the assessment of computer programming students has been an area of concern. A memorandum supplied to the lecturers/markers by the examiner includes source code for a model answer, with marks allocated to specific lines of code. Markers allocate marks to students’ programs by comparing them against this model answer, which often results in students being inappropriately marked down when their solution differs from the examiner’s, or marked up when it happens to resemble the provided solution. The problem with the traditional memorandum style of awarding marks according to a “point-per-correct-statement” is that students are graded on how closely their solution matches the examiner’s, with little consideration given to creativity and originality. This study originated from the authors’ need to explore possible ways to achieve a more objective, consistent, and fair assessment of a student’s programming solution.
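The abstract does not reproduce such a memorandum; as a purely hypothetical sketch of the practice described above, a memo for a small task such as “sum the integers 1 to 10” might attach marks to individual C# statements as follows (the task, marks and class name are illustrative, not taken from the paper):

using System;

class ModelAnswer
{
    static void Main()
    {
        int total = 0;                    // 1 mark: declare and initialise the accumulator
        for (int i = 1; i <= 10; i++)     // 2 marks: loop over the required range
        {
            total += i;                   // 1 mark: accumulate the running sum
        }
        Console.WriteLine(total);         // 1 mark: display the result
    }
}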
A literature study revealed that the strategies used to grade programming assessments have evolved from grading students against a fixed memorandum, where marks are allocated to individual programming statements, towards a more holistic and inclusive methodology using rubrics and marking schemes. These grading tools appear to enable the marker to assess the creativity of the student’s solution and to allow insight into the efficiency of the code and the user-friendliness of the interface design.
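By way of a hypothetical contrast with the memo sketch above, the functionally equivalent C# answer below shares almost no statements with the model answer, so statement-by-statement marking would award it very little, whereas a rubric criterion along the lines of “produces the correct result using appropriate language features” could still give it credit:

using System;
using System.Linq;

class AlternativeAnswer
{
    static void Main()
    {
        // Same result as the model answer, but expressed as a LINQ query
        // rather than an explicit loop; no line matches the memo.
        Console.WriteLine(Enumerable.Range(1, 10).Sum());
    }
}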
Based on this background, the authors set out to examine the objectivity and consistency of the marking practice of a group of lecturers/markers of a C# program when using various scoring tools. To this end, the lecturers were provided with a deliberately flawed example of a student’s C# programming solution and were asked to mark it. Firstly, they were asked to use the traditional “mark-per-correct-statement” approach. Thereafter, they were asked to use two variations of marking schemes/rubrics with different formats and varying levels of detail. The marking process and the marks awarded by the lecturers were analysed and compared in terms of both inter-marker and intra-marker reliability.
The preliminary findings suggest that marking schemes/rubrics helped lecturers/markers avoid awarding marks solely on the basis of how closely a student’s solution mirrored the examiner’s. Instead, these schemes/rubrics provided the structure and guidance that enabled lecturers to award marks for students’ problem-solving ability, creativity, the aesthetics of their graphical user interfaces, and the use of good programming practice and programming standards. In addition, marking schemes/rubrics improved the consistency of assessment results.
We conclude that the use of marking schemes/rubrics to assess computer programming contributes to objectivity, consistency, and fairness.
The authors intend to conduct further analysis and to use the results of the study to inform the design of more objective, consistent, and fair scoring/grading tools that focus on students’ ability to analyse the problem, design a solution, create the solution and, finally, implement it. It is hoped that this research will contribute to the body of knowledge related to the optimisation of assessment practice in the field of computer programming.
Keywords:
Assessment strategy, programming, rubrics, grading scales, marking schemes.