TEST CASE ANNOTATION OF PROGRAMMING TASKS IN THE PROGCONT SYSTEM FOR DIFFERENTIAL EDUCATION AND STRENGTHENING STAND-ALONE PREPARATION OPPORTUNITIES
University of Debrecen, Faculty of Informatics (HUNGARY)
About this paper:
Appears in: EDULEARN21 Proceedings
Publication year: 2021
Pages: 5433-5441
ISBN: 978-84-09-31267-2
ISSN: 2340-1117
doi: 10.21125/edulearn.2021.1109
Conference name: 13th International Conference on Education and New Learning Technologies
Dates: 5-6 July, 2021
Location: Online Conference
Abstract:
At the Faculty of Informatics of the University of Debrecen, we have been using the self-developed ProgCont system for the automatic and objective evaluation of programming tasks since 2011. We initially developed the software to organise competitions, but we quickly incorporated it into the teaching of programming subjects. In response to the challenges caused by the pandemic, we have recently been working to make the system suitable for supporting stand-alone preparation, which plays an increasing role in education today.

Until 2021, evaluating a solution submitted to the ProgCont system returned only an accepted or rejected judgment. Even though the system evaluates each submission against several test cases, we could report only the acceptance rate. For a rejected solution, this reveals nothing about the programming mistakes that caused the failure. To improve our system's feedback, we extended it with the possibility to annotate a problem's test cases, assigning several different labels to each. The labels briefly describe which situations the attached test cases cover. With their help, students and instructors can see not only the percentage of test cases on which the submitted program failed but also how these test cases differ from the ones on which the solution was accepted.
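To illustrate the idea, the following minimal Python sketch shows how label-annotated test cases could be aggregated into label-level feedback. ProgCont's internal data model is not described in the paper, so the names (TestCase, summarise_feedback) and the data layout here are purely illustrative assumptions, not the system's actual implementation.

from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    labels: set[str] = field(default_factory=set)  # e.g. {"empty input", "large values"}

def summarise_feedback(results):
    """Aggregate (test case, passed?) pairs into per-label verdicts.

    Each label maps to 'passed', 'failed', or 'mixed', depending on the
    outcomes of all test cases carrying that label.
    """
    by_label = {}
    for case, passed in results:
        for label in case.labels:
            by_label.setdefault(label, []).append(passed)
    return {
        label: "passed" if all(outcomes)
               else "failed" if not any(outcomes)
               else "mixed"
        for label, outcomes in by_label.items()
    }

# Example: a submission that handles typical values but fails on empty input.
cases = [
    TestCase("t1", {"typical values"}),
    TestCase("t2", {"empty input"}),
    TestCase("t3", {"typical values", "large values"}),
]
print(summarise_feedback([(cases[0], True), (cases[1], False), (cases[2], True)]))
# {'typical values': 'passed', 'empty input': 'failed', 'large values': 'passed'}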

Creating expressive annotations is a complex task, usually requiring the test cases to be rethought and new ones created. We first tested this idea on a problem that was publicly available and therefore widely solved. After early promising results, armed with this first experience, we focused on a previously organised exam and its tasks. This experiment makes it possible to evaluate the students' performance, identify typical programming mistakes, and compare the class groups involved in the exam.

In our article, taking advantage of the test case annotation option, we examine the set of tasks from the selected exam. The exam contained four assignments and was completed by three class groups, with 16-17 participants in each. Three of the tasks received more than 100 submissions, which can be effectively analysed using standard pedagogical research methods. We compare the performance of individual students and of each group to give instructors deeper insight, thus amplifying the opportunity for differentiated teaching. Given that exam tasks are always made public afterwards, a new set of well-annotated tasks becomes available for preparation, making it much easier for students to identify mistakes while programming. An annotated set of tasks can be used much more effectively for stand-alone preparation, the role of which has grown considerably during the pandemic.
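A group-level comparison of the kind described above could, for instance, aggregate per-label failure rates per class group. The sketch below uses invented group names, labels, and submission counts (the real exam data appears in the paper itself); it only illustrates the shape of the analysis.

from collections import defaultdict

# Hypothetical data: each rejected submission contributes the labels of the
# test cases it failed; group names and numbers are illustrative only.
failed_labels = [
    ("Group 1", "empty input"),
    ("Group 1", "empty input"),
    ("Group 2", "large values"),
    ("Group 3", "empty input"),
    ("Group 3", "large values"),
]
submissions = {"Group 1": 40, "Group 2": 35, "Group 3": 38}

counts = defaultdict(lambda: defaultdict(int))
for group, label in failed_labels:
    counts[group][label] += 1

for group in sorted(submissions):
    rates = {label: n / submissions[group] for label, n in counts[group].items()}
    print(group, {label: f"{rate:.0%}" for label, rate in rates.items()})
# Group 1 {'empty input': '5%'}
# Group 2 {'large values': '3%'}
# Group 3 {'empty input': '3%', 'large values': '3%'}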
Keywords:
ProgCont system, automatic evaluation, test case annotations, programming tasks.