USAGE OF AUTOMATED ASSESSMENT TOOLS FOR THE EVALUATION OF GUI-BASED PROGRAMMING ASSIGNMENTS
Universidad de Leon (SPAIN)
About this paper:
Appears in: INTED2018 Proceedings
Publication year: 2018
Pages: 3075-3080
ISBN: 978-84-697-9480-7
ISSN: 2340-1079
doi: 10.21125/inted.2018.0589
Conference name: 12th International Technology, Education and Development Conference
Dates: 5-7 March, 2018
Location: Valencia, Spain
Abstract:
Through Programming II (Programación II), a subject included in the second year of the Degree in Computer Science at the University of León (Spain) and taught by lecturers of the Department of Mechanics, Computing and Aeronautics Engineering of that University, the students are expected to develop skills in advanced Java programming. Several Java-related topics are taught during the four months the subject lasts. Backtracking, concurrent programming and graphical user interfaces (GUIs), among other topics, make it a very heterogeneous subject. Every student must be evaluated on at least one practical exercise per topic. The final grade of the subject is, therefore, the sum of the marks of at least five practical exercises and one global exam.

Graphical User Interfaces (GUIs) are critical components of today's software engineering discipline, and in our subject we stress the importance of learning to create them. To develop their GUI programming skills, the students are introduced to the Java Swing library. Usually, the last and most important assignment proposed to the students of the Programming II subject involves the creation of a GUI with this library.
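For illustration, a minimal sketch of the kind of Swing window such an assignment involves is shown below; the class name, components and behaviour are hypothetical and do not reproduce the real exercise.

import java.awt.BorderLayout;
import java.awt.Color;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

// Minimal, illustrative Swing window; not the actual assignment.
public class GreetingWindow extends JFrame {

    public GreetingWindow() {
        super("Greeting");
        JLabel message = new JLabel("Hello!");
        JButton greetButton = new JButton("Greet");
        greetButton.setBackground(Color.LIGHT_GRAY);
        greetButton.addActionListener(e -> message.setText("Hello, Programming II!"));

        setLayout(new BorderLayout());
        add(message, BorderLayout.CENTER);
        add(greetButton, BorderLayout.SOUTH);
        setDefaultCloseOperation(EXIT_ON_CLOSE);
        pack();
    }

    public static void main(String[] args) {
        // Build and show the window on the Swing event dispatch thread.
        SwingUtilities.invokeLater(() -> new GreetingWindow().setVisible(true));
    }
}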

The Automated Assessment Tools (AATs) currently used at the University of León, Moodle and the Virtual Programming Laboratory (VPL), allow the automated evaluation of most of the students' programming assignments. However, when the students are asked to develop an assignment with a GUI, the GUI itself (e.g. the correct position of a button, the background color of a window, etc.) cannot be evaluated with Moodle or VPL, as these tools can only test exercises that show their output through the system console.
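For context, the style of console-oriented check that these tools can automate may be sketched as a JUnit test that captures standard output and compares it with the expected text; the Factorial class below is hypothetical and the sketch does not reproduce VPL's actual evaluation scripts.

import static org.junit.Assert.assertEquals;

import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

import org.junit.Test;

// Illustrative black-box console check: it only sees what the program prints,
// so GUI properties such as component positions or colors are out of its reach.
public class FactorialConsoleTest {

    @Test
    public void printsFactorialOfFive() {
        ByteArrayOutputStream captured = new ByteArrayOutputStream();
        PrintStream originalOut = System.out;
        System.setOut(new PrintStream(captured));
        try {
            Factorial.main(new String[] {"5"}); // hypothetical student program
        } finally {
            System.setOut(originalOut);
        }
        assertEquals("120", captured.toString().trim());
    }
}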

To overcome this limitation, we tested the LIFT library from the Web-CAT community to assess a GUI-based exercise presented to a small group of students. Of a total of 20 assignments to be evaluated, 4 were graded manually and 16 with the developed AAT. The test development process included a preliminary implementation of the requested assignment, the test design (covering 10 different workflows and a total of 31 tests), the implementation of the test case with the LIFT library (totaling 249 asserts), and a test-filtering step to increase the objectivity of the assignment and the fairness of the evaluation. As a result, a total of 25 tests were eventually considered in the evaluation.
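To convey the kind of assertions such a test case contains, the sketch below uses plain JUnit and direct Swing introspection rather than LIFT's own API, which wraps this sort of component lookup and interaction in a more convenient form; the GreetingWindow class and the expected values are hypothetical stand-ins for a student's submitted window.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;

import java.awt.Color;
import java.awt.Component;
import java.awt.Container;
import javax.swing.JButton;

import org.junit.Test;

// Illustrative GUI assertions; a real LIFT test case would express the same
// checks (component present, appearance, behaviour) through the library's helpers.
public class GreetingWindowGuiTest {

    /** Recursively searches a container for a JButton with the given label. */
    private static JButton findButton(Container root, String label) {
        for (Component c : root.getComponents()) {
            if (c instanceof JButton && label.equals(((JButton) c).getText())) {
                return (JButton) c;
            }
            if (c instanceof Container) {
                JButton found = findButton((Container) c, label);
                if (found != null) {
                    return found;
                }
            }
        }
        return null;
    }

    @Test
    public void greetButtonHasExpectedLookAndBehaviour() {
        GreetingWindow window = new GreetingWindow();
        JButton greet = findButton(window.getContentPane(), "Greet");

        assertNotNull("The window must contain a 'Greet' button", greet);
        assertEquals(Color.LIGHT_GRAY, greet.getBackground());

        greet.doClick(); // simulate the user pressing the button
        // Further asserts would check the resulting state of the GUI.
    }
}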

In the present study, both the manual and the automated grading times are measured, and a comparative analysis of the two grading methods is provided, showing a significant reduction in the time required to evaluate each assignment with the automated method. The study also shows that, once the time needed to implement the tests is taken into account, a net time saving is obtained when automatic grading is applied to a moderate number of students (over 23 in our case).
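The break-even point mentioned above follows from a simple comparison of total grading times; writing t_m for the manual grading time per assignment, t_a for the automated grading time per assignment and T_d for the one-off test development time (symbols introduced here only for illustration), automated grading pays off for n assignments when

n \, t_m > T_d + n \, t_a \quad\Longleftrightarrow\quad n > \frac{T_d}{t_m - t_a},

which in our measurements corresponds to just over 23 assignments.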

Overall, the use of the LIFT library to develop the specific tests that evaluate our GUI-based exercise proved a very convenient solution for our purposes. The assessment presented here may help to identify the most appropriate AAT depending on the number of students and the characteristics of the assignment. Future improvements will focus on integrating LIFT with Moodle so that the tests no longer need to run in an external Eclipse environment.
Keywords:
Programming, Assignment, Automated assessment tool, Graphical user interface.