A SURVEY ON GRADING FORMAT OF AUTOMATED GRADING TOOLS FOR PROGRAMMING ASSIGNMENTS
San Jose State University (UNITED STATES)
About this paper:
Conference name: 15th annual International Conference of Education, Research and Innovation
Dates: 7-9 November, 2022
Location: Seville, Spain
Abstract:
Academia is shifting to virtual platforms due to advancements in computer hardware, better internet connectivity, and the onset of the pandemic. Grading programming assignments on an online platform is laborious, time-consuming, and error-prone. Growing student numbers and the popularity of distance learning only add to the problem. As a result, many institutions use automated tools to grade coding assignments. These tools are either integrated with Learning Management Systems (LMS) or are standalone tools.
Grading a coding assignment requires thorough evaluation. The code must work for all possible scenarios, with optimized time and space complexity. It must be error-free, conform to the problem specification, and solve it with the correct approach. Automated grading tools perform these tasks to grade the coding assignment and provide quick feedback. They have reduced the burden on the instructor/grader and enhanced the student coding experience, but they differ in how the grading of an assignment is specified.
These tools check for programming errors, plagiarism, coding style, code design, output comparison, and structural similarity. They provide a feedback mechanism, software metrics (time and space complexity), support for multiple programming languages, and other special features. Numerous surveys study and evaluate automated grading tools on these features. They compare which features each tool supports, but they do not compare how the instructor specifies the evaluation criteria of the assignments.
This paper studies and compares the different methods that auto-graders provide for instructors to author the grading format of assignments. Many of the core functions, such as specifying an expected output for a given input, are similar across tools, but the mechanisms used to express other requirements, such as hint generation or memory management, may differ. Some tools use a GUI to author the grading format of the assignment, while others use specification files or specially named files containing test cases and other grading parameters. A grading specification for one tool may also be transformed, without loss of expressibility, into the specification of another tool. Using documentation and research papers on the tools, this survey evaluates the syntax and semantics of such specification files.
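To illustrate the core input/output mechanism that most of these tools share, the sketch below shows a hypothetical grader: the test-case specification format, point values, and function names are illustrative assumptions, not taken from any particular tool surveyed here.

```python
import subprocess

# Hypothetical test-case specification: each entry pairs stdin input
# with the expected stdout and a point value, the common denominator
# across many grading formats.
TEST_CASES = [
    {"input": "2 3\n", "expected": "5\n", "points": 5},
    {"input": "10 -4\n", "expected": "6\n", "points": 5},
]

def grade(submission_cmd, test_cases):
    """Run the submission once per test case and award points on an
    exact stdout match; return (earned, total) points."""
    earned = total = 0
    for case in test_cases:
        total += case["points"]
        result = subprocess.run(
            submission_cmd,
            input=case["input"],
            capture_output=True,
            text=True,
            timeout=5,  # a per-test time limit, as many tools enforce
        )
        if result.stdout == case["expected"]:
            earned += case["points"]
    return earned, total
```

An instructor would invoke something like `grade(["python3", "add.py"], TEST_CASES)`; real tools wrap this pattern in spec files or GUIs and layer on style checks, hints, and resource limits.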
The authoring interfaces and specification formats of these tools determine the set of criteria that can be used to evaluate a submission. As a result, some tools grade certain assignments more extensively than others. Knowledge of these formats enables an instructor/grader to choose a tool and edit the grading format to suit their assignment requirements. This survey provides readers with a systematic comparison of the expressibility and compatibility of the available automated grading tools.
Keywords:
Automated assessment, computer science education, programming assignments, student learning experience.