About this paper

Appears in:
Pages: 5691-5700
Publication year: 2013
ISBN: 978-84-616-2661-8
ISSN: 2340-1079

Conference name: 7th International Technology, Education and Development Conference
Dates: 4-5 March, 2013
Location: Valencia, Spain

PROGRAMMING ASSIGNMENTS AUTOMATIC GRADING: REVIEW OF TOOLS AND IMPLEMENTATIONS

J.C. Caiza, J.M. Del Alamo

Universidad Politécnica de Madrid (SPAIN)
Automatic grading of programming assignments is an important topic in academic research. It aims at improving the level of feedback given to students and optimizing the professor’s time. Its importance is more remarkable as the amount and complexity of assignments increases. Several studies have reported the development of software tools to support this process. They usually consider particular deployment scenarios and specific requirements of the interested institution. However, the quantity and diversity of these tools makes it difficult to get a quick and accurate idea of their features.

This paper reviews a broad set of tools for the automatic grading of programming assignments. The review includes a description of every tool selected and its key features. Among others, the key features analyzed include the programming language used to build the tool, the programming languages supported for grading, the criteria applied in the evaluation process, the work mode (as a plugin, as an independent tool, etc.), the logical and deployment architectures, and the communications technology used. Then, implementations and operational results are described with quantitative and qualitative indicators to understand how successful the tools were. Quantitative indicators include the number of courses, students, tasks, and submissions considered in tests, and the acceptance percentage after tests. Qualitative indicators include motivation, support, and skills improvement. A comparative analysis of the tools is presented and, as a result, a set of common gaps is identified. The lack of normalized evaluation criteria for assignments is identified as a key gap in the reviewed tools. Thus, an evaluation metrics framework for grading programming assignments is proposed.

The results indicate that many of the analyzed features depend heavily on the technology infrastructure that currently supports the teaching process. This dependence limits the reuse of the tools in new implementation cases. Another limitation is the inability to support new programming languages, which is constrained by the pace of tool updates. Regarding metrics for the evaluation process, the analyzed tools showed great diversity and little flexibility.

Knowing, before implementing and operating a tool, which implementation features are inherently project-specific and which could be common across projects will be helpful. Likewise, considering how much flexibility can be attained in the evaluation process will help in designing a new tool that is not restricted to particular cases, and in defining the automation level of the evaluation process.
@InProceedings{CAIZA2013PRO,
  author    = {Caiza, J.C. and Del Alamo, J.M.},
  title     = {Programming Assignments Automatic Grading: Review of Tools and Implementations},
  series    = {7th International Technology, Education and Development Conference},
  booktitle = {INTED2013 Proceedings},
  isbn      = {978-84-616-2661-8},
  issn      = {2340-1079},
  publisher = {IATED},
  location  = {Valencia, Spain},
  month     = {4-5 March, 2013},
  year      = {2013},
  pages     = {5691-5700}
}