AUTOMATED SUBMISSION CHECKING: IMPROVING REMOTE LEARNING ECOSYSTEM FOR PROGRAMMING CLASSES
University of Aizu (JAPAN)
About this paper:
Conference name: 15th International Technology, Education and Development Conference
Dates: 8-9 March, 2021
Location: Online Conference
Abstract:
Introduction:
The digitally transformed world demands a matching transformation of teaching and learning practices. This demand became especially clear during the pandemic lockdowns of 2020. Programming classes require many specific activities besides traditional lectures and exercises. They assume a higher degree of interactivity and collaboration than individual assignments do and are typically supported by a variety of tools, such as learning management systems, online meeting tools, version control, and submission assessment systems. Assessment of student contributions may require plagiarism detection tools, which are particularly helpful when direct communication is limited. Academic contributions to distance learning need to be understood from the perspective of daily teacher and student needs and of their potential to make software engineering instruction better.
Objectives:
Our objective is to design and deploy a robust, teacher-centric submission assessment system suitable for practical use in programming classes, built on state-of-the-art source code analysis algorithms and on code similarity visualization mechanisms that facilitate quick evaluation.
Project Scope and Methodology:
Introducing tools that check software code for illegitimate “borrowing” is not trivial from either a methodological or a technical perspective. It is commonly presumed that students must work on their projects independently and avoid collaboration. Such an approach favors developers’ creativity but does not help in training soft skills and the ability to collaborate. Next, engineering education aims to advance students’ abilities to understand and re-use successful practices and standard solutions, so a strict independence requirement may sometimes work against pedagogical goals. Finally, there are few open source submission assessment and plagiarism detection systems designed specifically for online courses. Though there are significant achievements in code similarity checking algorithms, their classroom adoption is not straightforward, and their compatibility with major online submission systems and automated code testing instruments is not guaranteed. We also need convenient visual interfaces that help teachers interpret source code analysis results more easily.
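As an illustration of the kind of visual report we have in mind, below is a minimal sketch, not the system’s actual interface, that renders a pairwise similarity matrix as a heatmap with matplotlib; the student identifiers and similarity scores are invented for the example.

    # A minimal sketch of a similarity heatmap, assuming pairwise scores in
    # [0, 1] have already been produced by some similarity checker.
    # All identifiers and numbers below are hypothetical.
    import matplotlib.pyplot as plt
    import numpy as np

    students = ["s01", "s02", "s03", "s04"]   # hypothetical submission IDs
    similarity = np.array([                   # hypothetical pairwise scores
        [1.00, 0.12, 0.85, 0.20],
        [0.12, 1.00, 0.15, 0.10],
        [0.85, 0.15, 1.00, 0.18],
        [0.20, 0.10, 0.18, 1.00],
    ])

    fig, ax = plt.subplots()
    im = ax.imshow(similarity, cmap="Reds", vmin=0.0, vmax=1.0)
    ax.set_xticks(range(len(students)))
    ax.set_xticklabels(students)
    ax.set_yticks(range(len(students)))
    ax.set_yticklabels(students)
    fig.colorbar(im, ax=ax, label="similarity")
    ax.set_title("Pairwise submission similarity")
    plt.show()

A teacher scanning such a matrix can spot the one suspicious pair (here s01 and s03) at a glance and inspect only those two submissions side by side.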
Expected Outcome:
Since the practical goal of the project is to automate teachers’ daily routine tasks, we believe that a convenient submission assessment instrument can noticeably contribute to the quality of courses offered at educational institutions, especially where large groups of students are involved. Teachers will have more time to focus on course content, and students will have better chances of fair grading. Our project mostly relies on existing methods reported in the literature and available as open source solutions. Various software similarity detection algorithms are well known, but they implement different approaches (fingerprinting, string matching, tree matching) and may provide different degrees of reliability in a classroom setting. Most existing systems cannot be directly integrated into online course management systems like Moodle. We need to evaluate the suitability of known algorithms and integrate them into a conventional submission grading interface featuring a visual reporting tool for easier identification of potentially plagiarized submissions that require more thorough analysis.
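To make the fingerprinting approach mentioned above concrete, below is a minimal sketch of the winnowing scheme popularized by tools such as MOSS; the k-gram length and window size are illustrative choices, not the parameters of our system, and input normalization (comment stripping, identifier renaming) is assumed to happen beforehand.

    # A minimal sketch of winnowing fingerprinting; k and w are illustrative.
    import hashlib

    def kgram_hashes(text: str, k: int) -> list[int]:
        # Hash every overlapping k-gram of the (already normalized) text.
        return [int(hashlib.sha1(text[i:i + k].encode()).hexdigest(), 16)
                for i in range(len(text) - k + 1)]

    def winnow(hashes: list[int], w: int) -> set[int]:
        # Keep the minimum hash of each sliding window of size w, so any
        # sufficiently long shared passage yields shared fingerprints.
        return {min(hashes[i:i + w]) for i in range(len(hashes) - w + 1)}

    def similarity(a: str, b: str, k: int = 5, w: int = 4) -> float:
        # Jaccard similarity of the two fingerprint sets.
        fa = winnow(kgram_hashes(a, k), w)
        fb = winnow(kgram_hashes(b, k), w)
        return len(fa & fb) / len(fa | fb) if fa | fb else 0.0

    # Submissions sharing long identical runs keep many common
    # fingerprints, while an unrelated pair shares almost none.
    score = similarity("for i in range(n): total += grades[i]",
                       "for i in range(n): s += grades[i]")
    print(f"similarity: {score:.2f}")

String matching and tree matching variants replace the k-gram hashes with token sequences or parse trees, respectively, but the reporting side stays the same: a single score per pair of submissions.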
Acknowledgement:
The project has been supported by University of Aizu research funding.
Keywords:
Distance learning, programming, automated assessment.