A SOFTWARE MODEL COMPARISON APPROACH DESIGNED FOR SELF-ASSESSMENT TUTORIALS
Technische Hochschule Mittelhessen (GERMANY)
About this paper:
Appears in: INTED2023 Proceedings
Publication year: 2023
Pages: 7581-7590
ISBN: 978-84-09-49026-4
ISSN: 2340-1079
doi: 10.21125/inted.2023.2069
Conference name: 17th International Technology, Education and Development Conference
Dates: 6-8 March, 2023
Location: Valencia, Spain
Abstract:
The design of software models is an essential part of almost all core curricula of computer science and software engineering study programs, as well as an essential skill in industrial software engineering projects. In a model-based software development approach (e.g., using the Unified Modeling Language, UML), the structural and behavioral properties of the software system under development are modeled with different software models (e.g., class diagrams, statecharts, sequence diagrams). Moreover, a model-driven approach enables direct program-code generation from such software models using a code generator. Consequently, modeling skills are becoming as important as programming skills.

Both software model design and program formulation must be practiced by the students, which is why most courses have mandatory exercises. From a teacher's perspective, the results of these exercises (i.e., programs and software models) unfortunately do not follow a canonical form: different syntactical and structural solutions may correctly fulfill a given task. Hence, all potentially correct solutions (the students' solutions) must be compared manually with a sample solution (the teacher's solution). This requires a high correction effort for the teaching staff. In turn, students do not benefit from merely receiving a sample solution that differs from their own software model or program. A sustainable learning approach should provide individual corrections of individual solutions so that students can learn from their modeling errors and mistakes.

To provide individual reports of the similarities and differences of students' solutions in an automated way, model difference tools might be a suitable option. Unfortunately, these tools focus on software models that share a substantial proportion of identically named model elements, which is often only the case in model evolution scenarios. Tests with such tools show that the difference reports for models from different sources (teacher and student solutions) are not useful for modeling novices, since the differences are reported in too much detail. For example, differently named classes (whose labels are used synonymously) will not be matched, which results in numerous reported differences.

Based on these insights and the described requirements, we developed a software model comparison approach that provides feedback particularly designed for modeling beginners. The approach uses three kinds of comparison criteria (illustrated by the sketch below):
a) Identifiers,
b) Structural similarities, and
c) Syntactical equivalences
to match more than only identically named model elements.
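As a rough illustration of how such criteria could be combined into a pairwise element match, the following Python sketch compares two statechart states. The State structure, the similarity functions, and the weights are assumptions made for illustration only and are not the tool's actual implementation.

# Illustrative sketch of multi-criteria element matching (names, weights, and
# criteria details are assumptions, not the paper's exact algorithm).
from dataclasses import dataclass, field
from difflib import SequenceMatcher

@dataclass
class State:
    name: str
    outgoing: list = field(default_factory=list)  # labels of outgoing transitions
    incoming: list = field(default_factory=list)  # labels of incoming transitions

def identifier_similarity(a: State, b: State) -> float:
    """Criterion a): fuzzy identifier similarity instead of exact name equality."""
    return SequenceMatcher(None, a.name.lower(), b.name.lower()).ratio()

def structural_similarity(a: State, b: State) -> float:
    """Criterion b): compare the local structure (in-/out-degree) of both states."""
    def deg_sim(x, y):
        return 1.0 if x == y == 0 else min(x, y) / max(x, y, 1)
    return 0.5 * deg_sim(len(a.outgoing), len(b.outgoing)) + \
           0.5 * deg_sim(len(a.incoming), len(b.incoming))

def syntactic_similarity(a: State, b: State) -> float:
    """Criterion c): share of syntactically equivalent (normalized) transition labels."""
    la = {t.strip().lower() for t in a.outgoing + a.incoming}
    lb = {t.strip().lower() for t in b.outgoing + b.incoming}
    if not la and not lb:
        return 1.0
    return len(la & lb) / max(len(la | lb), 1)

def match_score(a: State, b: State) -> float:
    """Weighted combination of the three criteria (weights are illustrative)."""
    return 0.4 * identifier_similarity(a, b) + \
           0.3 * structural_similarity(a, b) + \
           0.3 * syntactic_similarity(a, b)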

The implemented tool provides a difference report that contains useful information on similar and differing structures of the compared solutions. Additionally, a calculated matching score quantifies the similarity between the software models. Since the whole approach is automated, the modelers (students) can immediately revise their solution based on the feedback from the prior comparison. Hence, the tool is also suitable for self-assessment.
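Building on the pairwise match_score from the previous sketch, an overall matching score between a student model and a sample solution could be derived as in the following illustrative Python fragment; the greedy pairing strategy and the threshold are assumptions, not the tool's actual procedure.

# Illustrative sketch of an overall model similarity score (greedy matching and
# the reporting threshold are assumptions, not the tool's actual procedure).
def model_similarity(student_states, teacher_states, threshold=0.6):
    """Greedily pair elements and average their pairwise scores; unmatched
    elements count as differences and lower the overall score."""
    remaining = list(teacher_states)
    total, pairs = 0.0, []
    for s in student_states:
        if not remaining:
            break
        best = max(remaining, key=lambda t: match_score(s, t))
        score = match_score(s, best)
        if score >= threshold:
            remaining.remove(best)
            pairs.append((s.name, best.name, score))
            total += score
    size = max(len(student_states), len(teacher_states), 1)
    return total / size, pairs  # similarity in [0, 1] plus the matched pairs

In a self-assessment setting, such matched pairs would feed the difference report, while the score gives the student an at-a-glance indication of how close the model is to the sample solution.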

We evaluated the approach and the implemented tool with a set of test models (statecharts only) that contains different modeling issues.
Keywords:
Software models, model difference tools, automatic model evaluation, self-assessment, software engineering teaching.