ASSESSMENT OF THE USE OF TECHNICAL SOFTWARE BY THE STUDENTS IN THE CONTEXT OF MECHANICAL ENGINEERING

Within the framework of the European Higher Education Area, university teaching has focused in recent years on adapting Master's and Bachelor's degrees to the demands of the professional sector. To this end, the training and development of the generic and specific skills recommended for students' incorporation into the job market have been priority objectives in the design of study plans. However, there is no consensus on the methodologies for evaluating these skills, especially regarding how to separate the acquisition and/or improvement of the skills from the specific knowledge and abilities of each subject. Owing to time constraints, teaching staff seek methodologies that do not involve additional tests for the evaluation of competences, since such tests would multiply the number of assessments to an unrealistic level, with the corresponding grading burden for the professors.

To make a contribution in this regard, this work presents an approach for evaluating the ability to handle specific software applied to problems in the area of mechanical engineering. It proposes a methodology for acquiring the required skills and an evaluation system to grade the degree of expertise in handling the software. At our University, this skill is called the Specific Instrumental Skill, which measures the students' ability to use engineering tools, in this case software for running structural numerical simulations, such as ANSYS®.

The proposed methodology is based on prior training. This training consists of weekly two-hour sessions in which the students, in groups of 2 or 3, solve a set of labs with the help of the professor. The students do not need to deliver any report to the professor, since the objective of the sessions is training. The pressure on the students is therefore low, and the professor avoids having to mark a high number of student reports, allowing them to focus solely on the students' learning process rather than on evaluation during the training sessions. The labs increase in difficulty over the course of the sessions. The last session consists of an exam in which the students must solve a lab similar to those already solved during the training sessions. This time, each student works individually, without the help of the professor and within a time limit.

Finally, the performance of the methodology is checked by a cross-test applied to the same students, who also belong to the group of students of another subject (the control subject) in which the same tool (ANSYS®) is used. The collected data showed that the students following this methodology acquire sufficient expertise in handling the software, and that their skills outperform those of the students of the control subject who did not follow the proposed methodology.

In conclusion, the methodology proposed in this work guarantees a good level of expertise for the students, as shown by the results. Since the results of the final lab exam and the results of the cross-test coincide, the final exam can be interpreted as a good indicator of the degree of expertise in the use of the software. Additionally, the proposed methodology reduces the workload for the professor, as it only requires assessing one report per student (instead of several reports per group of 2 or 3 students in each session) while ensuring the authorship of the report.