USING RUBRICS TO ASSURE PEDAGOGICAL QUALITY: LESSONS FROM FACULTY EVALUATION OF CASE ACTIVITIES IN DIGITAL SKILLS TRAINING
1 Universidad Politécnica de Madrid (SPAIN)
2 Hochschule Heilbronn (GERMANY)
About this paper:
Conference name: 20th International Technology, Education and Development Conference
Dates: 2-4 March, 2026
Location: Valencia, Spain
Abstract:
The introduction of rubrics for evaluating case activities aims to improve consistency, transparency and formative value in competency-based digital training. While evaluation tools are common in the literature, little evidence exists regarding their ability to generate structured feedback that improves case design for varied learner profiles. This paper reports on the experience of professors involved in the UPM-Accenture University-Industry Chair on Digital Skills, who applied a rubric designed to evaluate the pedagogical quality of case activities integrated into online courses. Data come from the evaluators’ reports, which included both rubric scores across nine dimensions (relevance of case context; activation of transversal competences; transferability potential; stimulation of strategic thinking; progressivity and adaptability; alignment with learning objectives; formulation of reflective questions; quality of exploration guidance; complexity of the problem) and recommendations for adapting cases. The analysis examines how faculty reflections differ depending on professional profile (e.g. engineers, lawyers, physicians) and level of prior knowledge (novice to expert). It also identifies which dimensions of cases are perceived as more difficult to adapt and which modifications are suggested to maintain pedagogical effectiveness across contexts. The paper concludes by discussing how rubric-based evaluation can serve not only for assessment but also as a tool for continuous improvement in digital skills training, supporting inclusive and scalable implementations.
Keywords:
Rubrics, faculty assessment, case-activities, digital skills, quality assurance.