CAN DIGITAL AND COMPUTATIONAL COMPETENCE BE ASSESSED? A SCOPING ANALYSIS ON THE RELATIONSHIP BETWEEN STUDENT DIGITAL COMPETENCE AND EXPLANATION OF ROBOT BEHAVIOUR
INDIRE (ITALY)
About this paper:
Conference name: 17th annual International Conference of Education, Research and Innovation
Dates: 11-13 November, 2024
Location: Seville, Spain
Abstract:
Digital Competence (DC) encompasses, and is often used interchangeably with, a wide range of skills, including digital literacy, digital citizenship, and computational thinking, all of which are essential for navigating today's digital environment. Despite ongoing efforts to define and assess DC through frameworks such as DigComp, existing standardised assessment tools often address only specific DC sub-domains and/or target populations, leaving gaps in comprehensive evaluation.
Similarly, existing reviews have failed to grasp the full scope of DC because of its fragmented terminology, which renders keyword-based searches ineffective. Moreover, most reviews have focused on the research studies themselves rather than on the assessment instruments the authors employed, which are often not reproduced in their entirety, leading to an incomplete understanding of the assessment landscape.
As part of a broader research project on "Human Explanation of Robotic Behaviour" (HERB), this contribution develops the foundations to examine the possible relationship between students' digital and computational competence and their explanation of robot behaviour.
We address these issues by conducting a scoping review of standardised assessment tools published over the past 40 years, drawing on the comprehensive repository of the Institute for Advancing Computing Education (IACE) and supplementing it with additional sources identified through a snowball-based search. This dual approach ensures a thorough and representative review, capturing the full spectrum of available assessment tools for DC. We analysed 160 instruments, recording their public availability, target demographic, average completion time, number and type of items, and assessed DC domain, as well as evidence of validity and reliability, testing country, and citations in relevant policy documents. Among our results, we note that only about half (56.4%) of the identified instruments are available for free, and that these were largely developed and deployed in the USA (42.7%). Moreover, evidence of reliability and validity is reported for only 73.7% and 67.1% of all available instruments, respectively.
Through this approach, the review both identifies gaps in current assessment tools and proposes pathways towards more effective evaluation strategies. By leveraging insights from educational robotics and the concept of explainability, this study enriches the assessment and development of digital skills and computational thinking within educational contexts. This comprehensive analysis is crucial for advancing both understanding and practice in digital competence assessment.
Keywords:
Robot behaviour, computational thinking, digital competence, assessment.