AI RESILIENCE IN HIGHER EDUCATION: DEVELOPING A DEFINITION AS A BASIS FOR TEACHING AND ASSESSMENT
Dresden University of Technology (GERMANY)
About this paper:
Conference name: 20th International Technology, Education and Development Conference
Dates: 2-4 March, 2026
Location: Valencia, Spain
Abstract:
AI* resilience is emerging as a critical sub-dimension of digital resilience in higher education, yet its definition, pedagogical implications, and possibilities for assessment remain largely unexplored. While digital resilience frameworks such as DigComp 2.2 emphasise critical thinking, problem-solving, and responsible technology use, the rapid acceleration and opacity of generative AI create qualitatively new forms of cognitive strain, epistemic uncertainty, and vulnerability. In this sense, AI resilience represents “digital resilience on speed”: without the capacity to question, verify, and resist seductive but flawed AI outputs, learners risk losing agency, adopting false information, and becoming dependent on systems they cannot fully understand. This raises urgent questions for universities regarding students’ ability to make autonomous judgements in AI-rich learning environments.
This contribution proposes a preliminary conceptualisation of AI resilience as a competence that enables students to remain critical, self-determined, and ethically responsible when interacting with AI tools. It argues that AI resilience must be delineated as a distinct construct to enable the formulation of targeted learning outcomes and to support constructive alignment in course and curriculum design. Drawing on DigComp 2.2, current AI literacy research, and emerging pedagogical practice, the paper outlines a set of foundational components:
- dealing with ambiguity and black-box mechanisms
- assessing uncertainty in seemingly plausible outputs
- recognising hallucinations
- critically reflecting on one’s own reasoning alongside AI suggestions
- maintaining personal agency
- understanding bias and fairness issues
- choosing tools purposefully
- deciding when not to use AI
- and assuming responsibility for outputs attributed to oneself, including transparency statements
While these components are theoretically motivated, their behavioural manifestations in student work remain empirically untested. The paper therefore discusses the need to develop a preliminary model of observable indicators that can inform teaching strategies and future assessment approaches, while acknowledging the methodological and ethical challenges of measuring AI resilience.
For higher education, two challenges emerge:
(1) to design learning environments and assessment formats that actively foster AI resilience rather than bypass it, and
(2) to explore how educators themselves can remain resilient amidst accelerating technological change.
To move this discourse forward, the contribution concludes with an invitation to the conference plenary to critically expand, challenge, and refine the proposed conceptualisation. Participants are encouraged to introduce additional discipline-specific, institutional, or pedagogical aspects that may be essential for a robust and context-sensitive understanding of AI resilience in higher education. This collective exploration aims to enrich and sharpen the construct, ensuring that its development reflects the diverse realities and needs of university teaching and learning.
*This text is structured, shortened, and translated from German using DeepL and ChatGPT.
Keywords:
AI, AI Resilience, Higher Education.