FROM AI USE TO MEDIA AWARENESS: ORGANISATIONAL CULTURE AND THE ADOPTION OF AN AI‑BASED TUTORING SYSTEM
IU Internationale Hochschule (GERMANY)
About this paper:
Conference name: 20th International Technology, Education and Development Conference
Dates: 2-4 March, 2026
Location: Valencia, Spain
Abstract:
This paper examines how AI‑based tutoring systems reshape media awareness and the handling of (mis)information in higher education. It focuses on the implementation of the chatbot “Syntea” at a distance‑learning university. Syntea answers study‑related questions on the basis of course materials and is quality‑assured by instructors in a human‑in‑the‑loop approach (Dwivedi et al., 2023; Kasneci et al., 2023). Theoretically, the study draws on Schein’s model of organizational culture (artifacts, espoused values, basic assumptions; Schein, 2010) and on concepts of media and AI literacy such as critical information appraisal, bias awareness and transparency of algorithmic systems (Gimpel et al., 2023).
Three research questions guide the study:
(1) Which basic assumptions about learning, technology and ethics shape media‑related understandings of AI in higher education?
(2) Which values structure perceived opportunities and risks of generative AI for media awareness, including support, dependency and misinformation?
(3) Which observable usage practices (artifacts) emerge and how do they reflect different levels of media and AI literacy?
A case‑based mixed‑methods design with convergent parallel strands was applied (Döring & Bortz, 2016). A semi‑standardized online questionnaire was administered to 282 participants (234 students, 48 instructors; mainly distance‑learning social‑science programmes). Closed scales and open questions were developed along Schein’s three cultural levels. Quantitative analysis comprised descriptive statistics and role comparisons; qualitative data were examined using thematic content analysis with combined deductive–inductive coding (Steiner & Bensch, 2015).
Findings show a pronounced fairness‑ and safety‑oriented set of basic assumptions: preventing discrimination, data protection, reliability and legal compliance are rated as “very important” for AI systems by a clear majority. At the same time, 84% agree that AI cannot replace the human component in education, framing learning as a fundamentally social process. On the value level, learner support, efficiency gains and creative stimulation are seen as the main benefits, whereas loss of human interaction, technological dependency and insufficient media and information literacy are perceived as key risks. Many respondents highlight the danger of misinformation and the need to verify AI outputs.
At the artifact level, AI use has rapidly normalised: around half of respondents use AI at least weekly, mainly for clarifying questions, text work, research and exam preparation. General‑purpose large language models are evaluated more positively than the institutional system. While a considerable share regularly checks AI‑generated information, many report encounters with AI‑supported conspiracy narratives, indicating heterogeneous levels of media awareness.
On this basis, a “culture‑fit model” is proposed: AI artifacts are experienced as legitimate and conducive to learning when they:
(a) align with shared ethical basic assumptions,
(b) respect values such as support, autonomy and interaction quality, and
(c) are embedded in practices that demand critical information appraisal, transparency and dialogic reflection.
Keywords:
AI‑based tutoring systems, media awareness, organisational culture, higher education.