BACK TO PAPER AND PEN: EVALUATING TRANSVERSAL COMPETENCES IN THE AGE OF GENERATIVE AI THROUGH MOBILELANTERN-ASSISTED TURN MANAGEMENT IN UNIVERSITY CLASSROOMS
1 Universidad Politécnica de Madrid (SPAIN)
2 Universitat Pompeu Fabra (SPAIN)
About this paper:
Conference name: 20th International Technology, Education and Development Conference
Dates: 2-4 March, 2026
Location: Valencia, Spain
Abstract:
The widespread adoption of generative AI has profoundly transformed students’ approaches to academic work, particularly in tasks involving documentation, synthesis, and written reflection. As a consequence, university instructors face increasing difficulty in assessing transversal competences, autonomy, and reflective thinking—especially in technology-oriented degrees where students are highly proficient in exploiting AI tools. This study presents an instructional experiment conducted at Universidad Politécnica de Madrid (UPM) that explores alternative evaluation strategies combining traditional formats (paper, pen, manual drafting) with controlled or guided uses of AI through an experimental mobile tool named MobileLantern.
The experiment was implemented in the Computer-Based Systems course, where students develop IoT projects in groups. To assess the transversal competence CT5—Planning and Organisation—students were required to produce two versions of the same assignment: an individual handwritten document (1 page) and a subsequent digital synthesis in any format (text, video, slides, etc.). The aim was to compare the depth, originality, and reflective quality of students’ work under different cognitive and technological conditions.
A controlled experimental design separated students into two classrooms according to their device type (Android vs. iOS). The Android group used MobileLantern, an app designed to orchestrate turn-taking, mediate access to AI (through constrained ChatGPT or LLaMA models), and support equitable question management during the activity. Turn prioritisation was guided by MobileLantern and visual cues (colour-coded lanterns). The iOS group worked without MobileLantern, relying either on freely chosen AI tools or on no AI at all, and managed questioning through hand-raising or flashlight–cup policies defined by the students themselves. Both groups performed the handwritten task first, followed by the digital one; each task lasted 60 minutes and concluded with a post-test questionnaire in Moodle.
The experiment investigates three dimensions:
(1) the quality and depth of students’ handwritten vs. digital outputs;
(2) behavioural and cognitive dependence on AI for unfamiliar tasks; and
(3) the efficiency and perceived fairness of different turn-management strategies (MobileLantern vs. student-defined protocols).
Data collection includes qualitative feedback, interaction logs, number of questions asked to instructors, turn-taking metrics, and post-activity perceptions regarding AI reliance and reflective workload.
Preliminary observations indicate that handwritten work encourages deeper individual reasoning, reduces over-reliance on generative AI, and helps students articulate more authentic personal viewpoints about their role in the project team. Meanwhile, controlled AI use via MobileLantern provides a structured way to integrate AI without overshadowing students’ cognitive effort. This study contributes empirical evidence toward designing fair, meaningful, and AI-resilient evaluation practices in higher education, highlighting the relevance of reintroducing “paper and pen” approaches to preserve reflective learning in a context saturated by generative AI.
Keywords:
Generative AI, Assessment Strategies, MobileLantern, Transversal Competences.