“PROVOTYPES”: EXPLORING HOW TEACHERS VALUE AI IN EDUCATION
University of Gothenburg (SWEDEN)
About this paper:
Conference name: 18th International Technology, Education and Development Conference
Dates: 4-6 March, 2024
Location: Valencia, Spain
Abstract:
Technologies with functionality classed as artificial intelligence (AI) are increasingly used in education. However, in the development of educational technology, the specific educational practices in which the technology will be used are often not fully considered (Börjesson et al., 2019). AI appears not only as built-in functionality for achieving particular educational goals, for example services using language technology to support language learning (e.g. Ericsson et al., 2023) or digital textbooks adapting to students’ current ability in maths to provide suitable tasks (Utterberg Modén et al., 2021); there is also growing concern about the application of AI in education. This concern creates debate and drives legislation that treats AI in education as high risk.
The research presented here is oriented towards teachers, who already take on responsibility for fairness in education. We report on an ongoing project that aims to increase the agency of secondary school teachers concerning the design and use of AI-based systems in education in order to ensure fairness. Specifically, this paper reports on the use of a method called “provotypes” in a project where teachers, students, principals and educational developers worked with the topic of fairness in the use of AI. It provides insights both on how the method was used and on the outcomes of its use, i.e. methodological results and results concerning values on AI in education.
The involvement of practitioners is important, if not imperative, to the successful development of new technological solutions for the workplace (Fischer et al., 2004; Bødker et al., 2009). This involvement is claimed to support both the quality of design and the later implementation of new technologies (Sanders & Stappers, 2008). Active involvement of relevant practitioners and stakeholders is central to ensuring that new technologies do not endanger the freedom, values, and rights of practitioners (Bødker & Kyng, 2018). Work on tensions arising from contradictory values within stakeholder perspectives has primarily focused on developing strategies to achieve consensus among stakeholders or stakeholder groups (Grönvall et al., 2016; Jonas & Hanrahan, 2022). Rather than being avoided, these tensions can be used as a resource by creating provocative prototypes (provotypes) that represent them. In this way, provotypes serve as tools for stimulating creativity and encouraging new ideas by challenging norms and values while designing for future practices (Mogensen, 1994).
The paper is guided by the research question: how do school stakeholders such as teachers, students, principals and school developers position themselves when confronted with provotypes employing data-driven AI in educational practices?
Tentatively, our results show varying positions, particularly between adults and students. Surprisingly, the adults wanted to implement student monitoring and data-driven support, despite expressed concerns about algorithmic unfairness. This may stem from the clash of technological solutions with conflicting educational values. In contrast, students expressed more concern regarding increased datafication and algorithmic decision-making in their educational practices. Provotypes proved to be a viable method for eliciting practitioners’ values, allowing and provoking students as well as teachers to make their positions towards AI in education explicit, hence providing valuable input for future design work in this area.
Keywords:
Artificial intelligence, ethics, fairness, teachers, design.