AI CONTEXT COUNTS IN EDUCATION
1 University of Toronto (CANADA)
2 Southern Alberta Institute of Technology (CANADA)
About this paper:
Conference name: 17th annual International Conference of Education, Research and Innovation
Dates: 11-13 November, 2024
Location: Seville, Spain
Abstract:
How do we teach Artificial Intelligence (AI) readiness, in a contextualized way, to support more detailed learner understandings of AI systems and AI literacy? With the nature and meaning of literacy changing so rapidly in this digital age, it is essential to recognize context, including the differing social contexts of our AI-literate practices. In this time of unprecedented technological transience, where AI systems are rapidly changing and expanding while our educational institutions remain largely embedded in traditional ways of teaching, how do we prepare our learners for the AI age?
To begin, we recognize the deictic nature of the term Artificial Intelligence and ask our students to view AI systems in their context. There is no single agreed-upon definition of AI: it can be viewed as a specific technology (technocentric), as the next step in digital transformation, as a field of scientific research, and/or as an autonomous entity typified in science fiction. The learners in our study reviewed the OECD's technical definition of AI Systems and context, and the IEEE definition of data and AI literacy.
We gathered data from graduate education students in a newly developed, condensed university course on AI Ethics in Education delivered over six weeks. We reviewed learners' understandings of AI, AI systems, AI literacy, and AI in context. We also examined the information students identified as necessary to make informed and responsible decisions about the use of AI systems for learning in different settings. The instructor introduced an example contextual tool, Model Cards, which shows promise as a way to increase transparency among users, developers and stakeholders through information on a system's design, features, intended use, caveats and ethical considerations. A common reflection from students applying this strategy to the AI systems they reviewed was how useful tools like Model Cards could be in educational settings. Students in the course identified advantages to having access to information on an AI system's possible biases, training data, recommended uses, and uses of collected data that might affect user privacy. Such information would be useful for determining risks in specific educational contexts, fostering trust, and creating a sense of accountability, explainability and/or interpretability. Significant challenges with integrating AI systems into any educational context must be addressed because of privacy and accountability laws, regulations governing the teaching profession, and the societal protection of children, youth and students.
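As a rough illustration of the kind of structured information a Model Card surfaces for educators, the sketch below (in Python) mirrors the categories students found useful: intended use, training data, known biases, caveats, ethical considerations, and data/privacy practices. The field names and the example system are hypothetical and are not drawn from any specific Model Card implementation.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of the information a Model Card can make visible to
# educators evaluating an AI system; field names are illustrative only.
@dataclass
class ModelCard:
    model_name: str
    intended_use: str                      # recommended educational uses
    training_data_summary: str             # what data the system was trained on
    known_biases: List[str] = field(default_factory=list)
    caveats: List[str] = field(default_factory=list)
    ethical_considerations: List[str] = field(default_factory=list)
    data_collection_and_privacy: str = ""  # how user data is collected and used

# Example: a teacher reviewing a hypothetical writing-feedback tool.
card = ModelCard(
    model_name="Example Writing Feedback Assistant",
    intended_use="Formative feedback on student drafts, reviewed by a teacher",
    training_data_summary="Publicly available English-language essays",
    known_biases=["May favour formal registers of written English"],
    caveats=["Not intended for high-stakes grading"],
    ethical_considerations=["Student text may be retained for model improvement"],
    data_collection_and_privacy="Submissions stored for 30 days; opt-out available",
)
print(card.intended_use)
```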
We identified 29 conversation notes that specifically addressed the concept of Model Cards in learners' contexts, and reviewed 82,000 words used in weekly reflection documents. Within the reflection documents, the word ethics appeared 229 times, and it appeared in 353 conversation notes throughout the course. The word context appeared 233 times in the reflection documents and in 323 conversation notes over the six weeks. Finally, the word bias appeared 169 times in the reflection documents and in 187 conversation notes overall. One education learner noted that it is critical to have model card information. The results suggest that more time and experience are needed to apply complex AI concepts to learners' contexts. Concerns to be explored further include trust, explainability, interpretability, and accountability for any resulting harms.
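A keyword tally of this kind can be reproduced with a few lines of code. The minimal sketch below assumes plain-text exports of the reflection documents; the file names and the term list are illustrative assumptions, not part of the study's actual tooling.

```python
import re
from collections import Counter

# Terms of interest, as assumed for this sketch.
TARGET_TERMS = {"ethics", "context", "bias"}

def count_terms(text: str) -> Counter:
    """Count occurrences of the target terms in a body of text."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w in TARGET_TERMS)

# Aggregate counts across weekly reflection documents (assumed file paths).
reflections = ["week1_reflections.txt", "week2_reflections.txt"]
totals = Counter()
for path in reflections:
    with open(path, encoding="utf-8") as f:
        totals += count_terms(f.read())
print(totals)  # per-term counts across all documents
```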
Keywords: AI readiness, AI literacy, Model Cards, AI Systems, AI in Education, AI ethics.