EXPLORING LARGE LANGUAGE MODELS FOR THE EDUCATION OF INDIVIDUALS WITH COGNITIVE IMPAIRMENTS
Politecnico di Milano (ITALY)
About this paper:
Appears in: INTED2024 Proceedings
Publication year: 2024
Pages: 4479-4487
ISBN: 978-84-09-59215-9
ISSN: 2340-1079
doi: 10.21125/inted.2024.1161
Conference name: 18th International Technology, Education and Development Conference
Dates: 4-6 March, 2024
Location: Valencia, Spain
Abstract:
Large Language Models (LLMs) are a type of Generative AI, extensively trained on large sets of data to comprehend and generate human-like text. They can answer questions, provide information, and engage in text-based conversations, holding immense potential across various domains, including education. However, there is a growing concern that LLMs may contribute to a new digital divide, separating those proficient in using these technologies from those who are not, especially individuals with Cognitive Disorders (CD). CD is characterized by impairments in abilities such as memory, perception, information processing, problem solving, language, and social cognition, and affects around 8% of the world's population. Can LLMs be a source of support for the education of these individuals?

Our investigation focuses on a specific demographic: adolescents and young adults living with CD of moderate severity, whom we refer to as "our target group". Our research addresses two fundamental questions:
1. Can our target group effectively formulate questions to LLMs in a manner that enables them to receive pertinent support for their educational endeavors?
2. Is it possible for developers to adapt LLM behavior in a manner that renders it suitable for addressing the educational needs of our target group?

It is widely recognized that the effectiveness of an LLM hinges on input text that provides clear, unambiguous, and "complete" instructions. Formulating such input is the primary and most critical obstacle to the use of LLMs by individuals with cognitive impairments. Our two focus group sessions, each involving 5 members of our target group (aged 15-25) who asked an LLM (ChatGPT) to help them with assigned educational tasks, confirmed this issue. While participants could express their intentions to ChatGPT, specifying how results should be presented proved difficult. LLM responses, though semantically correct, were often overly complex and long, leading to user confusion, frustration, and dissatisfaction.

To address this challenge and answer our second research question, we engaged in prompt engineering and prompt tuning. We identified task-independent instructions ("prompts") to guide LLMs, incorporating communication guidelines tailored to individuals with cognitive disorders. These guidelines emphasized using simple and concise language, avoiding filler words, refraining from figurative language, and structuring sentences straightforwardly.
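A task-independent prompt of this kind can be sketched as follows. The wording below is illustrative only; the exact guideline phrasing and the `build_system_prompt` helper are not taken from the paper and are assumptions for demonstration purposes.

```python
# Illustrative task-independent prompt encoding the communication
# guidelines described above (simple language, no fillers, no figurative
# language, straightforward sentence structure). Wording is hypothetical.
GUIDELINES = [
    "Use simple and concise language.",
    "Avoid filler words.",
    "Do not use figurative language, idioms, or metaphors.",
    "Structure every sentence in a straightforward way.",
    "Keep the answer short.",
]

def build_system_prompt(guidelines=GUIDELINES):
    """Assemble the guidelines into a single system-prompt string."""
    header = ("You are helping a student with a cognitive disorder. "
              "Follow these rules in every answer:")
    rules = "\n".join(f"- {g}" for g in guidelines)
    return f"{header}\n{rules}"
```

Such a string can then be supplied as the system message (or prepended to the user's question) for any of the evaluated models.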

Multiple LLMs, including ChatGPT 3.5, ChatGPT 4, Bard, Llama 2, and PaLM 2, were evaluated against these guidelines, both with and without the prompts, and the prompts were refined iteratively based on these evaluations. We complemented the prompt design with a set of task-specific prompt components and evaluated their benefits in subsequent focus group sessions with the same participants, who exhibited improved comprehension of LLM outputs and increased satisfaction.
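Evaluating a model response against guidelines of this kind can be partly automated with simple readability heuristics. The sketch below is not the evaluation procedure used in the study; the thresholds and the sentence-splitting heuristic are assumptions, shown only to make the idea of guideline-based checking concrete.

```python
import re

def avg_sentence_length(text):
    """Average words per sentence; crude split on ., !, ? boundaries."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    return sum(len(s.split()) for s in sentences) / len(sentences)

def meets_guidelines(text, max_words_per_sentence=12, max_sentences=5):
    """Hypothetical proxy check: flag responses that are too long or
    too complex (long sentences) for the target group."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return (len(sentences) <= max_sentences
            and avg_sentence_length(text) <= max_words_per_sentence)
```

Automated checks like these can only approximate the guidelines; judgments about figurative language or filler words still require human review, which is why focus group sessions remain essential.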

The paper will delve deeper into these research questions. It will provide a detailed exposition of the methods and procedures adopted in our investigation, together with a description of a novel software architecture designed to implement the prompting approach, contributing to a better understanding of LLMs as powerful educational tools for individuals with cognitive disorders and supporting inclusivity in the ever-evolving landscape of AI technology.
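One plausible shape for an architecture implementing the prompting approach is a thin proxy between the user and the model: it injects the task-independent guidelines and any task-specific component before forwarding the question. The abstract does not describe the actual architecture, so the `PromptingProxy` class and its interface below are entirely hypothetical.

```python
class PromptingProxy:
    """Hypothetical middleware: combines a task-independent prompt with
    optional task-specific components, then forwards the composed text to
    an injected LLM backend (any callable str -> str), keeping the
    architecture model-agnostic."""

    def __init__(self, llm, base_prompt, task_components=None):
        self.llm = llm                      # callable: prompt text -> reply
        self.base_prompt = base_prompt      # task-independent guidelines
        self.components = task_components or {}

    def ask(self, question, task=None):
        parts = [self.base_prompt]
        if task in self.components:
            parts.append(self.components[task])
        parts.append(f"Question: {question}")
        return self.llm("\n\n".join(parts))
```

Injecting the backend as a callable mirrors the multi-model evaluation described above: the same prompting layer can wrap ChatGPT, Bard, Llama 2, or PaLM 2 without modification.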
Keywords:
Large Language Model, Special Education, Cognitive Disorders.