ACCESSIBILITY, BIASES AND ETHICS IN CHATBOTS AND INTELLIGENT AGENTS FOR EDUCATION

E. Gutiérrez Y Restrepo1, M. Baldassarre2, J. G. Boticario1

1UNED (SPAIN)
2Fundación Sidar (SPAIN)
The use of chatbots is increasingly common in universities around the world, both for didactic and for administrative support tasks. At the same time, awareness of the importance of inclusive education, and therefore of the need for all digital content and web interfaces to comply with the Web Content Accessibility Guidelines (WCAG), is growing and is even mandated by law in most countries. However, this awareness has not yet reached the interfaces of the conversational agents, or chatbots, currently being developed.

Within the framework of the European project ACACIA, co-funded by the Erasmus+ programme of the European Union, Artemisa has been created: a chatbot dedicated to the fight against sexual harassment and to recruiting volunteers who promote the acceptance of diversity and tolerance. It was built using a framework that facilitates the generation and management of this type of artificial intelligence.

But to what extent is Artemisa accessible? Is it ethically acceptable to use an instrument for good that presents barriers to some users? What is the current accessibility status of chatbots that operate on social networks?

This article seeks to answer these and other questions surrounding the accessibility of chatbots, conversational agents and virtual assistants, and argues for the need for training in interculturality and web accessibility to combat the biases that such artificially intelligent entities are exhibiting today, biases that have even led some people to jail or to suicide.