EXPLAINABLE AI FOR DATA DRIVEN LEARNING ANALYTICS: A HOLISTIC APPROACH TO ENGAGE ADVISORS IN KNOWLEDGE DISCOVERY
1 Florida Institute of Technology (UNITED STATES)
2 University of North Carolina at Charlotte (UNITED STATES)
About this paper:
Conference name: 14th International Conference on Education and New Learning Technologies
Dates: 4-6 July, 2022
Location: Palma, Spain
Abstract:
Learning analytics (LA) is the measurement and analysis of data about students and educators, with the goal of improving academic performance and the efficiency of computing education research. A variety of analytical tools have been developed to extract this knowledge. These tools are designed to uncover students' patterns of behavior leading to success or failure, which can then inform intervention methods. Although student data and analytical models describe performance, they do little to explain the "why" and "how" behind the analysis. With these tools, domain experts are expected to predict student outcomes and plan their interventions more accurately. However, the analytical process essential to knowledge discovery requires substantial data science skills, so domain experts rarely get the chance to engage in the discovery process: the analytical model is a black box to them. In this paper, we use "domain experts" to mean teachers, educational leaders, and academic advisors, who are experts in student advising and educational data but are not data scientists.
Our goal is to help domain experts better understand students by having them construct analytical models themselves and explore the reasoning behind them. To involve domain experts in the analytics life-cycle, we are building an analytical tool that gives them flexibility: they can choose from a variety of features and examine the factors that contribute most to a student's performance and behavior. We also incorporate explainable machine learning to generate useful insights and intervention methods by opening the black-box analytical model. We believe this will help academic advisors better understand their students and the "why" behind particular predictive results. We combine explainable machine learning with visualization of the features, using state-of-the-art tools (e.g., LIME [1], SHAP [2]), so that users can better understand the features driving the analysis. Overall, our approach differs from previously developed tools as follows:
1. We involve domain experts in the analytics life-cycle by allowing them to change the features used for the analysis, giving them insight into both known and unknown factors behind student outcomes.
2. We guide domain experts through the data analysis with visual explanations of how each feature contributes to a particular decision (success or failure); a minimal sketch of this explanation step follows this list.
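To make the explanation step concrete, the following is a minimal Python sketch of how SHAP [2] can surface per-feature contributions for a student-success classifier. This is an illustrative sketch under stated assumptions, not our tool's implementation: the model choice, the feature names (gpa, credits_attempted, lms_logins_per_week, office_hours_visits), and the synthetic data are hypothetical stand-ins for real institutional advising data.

```python
# Minimal illustrative sketch: explaining a student-success classifier
# with SHAP [2]. All feature names and the synthetic data below are
# hypothetical stand-ins for real institutional advising data.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "gpa": rng.uniform(0.0, 4.0, n),
    "credits_attempted": rng.integers(6, 19, n),
    "lms_logins_per_week": rng.poisson(5, n),
    "office_hours_visits": rng.poisson(1, n),
})
# Toy outcome: success loosely tracks GPA and online engagement.
y = ((X["gpa"] / 4.0 + X["lms_logins_per_week"] / 10.0
      + rng.normal(0.0, 0.2, n)) > 0.8).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer assigns each feature a per-prediction contribution
# (a SHAP value), opening the black box behind a single decision.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Global view: which features drive predictions across all students.
shap.summary_plot(shap_values, X)

# Local view: why one particular student is predicted to succeed or fail.
student = 0
contributions = pd.Series(shap_values[student], index=X.columns)
print(contributions.sort_values())  # pushes toward failure (-) or success (+)
```

In our tool, this is the interaction loop exposed to the advisor: swap in a different feature set, retrain, and compare the global summary against the local explanation for an individual student.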
In summary, we use a human-centered design approach to engage faculty and advisors in the development of an interactive knowledge discovery tool for better understanding student success and students at risk. Preliminary findings are promising: users valued the opportunity to get directly involved with the analytics and the interpretation of results. Our approach allows faculty and advisors to interact with the data used in an analytic model, the results of the analysis, and the story of each individual student.
References:
[1] Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135-1144). ACM.
[2] Lundberg, S. M., & Lee, S. I. (2017). A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems (pp. 4765-4774).
Keywords:
Explainable AI, Knowledge Discovery, Domain Experts.