IN THE FINE PRINT: INVESTIGATING EDTECH PROVIDERS’ DATA PRIVACY COMMITMENT - TOOLS FOR SCHOOLS
1 London School of Economics and Political Science (UNITED KINGDOM)
2 University of Vienna (AUSTRIA)
3 TOSDR (UNITED STATES)
4 Cyril and Methodius University (MACEDONIA)
About this paper:
Appears in: ICERI2024 Proceedings
Publication year: 2024
Pages: 6185-6194
ISBN: 978-84-09-63010-3
ISSN: 2340-1095
doi: 10.21125/iceri.2024.1501
Conference name: 17th annual International Conference of Education, Research and Innovation
Dates: 11-13 November, 2024
Location: Seville, Spain
Abstract:
In the evolving educational technology (edtech) landscape, quality assessment processes are integral to education governance and to ensuring quality at all educational levels. Transparency in data processing provided to users and adherence to privacy laws by edtech providers have become critical concerns for building trust with education stakeholders. This study explores the data protection practices of selected edtech providers using an innovative mixed-method approach combining manual assessments with Machine Learning (ML) techniques.

Our research focuses on:
(1) an empirical analysis of the transparency and legality of the information shared with schools by providers, based on the articulation of their data privacy policies (DPPs); and
(2) a methodological exploration integrating human and ML-based analyses.

These components scrutinize how edtech providers communicate their data processing practices to schools and whether they comply with privacy regulations such as the General Data Protection Regulation (GDPR) and age-appropriate standards, as outlined in their DPPs. These practices are crucial for building trust between schools and edtech providers and for updating relevant government policies that address the challenges of digitizing education, particularly in light of recent unethical and illegal data practices.

Our motivation stems from the statutory requirements schools must meet to ensure they integrate quality edtech products into their operations. Conducting Data Privacy Impact Assessments and evaluating providers' DPPs as part of procurement, while protecting students' basic rights, is costly and labor-intensive and requires expertise beyond pedagogy. Hence, our research seeks to develop a non-expert template that can streamline the initial assessment of DPPs and evaluate a provider's transparency towards users, and to test innovative technologies that can scale this demanding process effectively and efficiently.

Initial findings were derived from the ML-supported assessment of 10 popular edtech providers' DPPs. These findings highlight varying degrees of transparency and compliance with data protection requirements concerning data processing information for end-users. They also elucidate whether current ML techniques such as OpenAI's ChatGPT ensure reliable automated assessments or produce untrustworthy results.

Our methodology evaluates the clarity and comprehensibility of DPPs through manual scrutiny and leverages ML techniques for the analysis of large datasets, identifying the errors currently associated with ML applications in this context. This dual approach enhances the robustness and scalability of our evaluation framework, offering insights into how future assessments of edtech could be standardized and automated.
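The paper does not publish its prompts or assessment pipeline, but an ML-supported screening step of the kind described might, purely for illustration, look like the minimal Python sketch below. The model name, transparency criteria, prompt wording and input file are assumptions for the sketch, not details taken from the study.

```python
# Minimal sketch (not the authors' pipeline): screening a data privacy policy
# (DPP) excerpt against a few GDPR-style transparency questions with an LLM.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical criteria a non-expert school reviewer might otherwise check manually.
CRITERIA = [
    "Does the policy name the legal basis for processing (GDPR Art. 6)?",
    "Does it state whether data is shared with third parties, and with whom?",
    "Does it give retention periods for students' data?",
    "Does it describe how users can exercise access and erasure rights?",
]

def screen_policy(policy_text: str) -> str:
    """Ask the model to answer each criterion with Yes/No/Unclear and a quoted passage."""
    questions = "\n".join(f"- {q}" for q in CRITERIA)
    prompt = (
        "You are assisting a non-expert school reviewer. For each question, "
        "answer Yes, No, or Unclear, and quote the supporting passage if any.\n\n"
        f"Questions:\n{questions}\n\nPolicy text:\n{policy_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",          # placeholder model; the paper does not specify one
        messages=[{"role": "user", "content": prompt}],
        temperature=0,                # reduce run-to-run variation; output still needs human checking
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample = open("provider_dpp.txt", encoding="utf-8").read()  # hypothetical input file
    print(screen_policy(sample))
```

As the study emphasizes, outputs from such a step would still require manual verification, since ML-generated assessments of this kind can be unreliable.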

The study contributes to discussions at the intersection of education, technology, ethics, policy and governance, offering actionable insights for education stakeholders in navigating the complexities of data privacy regulation and promoting responsible edtech innovation.

Our findings and methodology contribute to global discourse in education and research by addressing the datafication of education and the application of AI in legal and ethical assessment practices. We advocate for ethical edtech that not only enhances educational outcomes but also prioritizes transparency, legality, and ethical integrity, alongside an assessment of ML tools that could support and facilitate schools' procurement and assessment processes.
Keywords:
Edtech Governance, Data Privacy, Human Rights, Machine Learning.