TERMS & CONDITIONS APPLY: PRIVACY, ETHICS, AND ACCOUNTABILITY IN AI-DRIVEN HIGHER EDUCATION
Marist University (UNITED STATES)
About this paper:
Appears in: INTED2026 Proceedings
Publication year: 2026
Article: 0741
ISBN: 978-84-09-82385-7
ISSN: 2340-1079
doi: 10.21125/inted.2026.0741
Conference name: 20th International Technology, Education and Development Conference
Dates: 2-4 March, 2026
Location: Valencia, Spain
Abstract:
As artificial intelligence (AI) accelerates, higher education has emerged as a consequential domain in which its impacts unfold. Universities are sites of innovation, regulation, experimentation, and risk, making them critical testbeds for understanding how AI reshapes teaching, learning, privacy, and governance. While institutions have relied on legal frameworks such as the Family Educational Rights and Privacy Act (FERPA) to safeguard student information, these laws were drafted in an era that pre-dates generative AI (GenAI) and provide incomplete guidance for managing the scale, opacity, and dynamic nature of data practices. The resulting gap between rapidly evolving technological conditions and comparatively static regulatory structures creates challenges for institutional accountability, ethical decision-making, and student rights.

This paper examines that gap by investigating how higher education institutions in the United States are currently operationalizing AI policies with respect to privacy, ethics, and governance. Using a dataset of fifty universities, a structured coding methodology was developed to categorize institutional AI policies across four domains: governance model, privacy strength, student rights focus, and policy orientation. These categories highlight not only what institutions are doing but also how they conceptualize responsibility, distribute authority, and articulate their ethical stance toward AI adoption. Particular attention is given to four American institutions with notably strong privacy safeguards: Harvard University, the Massachusetts Institute of Technology, Marist University, and the University of Michigan.

Findings reveal striking inconsistencies. Most universities rely on centralized governance models, yet few specify detailed or enforceable privacy protocols. Only 8% demonstrate high-privacy policies, while nearly a quarter offer no identifiable privacy guidance at all. The majority (68%) explicitly reference student rights, but these rights are often framed through compliance rather than agency. Furthermore, despite widespread concern about academic integrity, no institution adopted a fully restrictive AI stance. Instead, 94% support conditional or regulated use, signaling a sector-wide acknowledgment that AI is both unavoidable and pedagogically valuable. These results underscore an emerging tension: institutions expect transparency and ethical use from students while offering uneven transparency and accountability in their own AI-enabled practices.

To address this tension, the current research draws on normative ethical frameworks—including deontology, utilitarianism, and principles from the Belmont Report—to argue that institutions bear a reciprocal duty of care. If students are to be treated as rights-bearing individuals rather than potential violators, institutions must provide clear privacy protections, transparent data practices, and AI literacy support rather than relying primarily on policing mechanisms. This research offers empirical insight into current national trends in AI governance, a comparative analysis of privacy and policy strengths, an ethical framework for evaluating institutional responsibilities, and practical recommendations for building accountable, transparent, and student-centered AI policies. Ultimately, this research provides a foundation for developing regulatory models that promote innovation while safeguarding student autonomy, privacy, and trust in an AI-driven academic landscape.
Keywords:
AI governance in higher education, generative AI regulation, privacy and data ethics, institutional policy, ethical frameworks for AI adoption.