BRIDGING PRACTICE, PEDAGOGY, AND POLICY: EMBEDDING RESPONSIBLE AI IN MEDICAL EDUCATION
KTPH (SINGAPORE)
About this paper:
Appears in: INTED2026 Proceedings
Publication year: 2026
Article: 1766 (abstract only)
ISBN: 978-84-09-82385-7
ISSN: 2340-1079
doi: 10.21125/inted.2026.1766
Conference name: 20th International Technology, Education and Development Conference
Dates: 2-4 March 2026
Location: Valencia, Spain
Abstract:
Background:
Generative artificial intelligence (AI) is rapidly transforming higher education and healthcare. However, the integration of AI within medical education has often been fragmented—focused on tools and technology rather than ethical literacy, professional accountability, and governance structures. In 2025, RCSI & UCD Malaysia Campus (RUMC) conducted a longitudinal faculty development programme comprising six AI workshops designed to build institutional readiness and ethical competency in AI adoption.

Objective:
This study aimed to evaluate and synthesise outcomes from the 2025 AI Workshop Series to develop an integrated framework linking three domains of responsible AI adoption in medical education: practice (using Shared Evidence Hubs), pedagogy (embedding ethical AI competencies), and policy (institutional governance for responsible innovation).

Methods:
A mixed-methods design was used, combining quantitative feedback from 101 participants with thematic analysis of qualitative responses. Faculty, administrative staff, and clinicians participated in scenario-based learning, tool-specific workshops, and reflective synthesis clinics. Key modules included AI-HP (AI for Healthcare Professionals), AI-LD (AI for Learning Design), and AI-LEAP (AI for Administrative Productivity). Thematic synthesis identified enablers, barriers, and emergent frameworks across practice, pedagogy, and policy levels.

Results:
Quantitative data indicated strong acceptance, with mean ratings above 4.3/5 for quality, usefulness, and facilitator effectiveness. Over 90% of participants found the content very or extremely relevant, and 93% reported high intention to apply AI in their work. Qualitative findings revealed three major outcomes:
(1) a paradigm shift from perceiving AI as a threat to viewing it as a collaborative partner;
(2) improved efficiency and accuracy in academic writing, curriculum design, and research synthesis through AI tools such as NotebookLM; and
(3) increased ethical awareness and calls for formal governance frameworks. Collectively, these findings informed RUMC’s development of a three-tiered Responsible AI Integration Model encompassing Shared Evidence Hubs, the AI Ethics Competency Framework, and the Responsible AI Governance Framework.

Discussion:
The RUMC model demonstrates how structured professional development can bridge fragmented AI experimentation into an integrated institutional strategy. By aligning AI use with ethics, pedagogy, and governance, the approach supports sustainable adoption across clinical, academic, and administrative domains. The findings suggest that cultural readiness and ethical training are as critical as technological capacity in achieving responsible digital transformation.

Conclusion:
Embedding responsible AI into medical education requires a systemic approach that connects daily academic practice, curriculum design, and policy oversight. RUMC’s model—rooted in practical workshops and participatory design—offers a scalable framework for integrating AI across higher education institutions globally. The outcomes align with INTED2026’s focus on educational innovation, digital ethics, and institutional capacity building.
Keywords:
Responsible AI, medical education, ethics, governance, faculty development, generative AI, NotebookLM.