BUILDING AI CONFIDENCE IN PRESERVICE ELEMENTARY TEACHERS
Georgia Southern University (UNITED STATES)
About this paper:
Appears in: INTED2026 Proceedings
Publication year: 2026
Article: 0206
ISBN: 978-84-09-82385-7
ISSN: 2340-1079
DOI: 10.21125/inted.2026.0206
Conference name: 20th International Technology, Education and Development Conference
Dates: 2-4 March, 2026
Location: Valencia, Spain
Abstract:
Artificial intelligence (AI) is moving from novelty to necessity in teacher education, yet many preservice teachers report uneven confidence, unclear classroom routines, and open questions about ethics and assessment. This study examines preservice elementary teachers’ readiness to integrate AI in instructionally sound, standards-aligned, and responsible ways. Guided by the Technology Acceptance Model (usefulness, ease of use, attitudes, intention) and informed by TPACK, we investigate how candidates perceive AI’s value and risks for lesson planning, differentiation, feedback, and assessment; where in the flow of instruction they envision AI adding value; and what supports they consider necessary for responsible use. We also compare candidates with and without recent field experience to see how authentic school contexts shape confidence, tool choice, and routines. Using a convergent mixed-methods design, we administered a structured survey and collected short written reflections. The survey measured attitudes, perceived usefulness and ease of use, accuracy/ethics concerns, and intention to use AI. Open-ended responses described concrete applications, common pitfalls, and needed supports. Quantitative data were summarized descriptively and by experience group; qualitative data were coded thematically to identify recurring patterns in envisioned practice, decision criteria, and barriers. Findings show generally positive views of AI’s instructional utility paired with persistent concerns about content accuracy, student overreliance, privacy, and alignment with curriculum and assessment policies. Candidates with school-based experience reported more specific, workflow-level uses and greater confidence evaluating outputs; those without recent placements emphasized risks and requested concrete exemplars. Across groups, technology self-efficacy and clearly framed use cases were stronger drivers of intention than generic enthusiasm.
Participants also identified practical resources that would help: vetted examples tied to standards, brief checklists for reviewing AI outputs before classroom use, and step-by-step tasks adaptable to subject, grade, and learner profile. Implications for teacher preparation include embedding hands-on, studio-style activities (prompt design, bias/accuracy checks, lesson redesign), explicitly addressing ethical and assessment considerations with concrete safeguards (authorship transparency, privacy protection, bias mitigation, accuracy verification, assessment integrity), and partnering with placement schools to align expectations and constraints (policies, devices, filters). The contribution of this study is a practical set of preparation moves ready for adoption: a one-page AI task-vetting template (learning goal → standard → AI move → safeguards → evidence of learning), a compact library of adaptable exemplars, and a professional development (PD) sequence differentiated by experience level that links coursework to field practice. By centering preservice teachers’ voices and comparing those with and without field experience, the study offers actionable guidance for moving beyond one-off “AI exposure” toward confident, reflective, and ethical classroom use. Limitations include reliance on self-report and a single-program sample, suggesting future work with observations, performance measures, and multi-site designs.
Keywords:
Preservice teachers, AI in education, elementary education, technology acceptance, teacher preparation.