DIGITAL LIBRARY
AI AS A LEARNING PARTNER: AUTHENTIC ASSESSMENT DESIGN FOR REAL WORLD DECISION MAKING IN A TIME BOUND BUSINESS CRISIS SIMULATION
University of Birmingham (UNITED KINGDOM)
About this paper:
Appears in: INTED2026 Proceedings
Publication year: 2026
Article: 2307 (abstract only)
ISBN: 978-84-09-82385-7
ISSN: 2340-1079
doi: 10.21125/inted.2026.2307
Conference name: 20th International Technology, Education and Development Conference
Dates: 2-4 March, 2026
Location: Valencia, Spain
Abstract:
Generative AI is reshaping higher education and transforming the skills expected of graduates. Employers now place increasing value on digital fluency, AI literacy and analytical decision making, with the World Economic Forum’s Future of Jobs Report 2023 identifying analytical thinking, AI and big data competencies, and creativity as among the most in-demand skills across global industries. The OECD (2023) similarly highlights the need for graduates who can demonstrate ethical reasoning, evaluate evidence critically and operate confidently in digitally mediated environments. Yet recent studies show that students frequently accept AI-generated content without questioning its accuracy or credibility, revealing a significant gap in evaluative judgement and fact-checking habits (Kasneci et al., 2023; Stöhr et al., 2024).

This contribution introduces an authentic assessment designed for a large final-year business module that directly addresses this challenge by positioning AI as a learning partner within a time-bound business crisis simulation. Students receive a real-world crisis scenario and have 48 hours to produce a consultancy-style response. They begin by generating an initial draft using a generative AI tool, then systematically evaluate its limitations (for example, inaccurate evidence, shallow analysis and misalignment with business principles), issues well documented in recent AI education literature (Garzón et al., 2025; Salinas-Navarro et al., 2024). Under time pressure, students refine the response using credible academic sources, industry insights and their own strategic reasoning, and complete a reflective section within the consultancy-style report that evaluates their professional decision-making processes and the capabilities and limitations of AI.

The design is grounded in evidence that authentic assessment enhances transferability, professional confidence and real-world competence (Villarroel et al., 2018), while crisis simulation supports the development of applied reasoning under pressure (Coombs & Holladay, 2022). By integrating AI transparently rather than prohibiting it, the assessment supports the development of critical AI literacy, now recognised as a core graduate capability across business and management education (Dai & Lin, 2024), and strengthens students’ ability to question, critique and improve AI output rather than rely on it uncritically.

Findings from the implementation will be shared, including student perceptions of this assessment compared with a traditional essay. Early insights suggest that students become more engaged, more aware of AI’s fallibility and more capable of identifying credible evidence when asked to critique, rather than simply use, AI output. This contribution will also outline practical ideas for implementation, common pitfalls, and key lessons learned for educators seeking to embed AI meaningfully and responsibly into assessment design.

For the INTED audience, this contribution offers a replicable model for integrating AI into assessment without compromising academic rigour, alongside actionable recommendations for preparing graduates to make informed, ethical and creative decisions in AI-enabled professional environments.
Keywords:
Generative AI, authentic assessment, AI literacy, business education.