DOES A MINIMAL LLM ASSISTANT HELP STUDENTS START? A PILOT EVALUATION OF AN LLM-BASED IDEATION TOOL
University of Koblenz, Institute for Web Science and Technologies (WeST) (GERMANY)
About this paper:
Conference name: 20th International Technology, Education and Development Conference
Dates: 2-4 March, 2026
Location: Valencia, Spain
Abstract:
In project-based data-science courses, the most fragile point is often the very beginning: students must turn a broad interest into a focused, feasible project idea yet frequently stall at this “first mile” of scoping. Existing work on AI in education mostly addresses tutoring, assessment, or writing support and rarely examines how large language models (LLMs) might help at this earliest stage, especially in data-intensive programmes. This paper reports a pilot evaluation of a minimal LLM-assisted ideation tool designed to lower that initial barrier via a calm, text-first interface, a small set of concise “idea cards” (idea, rationale, first steps), and quick, reversible edits so that students can steer suggestions without losing momentum.
We address two research questions:
(RQ1) How do students perceive the usefulness, clarity, and creativity support of this LLM-assisted ideation tool compared with their familiar brainstorming routines?
(RQ2) Which experience factors predict satisfaction and willingness to reuse the tool in future project work?
The study recruited data-science students at a German university; 23 used the web application and 16 completed a post-use survey (≈70% completion). The instrument comprised seven five-point items on navigation, input clarity, satisfaction, engagement, relevance, creativity, and motivation, plus comparative items against students’ usual methods and a single reuse-intention item.
Internal consistency of the experience scale was high (Cronbach’s α = 0.89). Ratings lay clearly above the neutral midpoint: students found the interface easy to navigate and inputs easy to understand, and they reported that the tool supported creative thinking, matched their interests, and increased motivation to pursue a data-science project. Comparative ratings showed advantages over familiar brainstorming, strongest for ease of use and perceived creativity, with additional gains for engagement, project relevance, and collaboration potential. A majority indicated that they would use the tool again, with the remainder answering “maybe”.
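For readers less familiar with the reliability statistic reported above, Cronbach's α for a k-item scale is k/(k-1) · (1 - Σσ²ᵢ/σ²ₜ), where σ²ᵢ are the item variances and σ²ₜ is the variance of the summed scores. The sketch below is illustrative only; the function name and the matrix layout are our assumptions, and the data shown are not the study's responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of ratings.

    Illustrative helper, not the paper's analysis code. Uses sample
    variance (ddof=1) for both item and total-score variances.
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Made-up five-point ratings from five respondents on three items:
ratings = np.array([[4, 4, 5],
                    [3, 3, 3],
                    [5, 4, 5],
                    [2, 3, 2],
                    [4, 5, 4]], dtype=float)
alpha = cronbach_alpha(ratings)
```

Values near 1 indicate that the items move together, as the α = 0.89 reported for the seven-item experience scale suggests.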
Correlation analyses highlight clarity and perceived relevance as central levers: satisfaction increased when students felt they understood how to steer the tool and when generated ideas aligned with their goals; creativity judgements moved in step with these factors. Reuse intention tracked overall satisfaction together with the comparative engagement and creativity ratings. Experience composites were comparable across self-reported beginner, intermediate, and advanced students, suggesting that the design can serve mixed-level cohorts.
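The kind of item-level correlation analysis described above can be sketched as follows. The ratings here are invented for illustration (not the study's data), and the abstract does not state which coefficient was used; Pearson is shown, though Spearman would be a common alternative for five-point items.

```python
import numpy as np

# Hypothetical five-point ratings from six respondents (illustrative only).
clarity      = np.array([2, 3, 3, 4, 5, 5], dtype=float)
satisfaction = np.array([2, 2, 3, 4, 4, 5], dtype=float)

# Pairwise Pearson correlation between the two items; a value near 1
# would mirror the pattern reported in the paper, where satisfaction
# rose with perceived clarity of how to steer the tool.
r = np.corrcoef(clarity, satisfaction)[0, 1]
```

With a sample of 16 respondents, such coefficients should of course be read as descriptive rather than confirmatory.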
We interpret these perception-level results as initial evidence that a restrained, legible LLM interface can reduce early scoping friction in data-science education. Rather than replacing human judgment or instructor guidance, the tool functions as a lightweight "first-mile" companion that helps students cross the threshold from vague interest to a small set of discussable project directions and motivates further work.
Keywords:
Large language models (LLMs), Project-based learning, Data science education, Ideation and creativity support, Human–AI interaction, Student motivation, Engagement.