WHEN AUTOMATIC FEEDBACK IS NOT GOOD ENOUGH
University of Education Weingarten (GERMANY)
About this paper:
Appears in: ICERI2019 Proceedings
Publication year: 2019
Pages: 11314-11320
ISBN: 978-84-09-14755-7
ISSN: 2340-1095
doi: 10.21125/iceri.2019.2809
Conference name: 12th annual International Conference of Education, Research and Innovation
Dates: 11-13 November, 2019
Location: Seville, Spain
Abstract:
Personalized learning is considered a key factor in innovative learning environments [1]. It requires that individual learning processes are documented or observed, analyzed, and assessed, that feedback is provided, and that adequate measures are taken. To support learners with formative assessment [2], feedback tailored to their specific needs has to be provided immediately. At university level, computer-aided assessment and automatic feedback are promising approaches to implementing such formative assessment at the required large scale. In this paper, we investigate how computer-aided formative assessment can be supplemented by learning analytics to identify the type of feedback required for personalized learning support. We claim that automatic feedback needs to be supplemented by additional measures, such as expert feedback or individual recommendations of suitable learning resources, to promote successful learning processes.

Studies on how to design effective feedback agree that feedback should be elaborated, specific and clear, as simple as possible, and objective [3], and that it should be given on three levels: a) on the task; b) on the processing of the task; c) on self-regulation [4]. A prerequisite is that learners’ solution processes are documented comprehensively and at a detailed level, to allow for feedback not only on the final outcome of a task, but also on individual solution steps and the overall strategy. In digital learning environments, learning analytics tools and techniques can be used to automatically record learning processes at this level of detail [5] and to store them in a learning record store via interfaces such as xAPI for further analysis [6].
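As a hedged illustration of what such a recorded solution step might look like when serialized as an xAPI statement for a learning record store, the following Python sketch builds a minimal statement. The account home page, activity IDs, and the extension key are placeholders for illustration, not the identifiers used in the actual learning applications; only the verb URI is taken from the xAPI specification's standard vocabulary.

```python
import json
from datetime import datetime, timezone

def make_statement(student_id, exercise_id, step, correct):
    """Build a minimal xAPI statement for one solution step.

    All URLs except the verb URI are illustrative placeholders."""
    return {
        "actor": {
            "objectType": "Agent",
            "account": {"homePage": "https://lms.example.org", "name": student_id},
        },
        "verb": {
            # Standard xAPI verb for submitting an answer
            "id": "http://adlnet.gov/expapi/verbs/answered",
            "display": {"en-US": "answered"},
        },
        "object": {
            "objectType": "Activity",
            "id": f"https://lms.example.org/exercises/{exercise_id}",
        },
        "result": {
            "success": correct,
            # Hypothetical extension recording which solution step this was
            "extensions": {"https://lms.example.org/xapi/step": step},
        },
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

statement = make_statement("s123", "fractions-07", step=2, correct=False)
print(json.dumps(statement, indent=2))
```

A learning record store would accept such a JSON payload via its `POST /statements` endpoint; recording every intermediate step this way is what enables analysis beyond the final outcome.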

We developed several interactive learning applications in Math, designed as tutoring systems with scaffolding, (semi-)automatic assessment, and feedback [7]. Integrated learning analytics allows us to record all intermediate solutions, automatic assessment results, and semantically relevant interactions between students and the learning application. Data collected in introductory Math courses at university allowed us to analyze aspects such as the interdependence between the number of feedback requests in various situations and the final outcome (success vs. no success), and to identify critical points where the probability of solving an exercise successfully drops below a given threshold. This yields indicators for when automatic feedback is not good enough and different levels of feedback are required.
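The critical-point analysis described above can be sketched as follows. This is an illustrative Python sketch, not the authors' actual analysis code: given logged attempt records, it estimates, per solution step, the share of students who went on to solve the exercise, and flags steps where that share falls below a threshold.

```python
from collections import defaultdict

def critical_points(attempts, threshold=0.5):
    """Flag solution steps where the success rate drops below `threshold`.

    attempts: iterable of (step_index, solved_exercise) records, one per
    student reaching that step; solved_exercise is True if the student
    eventually completed the exercise successfully."""
    reached = defaultdict(int)  # students who reached each step
    solved = defaultdict(int)   # of those, students who solved the exercise
    for step, success in attempts:
        reached[step] += 1
        if success:
            solved[step] += 1
    return sorted(
        step for step in reached
        if solved[step] / reached[step] < threshold
    )

# Toy log: at step 1, two of three students succeed; at step 2, one of three.
log = [(1, True), (1, True), (1, False), (2, True), (2, False), (2, False)]
print(critical_points(log, threshold=0.5))  # -> [2]
```

Steps flagged this way are candidates for escalating from automatic feedback to stronger interventions such as expert feedback or recommended learning resources.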

References:
[1] H. Dumont & D. Istance, “Analysing and designing learning environments for the 21st century,” in The Nature of Learning. Using Research to Inspire Practice (H. Dumont et al., eds.), pp. 19-34, OECD Pub., 2010.
[2] P. Black & D. Wiliam, “Developing the theory of formative assessment”, Educational Assessment, Evaluation and Accountability, vol. 21, no. 1, pp. 5-31, 2009.
[3] V. J. Shute, “Focus on Formative Feedback”, Research Report, ETS, Princeton, NJ, 2007.
[4] J. Hattie & H. Timperley, “The Power of Feedback”, Review of Educ. Res., vol. 77, no. 1, pp. 81-112, 2007.
[5] S. Buckingham Shum, “Learning Analytics”, Policy Brief, UNESCO Inst. for Information Technologies in Education, 2012.
[6] ADL, “xAPI Specification. Version 1.0.3”, 2016. Retrieved from https://github.com/adlnet/xAPI-Spec
[7] P. Libbrecht, W. Müller & S. Rebholz, “Smart Learner Support Through Semi-automatic Feedback,” in Smart Learning Environments (M. Chang & Y. Li, eds.), pp. 129-157, Springer, 2015.
Keywords:
Learning Analytics, Formative Assessment, Automatic Feedback.