National Research Council - Institute for Educational Technology (ITALY)
About this paper:
Appears in: ICERI2019 Proceedings
Publication year: 2019
Pages: 2076-2085
ISBN: 978-84-09-14755-7
ISSN: 2340-1095
DOI: 10.21125/iceri.2019.0577
Conference name: 12th annual International Conference of Education, Research and Innovation
Dates: 11-13 November, 2019
Location: Seville, Spain
Providing meaningful feedback in MOOCs is a hard job, especially in fields where there is no right or wrong answer: participants need feedback that triggers reflection on the course content so they can work out their personal position on complex matters, yet manual personalised feedback from tutors is not feasible. However, according to Self-Regulated Learning (SRL) theory, people can also learn by comparing their work or beliefs to those of their peers, and this can help achieve a reasonable cost-benefit ratio.

In some cases, this is relatively easy to accomplish: peer review of participants' solutions to problems can be a powerful way to provide each course participant with in-depth feedback, and this approach produces learning gains both for the person who gives the feedback and for the one who receives it. However, this kind of strategy does not necessarily work in all cases. There are many situations where feedback should serve the purpose of fostering reflection and helping learners to position themselves critically with respect to content. In these areas, discussion among students can be beneficial. However, when large cohorts of course participants are involved, discussion can become chaotic, and following the different threads can be discouraging, dispersive and time-consuming for both participants and tutors. In addition, in some MOOCs, people do not participate simultaneously. Rather, they enrol in the course at different times and work through it at their own pace and in their own way. This is frequently the case when participants are adult, self-regulated professionals.

In this paper, we present the case of in-service teachers participating in a MOOC on Learning Design (LD), with a special focus on how to develop students' SRL. In order to foster in-service teachers' reflection on these subjects, the course entailed a hands-on design activity with an LD tool and the completion of three questionnaires at different stages of the course.

The three questionnaires concerned participants' approach to LD, their personal SRL strategies and their approach towards fostering SRL in students. The proposed questions did not envisage "right vs. wrong" answers; rather, they were intended to foster teachers' reflection on the course contents.

Participants' use of the LD system was automatically tracked, in order to produce evidence of each user's use of the system's functions.

To foster participants' self-reflection on course content, the system provided them with automatic feedback positioning their own replies to the questionnaires, and their usage behaviours in the LD system, with respect to descriptive statistics of their peers' replies (and usage). Participants (N=66) were then asked to rate the usefulness of this feedback on a scale from 1 to 5 (1=useless, 5=very useful). Two of them were later interviewed to better understand the value of this kind of feedback. Participants' rating of this feedback appears to be quite positive (mean=4, SD<=1 for the comparison of beliefs; mean=4.2, SD=1 for the comparison of behaviours). These results, together with those of the interviews, reveal that the comparison with colleagues' opinions and practice stimulated critical thinking on their LD and SRL strategies, as desired. The results are analysed and discussed in detail in the paper.
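The paper does not describe the feedback algorithm in detail; a minimal sketch of one way such comparative feedback could be generated is shown below, assuming each questionnaire item yields a numeric score per participant. The function name `positioning_feedback` and the message wording are illustrative assumptions, not the system's actual implementation: the participant's score is positioned against the peer mean, standard deviation, and percentile.

```python
from statistics import mean, stdev

def positioning_feedback(own_score: float, peer_scores: list[float]) -> str:
    """Hypothetical sketch: position one participant's questionnaire score
    against descriptive statistics of the peer cohort."""
    m = mean(peer_scores)            # peer average for this item
    sd = stdev(peer_scores)          # spread of peer replies
    below = sum(1 for s in peer_scores if s < own_score)
    percentile = 100 * below / len(peer_scores)
    return (f"Your score: {own_score:.1f}. Peer mean: {m:.2f} (SD {sd:.2f}). "
            f"You scored higher than {percentile:.0f}% of your peers.")
```

A message of this kind carries no "right vs. wrong" judgement; it simply situates the participant's reply within the cohort, which is the kind of comparison the study's participants rated as useful.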
Self-Regulated Learning, Learning Analytics, automatic feedback.