EVALUATION OF CIVIC SCIENCE EDUCATION: INFLUENCE OF THE TIME POINT OF MEASUREMENT
ZBW Leibniz Information Centre for Economics (GERMANY)
About this paper:
Appears in: EDULEARN24 Proceedings
Publication year: 2024
Pages: 1771-1779
ISBN: 978-84-09-62938-1
ISSN: 2340-1117
doi: 10.21125/edulearn.2024.0533
Conference name: 16th International Conference on Education and New Learning Technologies
Dates: 1-3 July, 2024
Location: Palma, Spain
Abstract:
Civic science education is a good way to engage young people and offer them new experiences. The evaluation of such educational initiatives is important to ensure their effectiveness and sustainability, but it often faces the challenge of practical feasibility. An easy form of evaluation is an online survey administered immediately at the end of an educational initiative, when the participating children are still present and motivated. On the one hand, this might be the best time point for capturing a snapshot of the children’s enthusiasm, and it has the practical advantage that the children’s willingness to fill in a survey about their experiences is probably relatively high. On the other hand, this time point of measurement might be inappropriate because the children’s short-term excitement (especially if they are still in close interaction with their peers and the scientists) might bias the data and thus affect the validity of the results regarding the long-term benefits of the educational initiative.

Against this background, the present study investigated the influence of the time point of evaluation measurements, using a school competition as a practical example of civic science education. During this school competition, the children worked in small groups on their own solution ideas for economic, social, and environmental challenges of the future. Scientific experts supported the children’s teamwork. Half of the children received an email invitation to fill in an online evaluation survey immediately at the end of the school competition (immediate group), and the other half received the same invitation with a delay of four weeks (delay group). The evaluation survey included a global judgement as well as detailed ratings of the specific elements of the school competition (e.g., teamwork). In addition, individual values of participation (e.g., broadening one’s horizon, benefit for one’s later professional career) were measured.

The 207 survey responses showed that the response rate of the immediate group was much higher than that of the delay group. Both groups gave an equally high global rating of the school competition. Similarly, most of the specific elements and individual values were rated at an equal level. However, some core elements and individual values received significantly lower ratings from the delay group: teamwork, interaction with scientists, working on one’s own solution ideas, benefits for society, and benefits for the knowledge transfer between school and science were rated less positively after the delay. Overall, the results indicate that the children’s excitement at the end of the school competition, together with social desirability, partly caused a positive bias in the immediate group.
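
As a methodological aside, the kind of between-group comparison reported here could be run with standard non-parametric tests. The following is a minimal sketch in Python, assuming Likert-type item ratings and a 2x2 response-rate table; the paper does not specify which tests were used, and all numbers in the snippet are hypothetical placeholders rather than study data.

# Hypothetical analysis sketch (not the authors' code): compare item ratings and
# response rates between the immediate group and the delay group.
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

# Placeholder Likert ratings (1-5) for one survey item, e.g. "teamwork".
immediate_ratings = np.array([5, 5, 4, 5, 4, 5, 5, 4, 5, 5])
delayed_ratings = np.array([4, 4, 5, 3, 4, 4, 5, 3, 4, 4])

# Ordinal ratings -> non-parametric two-sided Mann-Whitney U test.
u_stat, p_item = mannwhitneyu(immediate_ratings, delayed_ratings, alternative="two-sided")
print(f"Item rating: U = {u_stat:.1f}, p = {p_item:.3f}")

# Response rates: responded vs. did not respond per group (hypothetical counts).
contingency = [[130, 70],   # immediate group: responded, did not respond
               [77, 123]]   # delay group: responded, did not respond
chi2, p_rate, dof, expected = chi2_contingency(contingency)
print(f"Response rate: chi2({dof}) = {chi2:.2f}, p = {p_rate:.3f}")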

The findings illustrate the trade-off between practicability and validity. The equally high global ratings in both conditions and the lower response rate in the delay group indicate that measurements taken immediately at the end of an initiative offer better practical feasibility and provide a solid holistic judgement. However, delayed measurements deliver a more valid and less emotionally coloured picture of the specific elements and individual values of an educational initiative. Thus, a delayed evaluation seems more appropriate as a basis for future improvements and as a measure of the long-term benefits of civic science education.
Keywords:
Evaluation, methodology, time point of measurement, civic science education, informal learning, science popularization.