GROUP-PEER-ASSESSMENT FROM A QUANTITATIVE POINT OF VIEW
1 Edumetrics R&D (GERMANY)
2 Department of Computing, University of Northampton (UNITED KINGDOM)
About this paper:
Appears in: EDULEARN23 Proceedings
Publication year: 2023
Pages: 6979-6989
ISBN: 978-84-09-52151-7
ISSN: 2340-1117
doi: 10.21125/edulearn.2023.1831
Conference name: 15th International Conference on Education and New Learning Technologies
Dates: 3-5 July, 2023
Location: Palma, Spain
Abstract:
Group-Peer-Assessment (GPA) is complicated and demanding for both instructors and students. It is so in three respects: (a) procedural, (b) qualitative, and (c) quantitative. In the past 2-3 decades, much has been written about (a) and (b), but almost nothing about (c). This may be due to a mistaken belief among practitioners that there is no need for a thorough analysis of what it means, and takes, to assess groups and individuals in a fair and consistent way. The few publications that are aimed at quantitative aspects of GPA do not try to give a full explanation of why GPA is as complicated and demanding as it is. In this paper we set out to fill this gap. We present a solid framework for all quantitative aspects of GPA, thereby strengthening its qualitative aspects and enabling streamlined GPA procedures. We show how the tricky problems of GPA can be solved by adopting the concepts and constructs of bounded scale types. Without such a conceptual and constructive quantitative framework, the practice of GPA would remain ad hoc and prone to criticisms of being unreliable, invalid, and biased.

From a measurement perspective, what is it that makes GPA so tricky? It is the fact that a GPA solution requires the precise tuning and alignment of three measurement tasks:
(Task 1) assessment of a group’s outcome in terms of the mean percentage score on a list of product quality criteria;
(Task 2) assessment by the group members of each other’s contribution to the group’s products in terms of group dynamics factors; this involves calculating a mean percentage rating twice;
(Task 3) combining the group score (1) with the student ratings (2) to get individual student scores using a so-called scoring rule.

Task 1 largely resembles traditional criteria-based assessment. Note, however, that the final result must be reported as a percentage. Thus, if n-point scales or alphanumeric grading scales are used, the scores or grades must first be converted to percentages, and the mean must be calculated in accordance with the calculus of bounded scales.
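
A minimal sketch of this conversion in Python, assuming a simple linear mapping from a 1..n point scale onto the percentage scale; the function names and the linear mapping are illustrative assumptions, not the authors' prescribed calculus of bounded scales:

def npoint_to_percent(score, n):
    """Map a score on a 1..n point scale linearly onto 0..100 %.
    Assumed linear mapping for illustration; the paper's calculus of
    bounded scales may prescribe a different conversion."""
    if not 1 <= score <= n:
        raise ValueError("score must lie between 1 and n")
    return 100.0 * (score - 1) / (n - 1)

def group_score(criterion_scores, n):
    """Mean percentage score over a list of product-quality criteria."""
    percents = [npoint_to_percent(s, n) for s in criterion_scores]
    return sum(percents) / len(percents)

# Example: four quality criteria rated on a 5-point scale.
print(group_score([4, 5, 3, 4], n=5))  # -> 75.0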

Task 2 is arguably the most intricate and the least familiar to instructors, as it involves asking (indeed requiring) students to assess their peers on a set of criteria concerning the work processes in their team (collaboration, contribution, …). Since the instructor is not deeply involved in the group’s work processes, he or she cannot assess the students on this didactic level. To reduce students’ workload, the process criteria are usually associated with n-point scales. The students’ ratings, however, must be converted to the signed percentage scale (±%) for inclusion in the scoring rule.
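
As an illustration, one way to map an n-point process rating onto the signed percentage scale is to place the scale midpoint at 0 % and the endpoints at ±100 %; this linear mapping and the function name are assumptions made for the sketch, not necessarily the conversion prescribed by the authors:

def rating_to_signed_percent(rating, n):
    """Map a rating on a 1..n point scale onto the signed percentage
    scale (-100 % .. +100 %), with the scale midpoint mapped to 0 %.
    Assumed linear mapping for illustration only."""
    if not 1 <= rating <= n:
        raise ValueError("rating must lie between 1 and n")
    midpoint = (n + 1) / 2.0
    return 100.0 * (rating - midpoint) / (midpoint - 1)

# Example: a peer rating of 2 on a 5-point scale maps to -50.0 (%).
print(rating_to_signed_percent(2, n=5))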

Task 3 poses few problems, provided that the foregoing tasks were completed successfully. In Task 1 the instructor calculated a group score (%); now a group spread (%) must also be specified. Together, group score and spread determine a subrange of the percentage scale, called the constrained percentage scale, within which the final student scores will fall. Using a scoring rule, the group score (%) is merged with the student ratings (±%) to calculate the final student scores. Finally, the instructor should check the correctness of the calculations by comparing the mean of the final student scores with the group score: according to the Split-Join-Invariance property of our GPA scoring model, the mean student score must equal the group score.
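
A toy scoring rule with the Split-Join-Invariance property can be sketched as follows. Centring the signed ratings at zero before scaling them by the group spread is one simple way to make the mean student score reproduce the group score; this centring, like the function name, is an assumption of the sketch rather than the authors' actual scoring rule:

def student_scores(group_score, group_spread, signed_ratings):
    """Merge a group score (%) and a group spread (%) with the peers'
    signed percentage ratings (±%) into individual student scores.
    Centring the ratings at zero is an assumption that makes the
    Split-Join-Invariance check below hold exactly."""
    mean_rating = sum(signed_ratings) / len(signed_ratings)
    centred = [r - mean_rating for r in signed_ratings]
    # Each student lands inside the constrained percentage scale
    # [group_score - group_spread, group_score + group_spread],
    # provided the centred ratings stay within -100..+100.
    return [group_score + group_spread * c / 100.0 for c in centred]

scores = student_scores(group_score=75.0, group_spread=10.0,
                        signed_ratings=[50.0, 0.0, -50.0])
print(scores)                     # [80.0, 75.0, 70.0]
# Split-Join-Invariance check: the mean student score equals the group score.
print(sum(scores) / len(scores))  # 75.0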
Keywords:
Group score, group spread, student rating, student score, split-join-invariance, percentage scale, signed percentage scale, constrained percentage score.