SCORING MODELS FOR PEER ASSESSMENT IN TEAM-BASED LEARNING PROJECTS
Research Institute SQUIRE (GERMANY)
About this paper:
Appears in: EDULEARN19 Proceedings
Publication year: 2019
Pages: 9704-9713
ISBN: 978-84-09-12031-4
ISSN: 2340-1117
doi: 10.21125/edulearn.2019.2424
Conference name: 11th International Conference on Education and New Learning Technologies
Dates: 1-3 July, 2019
Location: Palma, Spain
Abstract:
Assessing team-based assignments is not trivial, and it becomes time-consuming as classes grow in number and size. A critical issue is how to move from an overall group score based on the project outcome to separate student scores. One reason is that it is difficult for tutors to “peek into” the learning group during project work to identify what, how much, and how well each member of the group contributes to teamwork and to the project output. Even if the learning objectives have been specified and measured accurately, it is not trivial to provide a valid way of merging the overall assessment of project achievement (i.e., outcomes) with the appraisal of each individual student’s contributions to the team process itself.

Because a reliable measurement framework and approach are essential for a valid and fair summative evaluation of each student’s learning achievement during group work, we started a long-term research project in 2007. Our goal is to clarify the concepts, principles, and methods of team-based assessment. It turns out that there is not one solution to the above measurement problem (the Team-Mate Dilemma), but a large variety of distinct approaches based on different assumptions about how best to assess students involved in group work.

Peer assessment is one approach to team-based assessment that has received a lot of attention over the last decades. It assumes that lecturers are best qualified to score the output of group work, while students are best qualified to rate their peers’ contribution and engagement during group work. In this paper, we do not deal with the empirical question of how well this assumption has been confirmed in practice. Instead, our goal is to move beyond the traditional (additive and multiplicative) scoring rules for peer assessment to new types of scoring models which avoid the pitfalls of earlier approaches (e.g., producing invalid scores) and offer several advantages (e.g., scoring rules derived from first principles).

We have singled out two scoring models which are simple to explain but also show important differences. Here, team (t) and student (s) scores are numbers between 0 and 1, i.e. percentages, e.g. s = .70 = 70%.

In the first model, peer ratings are positive numbers, with or without an upper bound, e.g. numbers from 0 to 5 as often used on a Likert scale. The scoring rule is s = r ⊳ t = 1 – (1–t)^r, pronounced “rating r applied to score t”, where r ⊳ t moves from 0 to (almost) 1 as r grows and equals t if r = 1 (no adjustment). Adjusted scores can be added, multiplied by a scalar, and averaged. However, this model does not constrain adjusted scores to a small region around the team score.

If that is important, e.g. when using a bipolar Likert scale for peer rating, there is another scoring model which looks similar but behaves quite differently: s = r ⊳ t = 1 – (1–t)·(1–r·t), where r runs from –1 to +1. Adjusted scores r ⊳ t now move between t^2 (at r = –1) and 1 – (1–t)^2 (at r = +1), a range symmetric around the team score t = 0 ⊳ t. Again, the scores can be added and multiplied by a scalar, so the quasi-geometric mean is well-defined. The second model is appropriate when team score and peer ratings depend strongly on each other: the peer ratings for each student are thought of as positive or negative adjustments of the lecturer’s score, without which they would have no meaning at all (i.e., both are indicative of the same learning process).
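To make the claimed bounds concrete, here is a short Python sketch of the second rule (again, the function name and example values are ours); it checks numerically that r = 0 reproduces the team score and that the extreme ratings land symmetrically around it:

```python
def adjust_bipolar(r, t):
    """Model 2: s = r >| t = 1 - (1 - t) * (1 - r * t), with r in [-1, 1], t in [0, 1]."""
    if not -1.0 <= r <= 1.0:
        raise ValueError("rating r must lie in [-1, 1]")
    if not 0.0 <= t <= 1.0:
        raise ValueError("team score t must lie in [0, 1]")
    return 1.0 - (1.0 - t) * (1.0 - r * t)

t = 0.70
low  = adjust_bipolar(-1.0, t)   # lower bound: t**2
mid  = adjust_bipolar( 0.0, t)   # r = 0 leaves the team score unchanged
high = adjust_bipolar(+1.0, t)   # upper bound: 1 - (1 - t)**2
# The bounds are symmetric around t: t - low == high - t == t * (1 - t).
print(low, mid, high)
```

For t = 0.70 this confines every adjusted score to [0.49, 0.91], illustrating how the second model keeps peer adjustments in a region around the team score.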

The paper contains complete specifications and justifications for both models.
Keywords:
Project Based Learning, Team Work, Peer Assessment, Assessment by Adjustment, Scoring, Rating, Scaling, Team Mate Dilemma.