LOW STAKES, HIGH REWARD: DEVELOPMENT AND EVALUATION OF AN ONLINE COMPARATIVE JUDGEMENT MARKING SYSTEM
University of Liverpool (UNITED KINGDOM)
About this paper:
Appears in: INTED2024 Proceedings
Publication year: 2024
Pages: 5888-5892
ISBN: 978-84-09-59215-9
ISSN: 2340-1079
doi: 10.21125/inted.2024.1541
Conference name: 18th International Technology, Education and Development Conference
Dates: 4-6 March, 2024
Location: Valencia, Spain
Abstract:
In the ever-evolving landscape of education, the search for more effective, reliable, and efficient methods of assessing students' knowledge and providing constructive feedback has been ceaseless: over 70,000 articles and 6,000 books have been published on this subject in the last decade alone, and grade reliability has been debated for at least the last 100 years (Ashbaugh, 1924). In response, a transformative approach known as Comparative Judgement (CJ) is changing the way educators mark assignments and deliver feedback, offering a dynamic alternative to conventional grading practices (Bartholomew et al., 2019; Potter et al., 2017). Rooted in educational theory and enabled by advances in technology, CJ reimagines the assessment process around the principles of comparison, fairness, and objectivity: rather than grading each piece of work against a fixed scheme, assessors repeatedly judge which of two pieces is better, and a rank order is derived from the accumulated pairwise judgements. The following narrative summarises a formal research project on CJ, which has received University research ethics approval (reference 11092).
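The abstract does not specify how the 'CJ Application' converts pairwise judgements into a rank; a common choice in the CJ literature is the Bradley-Terry model, in which each submission's latent 'quality' is estimated from the win/loss record of its comparisons. The sketch below is a minimal, illustrative Python implementation under that assumption; the function name and judgement data are hypothetical, not the authors' code.

```python
from collections import defaultdict

def bradley_terry_rank(judgements, n_iters=100):
    """Estimate Bradley-Terry quality scores from pairwise judgements.

    judgements: list of (winner, loser) tuples, one per comparison.
    Returns the items sorted from strongest to weakest.
    """
    items = {x for pair in judgements for x in pair}
    wins = defaultdict(int)          # total wins per item
    pair_counts = defaultdict(int)   # comparisons per unordered pair
    for winner, loser in judgements:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1

    scores = {x: 1.0 for x in items}
    for _ in range(n_iters):  # iterative MM updates (Hunter, 2004)
        new = {}
        for i in items:
            denom = sum(
                n / (scores[i] + scores[j])
                for pair, n in pair_counts.items()
                if i in pair
                for j in pair - {i}
            )
            new[i] = wins[i] / denom if denom else scores[i]
        total = sum(new.values())
        scores = {i: s * len(items) / total for i, s in new.items()}

    return sorted(items, key=scores.get, reverse=True)

# Hypothetical judgements: each tuple is (preferred poster, other poster).
judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "B"), ("A", "B")]
print(bradley_terry_rank(judgements))  # 'A' should rank first
```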

The primary research aim is to develop, integrate, and implement a peer learning, assessment, and feedback tool that allows assessment judgements to be made in a 'low-stakes' and efficient manner on undergraduate modules delivered at the University of Liverpool, UK. The first stage is the development and integration of the 'CJ Application' into CANVAS, the University's virtual learning environment (VLE). Following integration, the application will be used as a learning, assessment, and feedback tool. Staff and students from a range of modules in the School of Life Sciences (SoLS) will have the opportunity to be involved in the project.

Both students and staff will use the initial version of the 'CJ Application', which will generate a large collection of feedback and a 'rank' of appraised poster submissions. The same posters will also be marked independently using the 'traditional' approach, and comparisons and analyses will be made between the 'traditional', CJ student, and CJ staff ranks (see the sketch below).
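The abstract does not state which statistic will be used to compare the three ranks; a standard measure of agreement between rank orders is Spearman's rank correlation. The following is a minimal illustrative sketch with hypothetical rank data, not the project's analysis code.

```python
from scipy.stats import spearmanr

# Hypothetical rank positions for the same five posters under each method
traditional = [1, 2, 3, 4, 5]
cj_students = [2, 1, 3, 5, 4]
cj_staff    = [1, 3, 2, 4, 5]

rho, p = spearmanr(traditional, cj_students)
print(f"traditional vs CJ students: rho={rho:.2f}, p={p:.3f}")
rho, p = spearmanr(traditional, cj_staff)
print(f"traditional vs CJ staff:    rho={rho:.2f}, p={p:.3f}")
```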

Additionally, staff and student focus groups will be facilitated to evaluate the appraisal and marking processes. Full and informed participant consent will be sought for all study elements, in alignment with university policy. The research team will also analyse anonymised EVAsys module evaluation data, access analytics from CANVAS (VLE), and the assessment component marks. These records are collected as part of the University's standard procedures for module assessment and evaluation.

All data from CANVAS, the focus groups, and the EVAsys student module evaluations will be used to evaluate the CJ Application for further development, promoting discussion about potential future CJ utility and dissemination. The risks, challenges, and enablers experienced during the research will be discussed, as will the potential for wider 'roll-out' across the university and the HE sector.

References:
[1] Ashbaugh, E. (1924) 'Reducing the variability of teachers' marks', Journal of Educational Research, 9, pp. 185–198.
[2] Bartholomew, S.R. et al. (2019) 'Using adaptive comparative judgment for student formative feedback and learning during a middle school design project', International Journal of Technology and Design Education, 29(2), pp. 363–385.
[3] Potter, T. et al. (2017) 'ComPAIR: A New Online Tool Using Adaptive Comparative Judgement to Support Learning with Peer Feedback', Teaching and Learning Inquiry, 5(2), pp. 89–113.
Keywords:
Comparative judgement, grade reliability.