ENHANCING PEER REVIEW THROUGH AI: CHATGPT VERSUS HUMAN EXPERTISE IN ASSESSING SINGLE-CASE EXPERIMENTAL RESEARCH STUDIES
University of North Carolina Greensboro (UNITED STATES) / Ondokuz Mayis University (TURKEY)
About this paper:
Appears in: EDULEARN24 Proceedings
Publication year: 2024
Page: 8642 (abstract only)
ISBN: 978-84-09-62938-1
ISSN: 2340-1117
doi: 10.21125/edulearn.2024.2080
Conference name: 16th International Conference on Education and New Learning Technologies
Dates: 1-3 July, 2024
Location: Palma, Spain
Abstract:
Peer review stands as a cornerstone in the validation of research findings and the maintenance of academic excellence. Despite its vital role, it faces challenges such as reviewer reluctance, variable review durations, and prolonged publication decision timelines. The integration of Artificial Intelligence (AI), particularly ChatGPT, offers promise in augmenting the peer review process by enhancing efficiency and objectivity.

In this study, we compared peer reviews produced by human experts and by ChatGPT for 18 single-case research design (SCRD) manuscripts in the fields of special education and psychology. We then assessed the concordance between the two sets of reviews in manuscript quality ratings and publication recommendations.
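The abstract does not specify how concordance was quantified. As a purely illustrative sketch, not the authors' method, agreement between two raters on ordinal quality scores and binary publication recommendations is often summarized with Cohen's kappa and percent agreement; the ratings below are hypothetical placeholders, not data from the study.

```python
# Illustrative sketch of quantifying rater concordance (hypothetical data only).
from sklearn.metrics import cohen_kappa_score

# Hypothetical quality ratings (1 = poor ... 4 = excellent) for 18 manuscripts
human_ratings   = [3, 4, 2, 3, 4, 1, 2, 3, 4, 3, 2, 4, 3, 1, 2, 3, 4, 2]
chatgpt_ratings = [3, 4, 2, 3, 3, 1, 2, 3, 4, 3, 2, 4, 2, 1, 2, 3, 4, 3]

# Weighted kappa credits near-misses on an ordinal quality scale
kappa = cohen_kappa_score(human_ratings, chatgpt_ratings, weights="quadratic")

# Simple percent agreement for hypothetical accept/reject recommendations
human_decisions   = ["accept", "reject", "accept", "accept", "reject", "reject"]
chatgpt_decisions = ["accept", "accept", "accept", "reject", "reject", "accept"]
percent_agreement = sum(h == c for h, c in zip(human_decisions, chatgpt_decisions)) / len(human_decisions)

print(f"Weighted kappa (quality ratings): {kappa:.2f}")
print(f"Percent agreement (decisions): {percent_agreement:.0%}")
```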

Our findings revealed a substantial level of agreement between human reviewers and ChatGPT in assessing the quality of the manuscripts, suggesting that ChatGPT can effectively assist in this aspect when guided by structured rubrics. However, the two diverged noticeably in their publication recommendations. This discrepancy underscores the nuanced nature of publication decisions, which are influenced by factors such as subjectivity, domain expertise, and contextual understanding.

This study highlights the importance of adopting a balanced approach to leverage the strengths of AI while respecting human expertise in peer review practices. While ChatGPT shows promise in enhancing the efficiency and objectivity of manuscript evaluations, its integration into the peer review process must be carefully guided by structured rubrics and guidelines. Additionally, it is crucial to recognize the indispensable role of human judgment and expertise in making nuanced publication decisions.

These findings have practical implications, pointing to the need to refine AI-assisted peer review and integrate it effectively into existing workflows. Future research should explore approaches that strengthen collaboration between AI and human reviewers in the peer review process, ultimately advancing academic excellence and knowledge dissemination.
Keywords:
Peer review, Artificial Intelligence (AI), Manuscript evaluation, Single-case research design (SCRD), ChatGPT.