HARNESSING NATURAL LANGUAGE PROCESSING TO SUPPORT CURRICULUM ANALYSIS
The University of British Columbia (CANADA)
About this paper:
Appears in: ICERI2020 Proceedings
Publication year: 2020
Pages: 1779-1784
ISBN: 978-84-09-24232-0
ISSN: 2340-1095
doi: 10.21125/iceri.2020.0445
Conference name: 13th annual International Conference of Education, Research and Innovation
Dates: 9-10 November, 2020
Location: Online Conference
Abstract:
In this paper we describe an innovative pilot implementation of a natural language processing (NLP) tool to support curriculum analysis. ‘Curriculum’ refers not only to course content, but also to the processes by which learners achieve learning goals, to educator activity that supports learning, and to the context in which teaching and learning occur. But how do we know whether our curriculum effectively represents current thinking in our field? How can we identify gaps in curriculum, uncover unhelpful redundancies between courses, or discover which elements of curriculum may be out of date or missing? In higher education, curriculum analysis and mapping processes are now routinely employed to answer such questions, and to evaluate and improve the scope and quality of curricula.

A variety of frameworks have been proposed to guide such processes. These typically direct reviewers to read course syllabi closely and manually ‘map’ the ways that course elements contribute to program outcomes. This is slow and laborious work, however, and the quality, accuracy and currency of syllabi may be variable or limited. Syllabus documents typically outline course goals and summarize themes and learning materials at a high level, offering only a thin overview of the curriculum.

It has been argued, on the other hand, that the richest source of deep understanding of student learning is the text generated to support teaching and learning, and higher education remains an overwhelmingly textually mediated experience. Moreover, when courses are offered in a fully online format, all elements of the curriculum – content, learning activities, educator activity, context – are predominantly constructed in text. Automated analysis of more extensive course text materials – and even complete course content – might therefore support faster, more nuanced and more systematic analysis of curriculum.

To investigate and test this possibility, we have built a proof-of-concept tool called CRAgent that harnesses the power of NLP for curriculum analysis. NLP draws on insights from the field of computational linguistics and offers increasingly accurate approaches to analysing human language; NLP algorithms allow rapid automated analysis of large volumes of text. Our CRAgent tool combines the NLP power of IBM Watson (which distills concepts and categories from text) with the sophisticated and rapid search processes provided by Elasticsearch. CRAgent allows users to rapidly search course and program content for themes and concepts, beyond simple word matches, supporting more sophisticated mapping of curriculum against stated course and program goals.
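The pipeline described above – NLP-extracted concepts used to drive full-text search – can be illustrated with a minimal sketch. The function and field names below (`concept_query`, `course_text`) are hypothetical, not taken from the paper; the sketch only shows how an extracted concept might be turned into an Elasticsearch-style match query, going beyond a literal word lookup by letting the search engine analyse and score the text.

```python
# Hypothetical sketch: turn a concept extracted by an NLP service into an
# Elasticsearch-style match query body. The index field name "course_text"
# is an illustrative assumption, not from the paper.
def concept_query(concept: str, field: str = "course_text") -> dict:
    """Build a query body that matches course text against one concept."""
    return {"query": {"match": {field: {"query": concept}}}}

body = concept_query("formative assessment")
print(body)
```

In practice, such a body would be passed to the Elasticsearch client's search API, and the hit list for each concept would indicate which courses engage with it.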

In this paper, we report on a pilot analysis of the text content of 17 online courses in our fully online master’s program. We evaluate the potential for CRAgent to support curriculum mapping and review processes, and demonstrate that CRAgent can rapidly identify mismatches between stated course outcomes and actual course content, guiding further focussed review and potential curriculum redesign. We also discuss future work with CRAgent to support additional aspects of curriculum review.
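The mismatch detection described here can be sketched in a few lines. This is an illustrative assumption, not the authors' implementation: it simply flags any stated outcome whose associated key concepts never appear among the concepts extracted from the course's actual content, which is one plausible way to surface candidates for focussed review.

```python
# Illustrative sketch (not the authors' implementation): flag stated course
# outcomes whose key concepts have no overlap with the set of concepts
# extracted from the course's actual text content.
def outcome_gaps(stated: dict, extracted: set) -> list:
    """Return labels of outcomes with zero concept overlap in course content."""
    return [label for label, concepts in stated.items()
            if not concepts & extracted]

# Hypothetical example data: two stated outcomes and the concepts an NLP
# service extracted from the course text.
stated = {"LO1: design assessments": {"assessment", "rubric"},
          "LO2: apply learning theory": {"constructivism", "behaviourism"}}
extracted = {"assessment", "curriculum", "feedback"}
print(outcome_gaps(stated, extracted))  # ['LO2: apply learning theory']
```

A real tool would of course need fuzzier matching than exact set intersection, but the sketch captures the shape of the outcome-to-content comparison.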
Keywords:
Curriculum analysis, curriculum mapping, learning analytics, text analysis, learning design, natural language processing, NLP.