CROWD-SOURCING ASSESSMENTS IN A DEVELOPING COUNTRY: WHY DO TEACHERS CONTRIBUTE?
1 American University of Sharjah (UNITED ARAB EMIRATES)
2 Teletaleem (PAKISTAN)
About this paper:
Appears in: EDULEARN13 Proceedings
Publication year: 2013
Pages: 1-10
ISBN: 978-84-616-3822-2
ISSN: 2340-1117
Conference name: 5th International Conference on Education and New Learning Technologies
Dates: 1-3 July, 2013
Location: Barcelona, Spain
Abstract:
The overwhelming success of Wikipedia has tempted many researchers to apply crowd-sourcing in a variety of domains, and learning is no exception. However, not all crowd-sourcing projects are as successful as Wikipedia; most fail. Failure to recreate Wikipedia's success in other mass-collaboration projects can primarily be attributed to contributors' unwillingness to share quality knowledge, which in turn leads end-users to ignore the knowledge that is contributed. Prior research has shown that simple remedies such as financial compensation do not work; contributors of knowledge are often driven by intrinsic motivational factors such as ego. Crowd-sourcing has been proposed for collecting assessments, or test questions, in developing countries where quality assessments are not readily available because textbooks are often poorly written and teachers are generally ill-trained to write quality questions themselves. This paper presents the results of an empirical study into why teachers in a developing country would contribute questions to a crowd-sourcing system organized around the learning outcomes stated in the national curriculum. A prototype crowd-sourcing application based on MediaWiki (the same platform as Wikipedia) and the QTI assessment standard was implemented and deployed in a developing country. Over 100 Grade VI Math teachers in both public and private schools agreed to participate in the study, which lasted six months. A teacher could log in and contribute multiple-choice questions for any of the topics in the national curriculum. Longitudinal data on the voluntary contribution behavior of these teachers were collected; the teachers contributed a total of about 1,000 questions to the wiki over the six-month period. A survey instrument was designed and deployed to determine the reasons behind actual contributions and the intent to contribute to such a crowd-sourcing system. The instrument covered a host of factors known to affect contribution behavior: individual characteristics (lead-user characteristics and persistence), individual motivation (altruism, empathy, and reputation), social capital factors (trust, identification, and reciprocity), and adoption factors (performance expectation, compatibility, attitude towards technology, self-efficacy, facilitating conditions, and social influence). Not surprisingly, the predicted “long tail” effect was observed: coverage of the curriculum was asymmetric, driven by the interests of the contributors. In addition, only 20 teachers actively participated in contributing questions. The quality of the contributed questions was also analyzed to identify notable patterns. Finally, the paper presents recommendations for the subsequent full-scale deployment of such a system based on insights gained from this study.
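The abstract names the QTI assessment standard as the format for teacher-contributed multiple-choice questions. As an illustration only (the snippet below is not taken from the paper), the following Python sketch builds a minimal QTI 2.1-style assessmentItem containing a single choiceInteraction; the element and attribute names follow the QTI 2.1 specification, while the helper function, question text, choice identifiers, and item identifier are hypothetical.

# Illustrative sketch (not from the paper): how a contributed multiple-choice
# question might be serialized as an IMS QTI 2.1 assessmentItem.
# The question text, choices, and identifiers below are hypothetical examples.
import xml.etree.ElementTree as ET

QTI_NS = "http://www.imsglobal.org/xsd/imsqti_v2p1"
ET.register_namespace("", QTI_NS)

def build_choice_item(item_id, prompt, choices, correct_id):
    """Build a single-choice QTI item from a teacher-contributed question."""
    item = ET.Element(f"{{{QTI_NS}}}assessmentItem", {
        "identifier": item_id, "title": prompt[:40],
        "adaptive": "false", "timeDependent": "false",
    })
    # Declare the response variable and record the correct answer.
    decl = ET.SubElement(item, f"{{{QTI_NS}}}responseDeclaration", {
        "identifier": "RESPONSE", "cardinality": "single", "baseType": "identifier",
    })
    correct = ET.SubElement(decl, f"{{{QTI_NS}}}correctResponse")
    ET.SubElement(correct, f"{{{QTI_NS}}}value").text = correct_id
    # Item body: the prompt plus the selectable choices.
    body = ET.SubElement(item, f"{{{QTI_NS}}}itemBody")
    interaction = ET.SubElement(body, f"{{{QTI_NS}}}choiceInteraction", {
        "responseIdentifier": "RESPONSE", "shuffle": "true", "maxChoices": "1",
    })
    ET.SubElement(interaction, f"{{{QTI_NS}}}prompt").text = prompt
    for cid, text in choices:
        choice = ET.SubElement(interaction, f"{{{QTI_NS}}}simpleChoice", {"identifier": cid})
        choice.text = text
    return ET.tostring(item, encoding="unicode")

# Hypothetical Grade VI Math question tagged to a curriculum learning outcome.
print(build_choice_item(
    "math6-fractions-001",
    "Which fraction is equivalent to 2/4?",
    [("A", "1/2"), ("B", "2/3"), ("C", "3/4"), ("D", "1/4")],
    "A",
))

Keeping each question as a self-contained XML document is the usual QTI pattern and makes individual items straightforward to exchange between QTI-compliant systems.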
Keywords:
Developing world, crowd-sourcing, adoption models, assessment.