AUTOMATED SCORING OF EFL LEARNERS’ WRITTEN PERFORMANCE: A TORTURE OR A BLESSING?
1 Islamic Azad University, Tehran North Branch (IRAN)
2 Islamic Azad University, Tehran South Branch (IRAN)
About this paper:
Appears in:
EDULEARN14 Proceedings
Publication year: 2014
Pages: 5146-5155
ISBN: 978-84-617-0557-3
ISSN: 2340-1117
Conference name: 6th International Conference on Education and New Learning Technologies
Dates: 7-9 July, 2014
Location: Barcelona, Spain
Abstract:
Automated Writing Evaluation (AWE) systems, which score essays and generate feedback, have been developed to meet the challenge of evaluating learners’ written performance. With the growing use of commercial AWE products and their subsequent educational impacts, the question of their usefulness has become ever more relevant. However, few independent studies have examined AWE’s instructional usefulness, and their findings have been conflicting. This study therefore compared the effects of automated and human scoring on the writing ability of EFL learners. The participants were 22 advanced EFL learners divided equally into an experimental group and a control group. The 11-week treatment period commenced with an essay writing pre-test to ensure the groups’ homogeneity in writing ability. Both groups were taught by the same teacher using the same materials; however, the experimental group used the AWE program My Access! to score their essays. At the end of the treatment, the students took an essay writing post-test to evaluate their progress in writing. A questionnaire was also given to the experimental group to elicit their attitudes towards AWE. Lastly, the students took a delayed, timed paper-and-pencil essay writing post-test to investigate the impact of the treatment in a formal test situation. The results revealed that the experimental group significantly outperformed the control group on the post-test, although there was no significant difference between the groups’ mean essay lengths. Similarly, the performance of the two groups on the delayed timed post-test was nearly identical. The students’ attitudes towards AWE were also found to be generally positive. In sum, it was concluded that AWE was effective in prompting more revisions and boosting learners’ writing confidence, and that it could help struggling writers reduce sentence-level errors. AWE did not, however, lead to longer essays or improved macro-level features. Finally, AWE could not replace the teacher or lead learners towards autonomy, as it lacked the human interaction essential to writing assessment. Thus, EFL writers can best benefit from it as a preliminary assessor before they submit their work for teacher assessment. However, even as a supplement, AWE’s success seems to depend on the educational context, goals, and constraints.
***This is an extended, modified version of a paper presented at the 6th edition of the international conference “ICT for Language Learning” in Florence, Italy, in November 2013.***