K. Petchko1, J. Elwood2, G. O'Neill3

1National Graduate Institute for Policy Studies (JAPAN)
2Meiji University (JAPAN)
3Hitotsubashi University (JAPAN)
The recent rethinking of second-language writing assessment has led to significant changes in how writing assessment is conceptualized, designed, and implemented. There is now a greater emphasis on authenticity in test development, on the use of integrated writing-from-sources tasks, and on the instructional effects of testing, or washback. Much of the work on writing assessment, however, has been limited to ESL settings, especially in the U.S. and Australia, and to undergraduate programs. Research is lacking on authenticity, washback, and the use of integrated tasks in multinational EFL assessment contexts, particularly in graduate programs. Yet available scholarship in writing assessment (Condon, 2013; Horowitz, 1991; Kroll & Reid, 1994; Shay, 2005) highlights the importance of the local context, that is, of knowing the people, standards, and curriculum practices involved in writing, to the development of writing tests capable of furnishing useful information about test-takers. In this presentation, we describe the work we have done to develop an in-house placement test of English academic writing for a multinational student body at a graduate school in Japan, an EFL context. The development process has spanned several years as the test evolved in three stages: from a traditional, commercially available, prompt-based test assessing general-purpose writing skills, to an in-house integrated reading-writing test focusing on test-takers' ability to analyze arguments, and, finally, to a locally designed, authentic, source-based assessment of disciplinary writing skills. The focus of this presentation is on explaining the rationale behind the decisions we made in the test development process, decisions motivated by three overarching considerations: test authenticity, the construct of academic writing, and washback.
This presentation is expected to broaden participants’ understanding of authentic writing assessment and the challenges involved in the development of authentic tests of academic writing.

[1] Condon, W. (2013). Large-scale assessment, locally-developed measures, and automated scoring of essays: Fishing for red herrings? Assessing Writing, 18, 100–108.
[2] Horowitz, D. (1991). ESL writing assessments: Contradictions and resolutions. In L. Hamp-Lyons (Ed.), Assessing second language writing in academic contexts (pp. 71–85). Norwood, NJ: Ablex.
[3] Kroll, B., & Reid, J. (1994). Guidelines for designing writing prompts: Clarifications, caveats, and cautions. Journal of Second Language Writing, 3, 231–255.
[4] Shay, S. (2005). The assessment of complex tasks: A double reading. Studies in Higher Education, 30(6), 663–679.