A QUANTITATIVE ANALYSIS OF STUDY ABROAD PROCESS EVALUATION
The number of tertiary students studying abroad more than doubled worldwide from 1988 to 2016 (OECD 2018). Previous research acknowledged the importance of study abroad assessment (Engle et al. 2003/2004, Savicki et al. 2015) and proposed a method for evaluating the study abroad process with respect to the expected study abroad outcome (Kotani et al. 2018).
Evaluation during study abroad can be carried out over the Internet between students abroad and teachers at the home institution. Teachers ask students questions about the study abroad process and give them comments and instructions to improve it. Teachers can use a rubric (AAC&U 2016), which involves asking between four and 24 questions, such as "Does a student effectively address significant issues in the natural and human world based on articulating his/her identity in a global context?" This task costs teachers considerable time and effort.
However, the evaluation method of Kotani et al. (2018) lightens this task by reducing and simplifying the questions. The study abroad process was examined with 10 multiple-choice questions and 5 open-ended questions, such as "Do you think you want to study languages further and improve your communication skills?" and "What can you do with the language you have learned? Provide detailed examples." The multiple-choice questions include an eight-point question directly addressing the study abroad process: "To what extent did you achieve what you had hoped for before studying abroad?"
The present study investigated the reliability of these questions (Kotani et al. 2018) using the Cronbach α coefficient of internal consistency, which indicates whether students' answers to different questions yield similar results. The coefficient ranges from 0.0 (no reliability) to 1.0 (perfect reliability); a value of 0.8 or higher is considered an acceptable level of reliability.
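The Cronbach α coefficient can be computed from the number of items k, the variance of each item's scores, and the variance of the respondents' total scores. The following sketch illustrates the standard formula on made-up data (the function and sample scores are illustrative, not the study's materials):

```python
# Minimal sketch of Cronbach's alpha:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))

def cronbach_alpha(scores):
    """scores: one row per respondent, each row a list of k item scores."""
    k = len(scores[0])

    def var(xs):
        # Population variance of a list of numbers.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Variance of each item, taken across respondents.
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    # Variance of each respondent's total score.
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Perfectly consistent (hypothetical) answers yield alpha = 1.0.
example = [[1, 1, 1], [2, 2, 2], [3, 3, 3]]
print(round(cronbach_alpha(example), 2))  # 1.0
```

When items measure unrelated things, the item variances dominate the total-score variance and α falls toward (or below) zero, which is why adding an inconsistent item can lower the coefficient for the whole question set.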
In place of study abroad process data, study abroad outcome data were used; these were collected from 334 students who had completed study abroad.
The experiment showed:
(1) an acceptable reliability in the sentence length of open-ended answers (α = 0.81),
(2) an unacceptable reliability in the multiple-choice questions (α = 0.73), excluding the direct question about the study abroad process, and
(3) unreliability of the direct question about the study abroad process, because adding this question decreased the reliability of both the multiple-choice questions (α = 0.32) and the sentence length (α = 0.79). These results suggest the need to improve the multiple-choice questions, especially the direct question.
AAC&U (2016) Global Learning VALUE Rubric, retrieved from https://www.aacu.org/value/rubrics/global-learning.
L. Engle & J. Engle (2003) Study Abroad Levels: Toward a Classification of Program Types, Frontiers: The Interdisciplinary Journal of Study Abroad, vol. 9, no. 1, pp. 1–20.
L. Engle & J. Engle (2004) Assessing Language Acquisition and Intercultural Sensitivity Development in Relation to Study Abroad Program Design, Frontiers: The Interdisciplinary Journal of Study Abroad, vol. 10, pp. 219–236.
 K. Kotani & M. Uchida (2018) A Study-abroad Supporting Tool for the Acquisition of Study Abroad Outcomes, ICERI2018 Proceedings, pp. 1471–1475.
 OECD (2018) Education at a Glance 2018: OECD Indicators. Paris: OECD Publishing.
V. Savicki & E. Brewer (Eds.) (2015) Assessing Study Abroad: Theory, Tools, and Practice. Virginia: Stylus Publishing.