COURSE EVALUATION SURVEY RESPONSE PATTERNS
1 Honam University (KOREA, REPUBLIC OF)
2 Emory University (UNITED STATES)
About this paper:
Appears in:
EDULEARN12 Proceedings
Publication year: 2012
Pages: 3627-3635
ISBN: 978-84-695-3491-5
ISSN: 2340-1117
Conference name: 4th International Conference on Education and New Learning Technologies
Dates: 2-4 July, 2012
Location: Barcelona, Spain
Abstract:
This study examined the prevalence of monotonic response patterns (MRPs), i.e., identical numerical ratings across all items, in an online evaluation of a foreign language course, and whether their likelihood is related to student- and class-level variables. Using the frameworks of Barnette (1996; 1999) and Krosnick (1991; 1999), this study focused on the types and prevalence of response patterns that may suggest “nonattending” and “satisficing” behaviors in an online evaluation of a foreign language course taught by native English speakers. Satisficing is the term that Simon (1957; cited in Krosnick, 1991) originally used as an alternative to the theory of “rational choice” (March & Simon, 1959; Simon, 1957). According to Simon, “most of the time human decision-making processes rely on finding a satisfactory choice, which satisfies the need, i.e. meets the minimum requirement, rather than on finding the best alternatives” (cited in Kaminska et al., 2010). Because satisficing can occur even when the survey participation rate is high, a high response rate alone does not guarantee the quality of the data.
Specifically, this study explored the credibility of internal consistency reliability with respect to the process of choosing identical responses, using hierarchical generalized linear modeling (HGLM). The study also tested whether certain student- and class-level characteristics, including course grade, freshman status, gender, and the number of survey participants in a class, are associated with the likelihood of these response patterns, and whether some of these patterns indicate satisficing that undermines the credibility, reliability, and validity of the responses.
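As a rough illustration of this kind of two-level model (a sketch, not the paper's actual specification), the code below fits a mixed-effects logistic regression in which the outcome is whether a student produced an MRP, student-level predictors enter as fixed effects, and class sections contribute random intercepts. All column names (mrp, freshman, female, grade, class_size, section) and the data file are hypothetical.

import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical data frame: one row per student.
#   mrp        1 if the student gave the same rating on every item, else 0
#   freshman   1 for freshmen, 0 for sophomores/upperclassmen
#   female     1 for female students, 0 otherwise
#   grade      course grade (numeric)
#   class_size number of survey participants in the student's section
#   section    section identifier (the level-2 grouping)
df = pd.read_csv("evaluations.csv")  # placeholder data source

# Two-level logistic model: student-level fixed effects plus a random
# intercept for each section, fit by variational Bayes.
model = BinomialBayesMixedGLM.from_formula(
    "mrp ~ freshman + female + grade + class_size",
    {"section": "0 + C(section)"},
    data=df,
)
result = model.fit_vb()
print(result.summary())

A positive fixed-effect coefficient for freshman in such a model would correspond to the finding reported below that freshmen were more likely to produce MRPs.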
The data for this study came from an end-of-semester online evaluation of a Practical English course taken by 1,049 college students in 43 sections, out of 230 Practical English sections offered at the target university in the fall semester of 2005.
In this study, the analysis focused on responses to the thirteen common items answered by students across all disciplines (i.e., whether or not students chose the same numerical response for every question). Eleven of the thirteen items were constructed on a five-point Likert-type scale, in which “1” represents “strongly disagree,” “2” represents “disagree,” “3” represents “neutral/average,” “4” represents “agree,” and “5” represents “strongly agree.” The remaining two items did not ask for responses on the continuum of “strongly disagree” to “strongly agree.” Instead, on one item, “What was the pace of the course?,” the responses ranged from “very slow” to “very fast.” On the other item, “How do you feel about the course workload?,” the responses ranged from “too little” to “too much.” Thus, higher scores on these two items would not reflect the respondent’s positive perceptions of the course as they would on the other 11 items. The results indicated that 38% of the students chose the same option number for all thirteen items. Holding other variables constant, freshmen were more likely than sophomores and upperclassmen to choose monotonic response patterns in the course evaluation. Among the monotonic response patterns identified, the mono-extreme patterns (all 1s or all 5s on the 1-5 scale) suggest inconsistent ratings, since the two pace and workload items are not keyed in the same direction as the other eleven. Implications of the results and the analytic approaches are discussed.
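The pattern checks described above are straightforward to operationalize. The sketch below, using made-up toy data rather than the study's dataset, flags MRPs (the same option on all thirteen items) and the mono-extreme subset (straight-lining at a scale endpoint); the column names are hypothetical.

import pandas as pd

# Toy illustration: one row per student, columns item1..item13 holding
# the 1-5 option chosen for each of the thirteen common items.
items = [f"item{i}" for i in range(1, 14)]
responses = pd.DataFrame(
    [[3] * 13,                                   # monotonic (all 3s)
     [5] * 13,                                   # mono-extreme (all 5s)
     [4, 4, 5, 4, 3, 4, 4, 5, 4, 4, 3, 2, 3]],  # non-monotonic
    columns=items)

# A monotonic response pattern (MRP): exactly one distinct value per row.
is_mrp = responses[items].nunique(axis=1) == 1

# Mono-extreme patterns: MRPs sitting at a scale endpoint (all 1s or all 5s).
is_mono_extreme = is_mrp & responses[items].min(axis=1).isin([1, 5])

print(f"MRP prevalence: {is_mrp.mean():.0%}")
print(f"Mono-extreme prevalence: {is_mono_extreme.mean():.0%}")

Run on the study's data, the first check would identify the 38% of students reported above; the second isolates the extreme straight-liners whose ratings are internally inconsistent given the reverse-direction pace and workload items.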
Keywords:
Course evaluation, monotonic response pattern, hierarchical generalized linear modeling.