EVALUATING THE RELATIONSHIP BETWEEN PRACTICE AND ACTUAL EXAMS IN AN INTRODUCTORY PSYCHOLOGY COURSE
University of Minnesota, Twin Cities (UNITED STATES)
About this paper:
Appears in: EDULEARN18 Proceedings
Publication year: 2018
Pages: 5275-5281
ISBN: 978-84-09-02709-5
ISSN: 2340-1117
doi: 10.21125/edulearn.2018.1278
Conference name: 10th International Conference on Education and New Learning Technologies
Dates: 2-4 July, 2018
Location: Palma, Spain
Abstract:
In a recent report on innovations in college teaching, Cary (2014) cited new research suggesting that practice exams improve student learning. The value of practice exams has been assessed for over three decades, and researchers have made several important findings during that period (e.g., Balch, 1998; Brothen, 1996; Brothen, Lv, & Bai, 2015; Gurung, 2008; Knaus, Murphy, & Holmes, 2009; Kulik, Kulik, & Bangert-Drowns, 1984; Lee-Sammons & Wollen, 1989; Maki & Serra, 1992). Generally, students learn more if they receive feedback on their current knowledge and on how much additional study they need. Practice exams serve that purpose efficiently and effectively, with practice exam scores correlating highly (on the order of .70) with actual exam scores (Brothen et al., 2015). This study provides instructors with information about the effectiveness of practice exams and about how students can use them to improve their course performance.

Data for this study came from a large introductory psychology course taught in three formats each semester via the Moodle course management system. The first format employed live lectures three days each week, and the second utilized recorded online lectures; both of these formats had live discussions once each week. In the third format, lectures and discussions were completely online. Students in all three formats completed practice exams and other activities online and took their actual exams (three 50-item multiple-choice midterm exams and one 100-item multiple-choice final exam) in a computerized testing center monitored by proctors. Students could take practice exams, which drew randomly from a large item pool, as many times as they liked. The practice exams corresponded to each actual exam and were available at any time on any Internet-connected computer, both before and during the several-day actual exam period.

At the end of each semester, data were routinely collected on practice and actual exam performance, academic ability variables, and the Big Five personality test for all students. For this study, we aggregated five years (10 semesters) of data from a course taught nearly identically each semester. The course enrolled approximately 1,000 students per semester, yielding a very large database of approximately 10,000 students. This large sample allows us to find evidence for relationships that may not be visible in a single semester's data.

In this report, we focus on two questions. First, what practice exam taking strategies (e.g., how many attempts, or on what schedule) are associated with better student performance on actual exams? Second, are there interesting and useful interactions with measures of personality and ability that indicate specifically who will benefit more? In single classes of typical size, this is difficult to determine because the subgroups defined by specific student characteristics are small. The large data set used in this study provides the opportunity to address these questions.

In this presentation, we will discuss data that bear on the above questions. We will also provide a profile of students' practice exam taking behaviors and of the relevant personality and ability characteristics that affect these behaviors. In addition, we will describe how our data suggest optimal test taking strategies for different groups of students. Finally, we will describe the statistical procedures we used, to assist others doing similar research.
Keywords:
Practice exams, testing, learning strategies.