M. Milner-Bolotin1, J. Cha2, K. Hunter1

1The University of British Columbia (CANADA)
2Daegu University (KOREA, REPUBLIC OF)
In recent decades, educational technologies such as electronic classroom-response systems have gradually entered elementary and secondary science classrooms (Duncan, 2005; Milner-Bolotin, 2004). These technologies allow teachers to ask multiple-choice questions and to collect, analyze, and display students’ responses instantaneously. However, in order to benefit from this technology, science teachers must learn how to ask meaningful and powerful multiple-choice science questions that generate student discussion, challenge students’ misconceptions, and facilitate science learning. It is important to note that the pedagogical characteristics of effective multiple-choice questions suited for classroom-response systems are very different from those of good exam questions, homework problems, and in-class worked examples. While the design of effective conceptual science questions for the technology-enriched classroom has been widely discussed at the undergraduate level (Beatty, Gerace, Leonard, & Dufresne, 2006), little attention has been paid to elementary and secondary science teachers and their competency in designing effective multiple-choice science questions (Beatty et al., 2008). This paper describes the design and evaluation of the Multiple-Choice Science Question Rubric (MCSQR), which has been developed to help pre-service elementary (K-8) teachers not only construct but also evaluate their science questions. While effective multiple-choice questions can be used with electronic classroom-response systems, they can also be implemented in a low-tech classroom (Lasry, 2008); the use of the Rubric therefore extends beyond the technology-enabled classroom. Hence, while designing the Rubric, we deliberately decided not to focus on teachers’ technological knowledge, as we believe that teachers’ ability to ask meaningful and powerful science questions extends beyond any one specific technology.
The MCSQR focuses on two domains of teachers’ knowledge that are built during the teacher education program: pedagogical knowledge and content knowledge (Koehler & Mishra, 2009). The MCSQR was applied to evaluate 83 science questions designed by pre-service elementary science teachers in the context of a one-year Teacher Education Program. Feedback on the question ratings provided by a number of independent raters (science and mathematics educators) helped validate and improve the MCSQR instrument. In this paper we discuss the process of designing and validating the MCSQR, and propose suggestions for helping pre-service science teachers learn how to design and evaluate multiple-choice science questions that they can implement in their teaching practice.