DIGITAL LIBRARY
RAPID CYCLE STUDIES FOR EDUCATIONAL TECHNOLOGY RESEARCH & DEVELOPMENT
American Institutes for Research (UNITED STATES)
About this paper:
Appears in: INTED2017 Proceedings
Publication year: 2017
Page: 7465 (abstract only)
ISBN: 978-84-617-8491-2
ISSN: 2340-1079
doi: 10.21125/inted.2017.1730
Conference name: 11th International Technology, Education and Development Conference
Dates: 6-8 March, 2017
Location: Valencia, Spain
Abstract:
Developers of educational technologies need high-quality, timely information about what is and is not working for teachers and students, and about what can be improved and how. This paper describes a set of approaches to applying rigorous evaluation methods to guide continuous improvement and measure the impact of educational technologies at any stage of development. In particular, this paper will discuss the use of rigorous, flexible, and rapid-cycle studies to “optimize” programs and their features.

We propose that strong developer-researcher partnerships that originate early in the development process can produce more robust and effective educational technologies. Our approach to R&D is guided by the Multiphase Optimization Strategy (MOST; Collins, Murphy, & Strecher, 2008). MOST is designed to be practical, allowing developers to create more potent programs through strategic testing of the program during the development and initial implementation process.

In particular, optimization helps tease out which components of the intervention work as intended and which do not. In the optimization phase of development, rapid-cycle experiments can examine which features or versions of a product or program are most effective, or which implementation models work best for different types of users.

Rapid-cycle experiments used for R&D are shorter than typical evaluations (e.g., weeks or months instead of years), test specific aspects of a program to inform further development, and focus on more proximal outcomes, such as initial uptake or student engagement. For example, A-B testing in the field can give early information about the relative effectiveness of two or more features of the program, for different types of students. Factorial experiments can examine whether components of the program are more effective when combined with other components, or not.
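To make the A-B testing idea concrete, here is a minimal sketch of a rapid-cycle field experiment: students are randomized to one of two feature variants and uptake rates are compared with a two-proportion z-test. The variant names, uptake probabilities, and sample size are illustrative assumptions, not details from the paper.

```python
import math
import random

random.seed(42)

def assign_variant(student_id):
    """Randomly assign a student to feature variant 'A' or 'B'."""
    return random.choice(["A", "B"])

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """z-statistic for the difference between two uptake proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Simulated uptake: variant B is assumed (for illustration only) to be
# more engaging than variant A.
true_uptake = {"A": 0.30, "B": 0.45}
counts = {"A": [0, 0], "B": [0, 0]}  # [successes, n] per variant

for student_id in range(2000):
    v = assign_variant(student_id)
    counts[v][1] += 1
    counts[v][0] += int(random.random() < true_uptake[v])

z = two_proportion_z(counts["A"][0], counts["A"][1],
                     counts["B"][0], counts["B"][1])
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at the 5% level
```

A factorial version of the same study would cross two or more such feature factors and randomize students to every combination, so that interactions between components can be estimated from the same sample.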

Other innovative research designs for the purpose of testing and improving educational technologies include sequential multiple-assignment randomized trials (SMARTs). SMARTs are used to study adaptive interventions that change as a function of students’ response to program components. SMART studies, increasingly common in other fields including health and prevention, have great potential applicability to educational technologies, many of which are designed to be adaptive. Results from SMARTs allow developers to optimize their program by identifying the most effective adaptive pathways for different types of learners.
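The two-stage logic of a SMART can be sketched as follows. Note that the stage names, the 4-week response rule, and the second-stage options are hypothetical placeholders, not details of the authors' text-messaging study.

```python
import random

random.seed(0)

def smart_assign(student_id):
    """Two-stage SMART: only non-responders are re-randomized at stage 2."""
    # Stage 1: randomize every student to an initial message frequency.
    stage1 = random.choice(["weekly_messages", "daily_messages"])
    # Tailoring variable (hypothetical): did the student respond by week 4?
    responded = random.random() < 0.5
    if responded:
        # Responders continue with their initial component.
        stage2 = "continue"
    else:
        # Non-responders are re-randomized between two rescue strategies.
        stage2 = random.choice(["add_coach_followup", "switch_to_quizzes"])
    return {"stage1": stage1, "responded": responded, "stage2": stage2}

trajectories = [smart_assign(s) for s in range(1000)]
# Each embedded adaptive pathway (stage1 -> response -> stage2) can then be
# compared on downstream outcomes to identify the best sequence of components.
```

The key design feature is that the second randomization is conditional on the tailoring variable, so the trial directly compares the adaptive decision rules a developer could build into the product.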

We are currently conducting a study of an adaptive text-messaging intervention using a SMART design, which we will use as an example in this paper. We will also describe other types of rapid-cycle evaluations and give examples of their application to educational technologies.

In addition, this paper will address the need for, and give examples of, innovative measurement strategies, because the utility of “rapid cycle” studies depends on the availability of frequent, appropriate, and reliable measures. This paper will discuss the importance of identifying outcome measures best aligned with the goals of the program, with an emphasis on novel, unobtrusive measurement opportunities that make use of readily accessible user data. Of particular interest, emerging machine learning techniques (data mining and learning analytics) applied to educational technology usage data can be used to measure different aspects of student engagement and progress through learning trajectories.
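As a minimal illustration of unobtrusive measurement, the sketch below aggregates raw usage-log events into per-student engagement features (sessions, items completed, distinct active days) that could serve as proximal outcomes in a rapid-cycle study. The event schema and action names are assumptions for illustration, not a real product's log format.

```python
from collections import defaultdict

# Hypothetical raw usage-log events, as a product might record them.
events = [
    {"student": "s1", "ts": "2017-03-06T09:00", "action": "login"},
    {"student": "s1", "ts": "2017-03-06T09:05", "action": "complete_item"},
    {"student": "s1", "ts": "2017-03-07T11:00", "action": "login"},
    {"student": "s2", "ts": "2017-03-07T10:00", "action": "login"},
]

def engagement_features(events):
    """Aggregate per-student counts usable as proximal outcome measures."""
    feats = defaultdict(lambda: {"sessions": 0,
                                 "items_completed": 0,
                                 "active_days": set()})
    for e in events:
        f = feats[e["student"]]
        f["active_days"].add(e["ts"][:10])  # date part of the timestamp
        if e["action"] == "login":
            f["sessions"] += 1
        elif e["action"] == "complete_item":
            f["items_completed"] += 1
    # Replace the day sets with counts for downstream analysis.
    return {s: {**f, "active_days": len(f["active_days"])}
            for s, f in feats.items()}

features = engagement_features(events)
```

Features like these can then feed the machine-learning techniques mentioned above, for example clustering students into engagement profiles or modeling progress through learning trajectories.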
Keywords:
Rapid cycle experimentation, R&D, educational technology.