ASSESSING STUDENT LEARNING: RESEARCH-BASED RUBRICS AND EVALUATION INSTRUMENTS FOR MEASURING TECHNOLOGY-AUGMENTED LEARNING
1 North Carolina State University (UNITED STATES)
2 Purdue University (UNITED STATES)
About this paper:
Appears in: INTED2015 Proceedings
Publication year: 2015
Pages: 7715-7724
ISBN: 978-84-606-5763-7
ISSN: 2340-1079
Conference name: 9th International Technology, Education and Development Conference
Dates: 2-4 March, 2015
Location: Madrid, Spain
Abstract:
Introduction:
Broadfoot and Black (2004) view our contemporary emphasis on evaluation as part of the same orientation, describing how “the assessment revolution” permeates our institutions and policies: "We have become an 'assessment society,' as wedded to our belief in the power of numbers, grades, targets and league tables to deliver quality and accountability, equality and defensibility as we are to modernism itself. History will readily dub the 1990s . . .—as well as the early years of the new millennium—'the assessment era,' when belief in the power of assessment to provide a rational, efficient and publicly acceptable mechanism of judgment and control reached its high point" (p. 19). Herron and Wright (2006) agree, noting that "At no other time in the history of higher education have there been so many inquiries into accountability for student learning, progress, and degree program viability. Funding for higher education has, in some states, been sharply reduced and any funding increase in the future may be linked to accountability" (p. 47).

Just as the relationship between research and practice in educational technology has been an uneasy one, so too has the historical relationship between educational technology and assessment or evaluation. As Marshall (1999) points out, “Our neglect of assessment may be due to the incremental and ad hoc way technology educators have approached curriculum building for technology and our unwillingness to address issues such as what learning theory tells us about ways technology is being used versus the way technology could be used” (p. 315).

Objectives:
Against this backdrop of institutional change, we present research on methods for assessing student learning, particularly technology-augmented learning. Student learning can be measured in terms of cognitive, affective, and psychomotor performance (Dooley, Linder, & Dooley, 2005). Numerous instruments for measuring student learning have been documented in the evaluation literature, and we provide a summary of these instruments for use by instructors and researchers interested particularly in technology-augmented learning contexts.

Methodology:
The literatures we draw on include evaluation (e.g., Assessment and Evaluation in Higher Education), distance education (e.g., Educational Technology & Society), and human-computer interaction (e.g., Computers in Human Behavior). Our review of instruments includes both qualitative and quantitative tools, draws on various disciplinary research and teaching contexts, and anticipates a range of learning environments (blended, online, and face-to-face).

Our literature review includes a list of relevant research articles and instrument descriptions (e.g., surveys, focus groups, direct observation). In particular, we focus on instruments that measure student problem solving and critical inquiry, as these two domains are of great interest to contemporary researchers.

Conclusions:
Our conclusions include an annotated bibliography of references (and journals cataloged), summaries of the evaluation tools provided, and recommendations for assessing technology-augmented learning environments. We discuss the pressures placed on instructors and researchers working in emerging learning environments (versus traditional, face-to-face classrooms) and describe implications for research and teaching in "the assessment era."
Keywords:
Assessment, Evaluation, Student Learning, Technology-augmented Learning