G. Gorrell, N. Ford, A. Madden, P. Holdridge, B. Eaglestone

Sheffield University (UNITED KINGDOM)
This paper arises from an investigation of the metacognitive skills of higher education students engaged in inquiry-based learning, and of how web-based interventions may help them improve their ability to search for academic information online for their coursework. The project is funded by the UK Arts and Humanities Research Council (AHRC) and is being conducted in collaboration with the University of Sheffield’s Centre for Inquiry-based Learning in the Arts and Social Sciences (CILASS).

This project has raised some important methodological issues that potentially affect all studies of learning that employ self-report inventories or questionnaires.

Much research into learning entails the study of students’ perceptions and behaviour. Often, data concerning behaviour are sought using inventories or questionnaires, in which respondents are asked to report their perceptions of their own behaviour. Questionnaires are also widely used in education and the social sciences more generally, having particular advantages in terms of low cost, wide potential reach and ease of administration.

Despite its widespread use, this approach can be problematic. Research has illustrated a variety of ways in which data obtained using questionnaires may be compromised. Common method variance (CMV) refers to the situation in which the method of data gathering itself introduces a bias, leading to spuriously inflated correlations between the concepts being measured. However, many studies of learning fail to take this potential for bias into account.

This paper explains CMV and the threat it poses. It also suggests a number of ways in which the effects of CMV can be controlled for, and presents an illustration from our work on metacognition. This work entailed the development of MILK, a metacognition inventory designed to relate to students’ web-based information seeking in support of inquiry-based learning. Factor analysis of data obtained using this instrument revealed a pervasive first component, suggesting the possible presence of CMV and limiting our ability to draw conclusions from the data.
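A pervasive first component of this kind can be checked with a quick unrotated principal-component analysis of the item correlation matrix. The sketch below (simulated data, NumPy only; the variable names are illustrative, not from the MILK instrument) computes the proportion of total variance explained by the first component; when a single shared influence such as a method factor drives all items, that proportion becomes very large.

```python
import numpy as np

def first_component_share(responses):
    """Proportion of total variance captured by the first unrotated
    principal component of an item-response matrix
    (rows = respondents, columns = questionnaire items)."""
    corr = np.corrcoef(responses, rowvar=False)   # item correlation matrix
    eigenvalues = np.linalg.eigvalsh(corr)        # ascending order
    return eigenvalues[-1] / eigenvalues.sum()

# Illustration with simulated data: a common "method" influence is
# added to every item, so the first component dominates.
rng = np.random.default_rng(0)
method = rng.normal(size=(200, 1))                # shared method factor
items = method + 0.5 * rng.normal(size=(200, 10)) # 10 questionnaire items
share = first_component_share(items)
print(f"First component explains {share:.0%} of total variance")
```

With uncorrelated items the first component would explain roughly 1/10 of the variance here; the shared factor pushes it far above that, which is the pattern a Harman-style inspection looks for.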

This led to consideration of possible ways of assessing CMV. Options in studies that employ only one method are limited. “Harman's single-factor test” is a widely-used option; however, the test appears to lack any sound statistical foundation. Where a theoretically unrelated “marker variable” is included in the questionnaire ahead of time, CMV can be isolated as the covariance of the marker variable with the other questionnaire items. We discuss the ways in which this approach can be misused, but suggest that, used carefully, it can be valuable in determining the extent of CMV in results and isolating it so that the remainder of the data can be safely used.
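One common formulation of the marker-variable correction, following Lindell and Whitney's partial-correlation approach, takes the smallest correlation between the marker and the substantive items as the CMV estimate r_m and removes it from each observed correlation. A minimal sketch, with hypothetical figures rather than values from our study:

```python
def cmv_adjusted_correlation(r_ij, r_m):
    """Marker-variable partial correlation: remove an estimate of
    common method variance r_m (e.g. the smallest correlation between
    the marker variable and the substantive items) from an observed
    correlation r_ij between two substantive variables."""
    return (r_ij - r_m) / (1 - r_m)

# Example: an observed correlation of .45 between two scales,
# with a marker-based CMV estimate of .20 (hypothetical figures).
adjusted = cmv_adjusted_correlation(0.45, 0.20)
print(f"CMV-adjusted correlation: {adjusted:.3f}")
```

If the adjusted correlation remains significant, the original relationship is unlikely to be wholly an artefact of method variance; the choice of r_m is conservative by design.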

In our study, a marker variable was added to the questionnaire, and further data were collected. The resulting analysis allowed us to quantify the extent of CMV in the original data, within certain limits and subject to certain provisos, thus allowing an improved interpretation of it. We present a revised analysis of our data in the light of the marker variable test, and discuss the ways in which lessons from this case might apply to future research into learning using inventories and questionnaires.