VALIDATION OF AN EXPERT PROBLEM-SOLVING BEHAVIOUR SCALE FOR COMPUTER SCIENCE EDUCATION
University of Guelph (CANADA)
About this paper:
Appears in: ICERI2020 Proceedings
Publication year: 2020
Pages: 6755-6764
ISBN: 978-84-09-24232-0
ISSN: 2340-1095
doi: 10.21125/iceri.2020.1438
Conference name: 13th annual International Conference of Education, Research and Innovation
Dates: 9-10 November 2020
Location: Online Conference
Abstract:
Problem solving is a critical skill for computer scientists. Within the scope of this paper, problem solving in computer science refers to designing and writing a program that provides a solution or accomplishes a task in response to a specific problem prompt. Problem-solving skills therefore combine multiple meta-cognitive processes with computational thinking, as well as specific programming skills. For instance, task comprehension, task decomposition, planning and research, evaluation and reflection, pattern recognition, program-state awareness, tracing, and debugging would all be considered problem-solving skills. Traditionally, it is assumed that novices will develop strong problem-solving skills in tandem with learning new computer-science concepts, without direct instruction. Recent developments challenge this latent approach and promote explicit instruction in specific problem-solving strategies, spanning code tracing, code reuse, debugging, design strategies, and learning subgoals. Given this recent interest in explicit rather than latent instruction in problem solving, there is a need to measure expert-like behaviour in students both before and after instruction. The work presented here is concerned with observing and measuring the behaviours that experts use when attacking programming problems, in pursuit of two goals: i) inform explicit instruction in problem solving for novices, and ii) develop an assessment instrument that measures expert-like problem-solving ability.

Firstly, we used an observational protocol to document the behaviours exhibited by advanced, successful computer-science students in a post-secondary setting. Secondly, the observed behaviours were translated into an attitudinal assessment instrument, and the proposed assessment items were judged by both faculty and industry experts. Finally, the refined instrument was validated on the undergraduate population of interest using factor analysis. A main contribution of this paper is its detailed discussion of best practices in validation analysis, supported by a literature review and supplemented with code to promote reproducibility. This discussion applies to any validation study and will be of general interest to the wider educational-assessment community. The output of this study is a validated instrument that assesses expert-like problem-solving ability in computer science. Researchers and practitioners can use this instrument to measure changes in problem-solving ability over time, or in response to a specific teaching intervention.
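Note on the analysis: as a hedged illustration only (the paper's own reproducibility code is not reproduced in this listing), the following Python sketch shows what an exploratory factor-analysis validation of Likert-style assessment items might look like using the open-source factor_analyzer package. The file name, column layout, and factor count are assumptions for demonstration, not the authors' actual analysis.

    # Minimal sketch of an exploratory factor-analysis validation.
    # Assumes one row per respondent and one Likert-scale column per item;
    # "survey_responses.csv" and n_factors=3 are hypothetical placeholders.
    import pandas as pd
    from factor_analyzer import FactorAnalyzer
    from factor_analyzer.factor_analyzer import (
        calculate_bartlett_sphericity,
        calculate_kmo,
    )

    responses = pd.read_csv("survey_responses.csv")

    # Check factorability of the item correlations before extraction.
    chi_square, p_value = calculate_bartlett_sphericity(responses)
    _, kmo_total = calculate_kmo(responses)
    print(f"Bartlett's sphericity: chi2={chi_square:.1f}, p={p_value:.3g}")
    print(f"Overall KMO: {kmo_total:.2f}")  # ~0.6 or above is commonly treated as adequate

    # Oblique rotation, since behavioural factors are plausibly correlated.
    fa = FactorAnalyzer(n_factors=3, rotation="oblimin")
    fa.fit(responses)
    print(fa.loadings_)              # item-factor loadings
    print(fa.get_factor_variance())  # variance, proportion, cumulative per factor

In practice, the factor count would be chosen from scree plots or parallel analysis on the actual response data, and items with weak or cross-loading patterns would be flagged for revision.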
Keywords:
Problem solving, assessment instrument, factor analysis.