INVESTIGATING THE EFFECT OF CONTEXT ON COMPARABILITY OF COMPUTERIZED PERFORMANCE-BASED TASKS
Higher School of Economics (RUSSIAN FEDERATION)
About this paper:
Conference name: 13th International Conference on Education and New Learning Technologies
Dates: 5-6 July, 2021
Location: Online Conference
Abstract:
Nowadays, performance-based tasks (PBTs) are broadly used to assess complex constructs such as 21st-century skills. Shute and colleagues developed a competency model for measuring problem-solving skills via performance tasks (Shute et al., 2016). Other computerized performance tasks have been developed for assessing critical thinking (Braun, Shavelson, Zlatkin-Troitschanskaia & Borowiec, 2020) and communication skills (Stadler et al., 2020).
The broad use of PBTs in educational assessment has created a need to construct different forms of the same task so that large numbers of examinees can be tested at different times or locations. To ensure the fairness of the assessment, examinees' responses must be comparable regardless of the form in which the task is presented.
However, developing comparable forms poses a challenge for performance assessment. Researchers conclude that construct-irrelevant variance, arising from practice or context effects, is a major contributor to the instability of test results in performance-based assessment (Muraki, Hombo & Lee, 2000). Context in PBTs can be broadly defined as the characteristics of the situation that test-takers face in the test. Evidence that PBT forms differing in context remain comparable therefore supports test validity.
Contextual issues are discussed in several studies devoted to the comparability of personality questionnaires (Schmit et al., 1995), situational judgment tests (Krumm et al., 2015), and essay prompts (Homayounzadeh, Saadat & Ahmadi, 2019). More recent work has investigated contextual cues in the virtual environments of computer-based tests. For example, according to Nelson and Guegan (2019), the context of a virtual environment can influence creative processes such as idea generation.
The purpose of this paper is to provide evidence on the comparability of PBT forms that differ in test context.
In this study we used data from a computerized PBT developed following Evidence-Centered Design (Mislevy, Almond & Lukas, 2003) to measure the communication and cooperation skills of secondary school students. The instrument included several tasks in which respondents interacted with computer agents in a simulated environment. In each task, students were presented with answer options for communicating with the agents. Task problems were set in real-life or fantasy contexts: in one task form students went camping with classmates in the woods, while in the other they found themselves in a magical world. Thus, the two forms contain the same indicators of communication and cooperation skills, but the context differs, as illustrated by the sketch below.
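To make the parallel-forms design concrete, the following minimal Python sketch shows how a single behavioral indicator might be realized in two context variants. All indicator names and item texts here are hypothetical illustrations; the instrument's actual items are not reproduced in this abstract.

# Minimal sketch: one behavioral indicator realized in two context variants.
# Indicator names and item texts are hypothetical, not the study's items.
INDICATORS = {
    "coop_offers_help": {
        "camping": "Offer to help a classmate carry the tent.",
        "magic": "Offer to help a companion carry the enchanted chest.",
    },
    "comm_asks_clarification": {
        "camping": "Ask the group leader to explain the route again.",
        "magic": "Ask the wizard to explain the spell again.",
    },
}

def render_option(indicator: str, form: str) -> str:
    """Return the answer option that scores the given indicator in a form."""
    return INDICATORS[indicator][form]

# Both forms score the same indicator; only the surface context differs.
print(render_option("coop_offers_help", "camping"))
print(render_option("coop_offers_help", "magic"))

Because both variants score the same indicator, any difference between forms should reflect the context alone rather than the construct being measured.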
Confirmatory factor analysis (CFA) was applied to the data to examine the internal structure and equivalence of the PBT forms with different contexts (see the sketch below). Responses were available from a sample of more than 500 students from Russian schools who took both forms. Findings indicate that PBT forms with different contexts can be considered comparable and are likely to produce similar results.
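As an illustration only, here is a minimal Python sketch of fitting the same two-factor measurement model to each form's responses with the semopy package. The indicator columns (i1-i6), the form labels, and the data file are assumptions made for this example, not the study's actual variables, and a full equivalence test would additionally constrain loadings and intercepts across forms (configural, metric, and scalar invariance).

# Minimal sketch: fit the same two-factor CFA to each form and compare
# fit statistics. Column names, form labels and the file are hypothetical.
import pandas as pd
import semopy

# Identical measurement model for both forms: three indicators per skill.
MODEL_DESC = """
communication =~ i1 + i2 + i3
cooperation =~ i4 + i5 + i6
"""

def fit_form(responses: pd.DataFrame) -> pd.DataFrame:
    """Fit the CFA to one form's responses and return its fit indices."""
    model = semopy.Model(MODEL_DESC)
    model.fit(responses)
    return semopy.calc_stats(model)  # chi-square, CFI, TLI, RMSEA, ...

items = ["i1", "i2", "i3", "i4", "i5", "i6"]
data = pd.read_csv("responses.csv")  # one row per student per form

# Comparable forms should yield a similar structure and similar fit.
print(fit_form(data.loc[data["form"] == "camping", items]))
print(fit_form(data.loc[data["form"] == "magic", items]))

Similar loadings and fit indices across the two subsets would be consistent with the comparability claim; formal invariance testing in a multi-group model is the stronger check.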
The results offer new insight into the issue of fairness in assessment with performance tasks. Performance tasks are a promising test format for measuring complex constructs, but they pose challenges for educators and psychometricians. The challenge is compounded by the context of performance tasks, which changes from one form to another and could therefore threaten the comparability of test results. The role of context in the comparability of PBTs will be discussed.
Keywords:
Performance-based tasks, comparability study, test context, 21st century skills.