CLOSED FORMULA OF REQUIRED ITEM NUMBER FOR ADAPTIVE TESTING WITH MEDIUM PROBABILITY OF ITEM SOLUTION
1 ELTE Eötvös Loránd University, Doctoral School of Education (HUNGARY)
2 ELTE Eötvös Loránd University, Institute of Education (HUNGARY)
3 Károli Gáspár University of the Reformed Church in Hungary, Institute of Psychology (HUNGARY)
About this paper:
Appears in: EDULEARN22 Proceedings
Publication year: 2022
Page: 5432 (abstract only)
ISBN: 978-84-09-42484-9
ISSN: 2340-1117
doi: 10.21125/edulearn.2022.1284
Conference name: 14th International Conference on Education and New Learning Technologies
Dates: 4-6 July, 2022
Location: Palma, Spain
Abstract:
The proliferation of computer-based tests has many consequences. These include the emergence of interactive task forms beyond those used in traditional paper-and-pencil tests (Mullis & Martin, 2017), the use of log files to monitor response time and mouse usage (van der Linden et al., 2007), mode effects arising from different test administration surfaces (Fishbein et al., 2018), and the new measurement design of adaptive testing (van der Linden & Glas, 2010), made possible by automatic item scoring and rapid ability estimation during test administration. In the field of adaptive testing there are numerous empirical results, both on computerized (Weiss, 2011) and on multistage adaptive (Yamamoto et al., 2018) testing. Simulation and classroom/experimental results agree that, in some cases, adaptive tests can measure respondents' ability with the same quality as traditional linear test versions while using up to half as many items.
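
The adaptive logic described above can be illustrated with a minimal sketch of computerized adaptive testing under the Rasch (1PL) model. This is an assumption-laden toy, not the procedure of the paper: the item bank, the grid-search ability update, and the fixed-length stopping rule are all simplifying choices made only for illustration.

```python
# Toy CAT under the Rasch model: repeatedly administer the unused item whose
# difficulty is closest to the current ability estimate (i.e. the item the
# respondent solves with ~50% probability, which maximizes Fisher information),
# then re-estimate ability from the responses so far.
import math
import random

def p_correct(theta, b):
    """Rasch probability that a person of ability theta solves an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta_hat, bank, used):
    """Index of the unused item whose difficulty is closest to theta_hat."""
    candidates = [i for i in range(len(bank)) if i not in used]
    return min(candidates, key=lambda i: abs(bank[i] - theta_hat))

def update_theta(responses, grid=None):
    """Crude maximum-likelihood ability update by grid search over [-4, 4]."""
    if grid is None:
        grid = [g / 10.0 for g in range(-40, 41)]
    def loglik(theta):
        return sum(math.log(p_correct(theta, b)) if x
                   else math.log(1.0 - p_correct(theta, b))
                   for b, x in responses)
    return max(grid, key=loglik)

def run_cat(true_theta, bank, n_items, rng):
    """Simulate one respondent taking a fixed-length adaptive test."""
    theta_hat, used, responses = 0.0, set(), []
    for _ in range(n_items):
        i = next_item(theta_hat, bank, used)
        used.add(i)
        solved = rng.random() < p_correct(true_theta, bank[i])
        responses.append((bank[i], solved))
        theta_hat = update_theta(responses)
    return theta_hat
```

In this sketch the 50% target difficulty of the paper's title appears as the item-selection rule: at `p = 0.5` the Rasch item information `p(1 - p)` is maximal, which is why medium-difficulty items are the workhorse of adaptive tests.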

Due to the complexity of the problem, few articles discuss a closed formula for the mathematical relationship between test length, item difficulty, the number of test takers, and the number of proficiency levels. Our first aim was to give a formula for the minimum sample size for item calibration (the minimum number of respondents required at a given measurement error) in the case of the medium-difficulty items (50% probability of solution) commonly used in adaptive tests, based on the general formula of our previous result (T. Kárász & Takács, 2021). Furthermore, given a sufficiently large and varied item bank calibrated with sufficient accuracy, which is a prerequisite of adaptive measurement technology, our second aim was a closed formula for the required length of an adaptive test composed from such an item bank. Here the required test length is given as a function of the expected number of test takers, the expected accuracy, and the number of proficiency levels of the assessment, so that the respondents' ability estimates reach the expected measurement-error accuracy. To do this, we imposed restrictions that are usually satisfied in practical use: removing from the item bank items solved by everyone or by no one, using no more than 7 to 8 proficiency levels, and interpreting and expressing the measurement error in units of the expected standard deviation.
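
The paper's closed formula is not reproduced in this abstract, so the following is only a standard back-of-the-envelope illustration of why medium-difficulty items make the calibration sample-size question tractable. Under the Rasch model, one response contributes Fisher information `p(1 - p)` to the item-difficulty estimate; at `p = 0.5` each respondent contributes 0.25, so the asymptotic standard error after N respondents is `2 / sqrt(N)`. The function names below are illustrative, not the paper's notation.

```python
# Textbook asymptotics for Rasch item-difficulty calibration at p = 0.5,
# as an illustration only (NOT the closed formula of the paper).
import math

def se_item_difficulty(n_respondents, p=0.5):
    """Asymptotic SE of a Rasch difficulty estimate when every respondent
    solves the item with probability p: 1 / sqrt(N * p * (1 - p))."""
    info = n_respondents * p * (1.0 - p)
    return 1.0 / math.sqrt(info)

def min_sample_size(target_se, p=0.5):
    """Smallest N whose asymptotic SE does not exceed target_se;
    at p = 0.5 this reduces to ceil(4 / target_se**2)."""
    return math.ceil(1.0 / (p * (1.0 - p) * target_se ** 2))
```

For example, holding the calibration error of a medium-difficulty item below 0.1 logits requires about 400 respondents under this crude approximation; the paper's formula additionally accounts for the number of proficiency levels and other constraints listed above.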

In our paper, under these conditions, we present the expected minimum sample size and test length calculated from the closed formula, and we highlight the practical consequences that follow from the calculations.
Keywords:
Adaptive testing, error estimation, IRT, test length, sample size, closed formula.