Comparing the efficacy of computerized adaptive and fixed-item testing
Abstract
With the rapid spread of computers over the past two decades, linear paper-and-pencil tests have gradually been replaced by computer-based assessment. Its most advanced form is computerized adaptive testing, in which the test adapts to each examinee's ability level by administering only items of appropriate difficulty. The aim of this paper is to compare the effectiveness of fixed-item and adaptive tests from an assessment perspective by: (1) relating differences in student-level achievement; (2) outlining the item difficulties of the delivered tests; and, finally, (3) comparing measurement error and test information functions in linear and adaptive test environments. The sample for the pilot study was drawn from children in Years 5 and 8 at Hungarian primary schools (N=158). A fixed-item test was administered to half of the participants; the other half took four-stage adaptive tests (1-3-3-3 structure). Two weeks later, the test types were switched. Both tests measured inductive reasoning. A one-parameter Rasch model was used for the analyses. The reliability of the adaptive tests proved higher (Cronbach's α = .85) than that of the fixed test form (Cronbach's α = .83). The adaptive test provided consistently higher information at every skill level than the fixed-form test, and the standard error of the four-stage test was significantly lower, especially at the upper and lower ability levels. The study represents a promising step towards more precise educational assessment, showing that multistage testing with only a few stages can complement traditional linear test forms.