IQ Gains: Real Ability or Testing Artifact?

Authors

  • Áron Virág, Eötvös Loránd University

Abstract

Cognitive abilities, as measured by instruments like IQ tests, are foundational constructs in cognitive science and robustly predict a wide array of important life outcomes. The accurate and valid assessment of these abilities is therefore paramount for both theoretical research and practical applications. A significant confounding factor in repeated cognitive assessments is the phenomenon of practice effects (PEs): systematic score improvements resulting from prior test exposure, independent of any genuine change in the underlying cognitive capacity. These effects can obscure true learning, developmental trajectories, or intervention-related gains, thereby challenging the interpretation of longitudinal cognitive data [1]. PEs are generally attributed to a confluence of mechanisms, including reduced test anxiety, increased familiarity with procedures, explicit recall of test items or answers, and the implicit optimization of problem-solving strategies [2].

This study aims to rigorously investigate PEs within the context of online, longitudinal IQ assessment, employing a design specifically intended to help disentangle these effects from true cognitive change. Participants will be assessed on three occasions using two distinct IQ testing formats administered online. The first is a traditional, fixed-item IQ test, which will establish the baseline PEs commonly observed in such assessments. The second is a computerized adaptive test (CAT) version of an IQ measure. A CAT dynamically tailors the sequence and difficulty of items to each individual based on their evolving performance, an approach hypothesized to yield more precise and efficient measurement of ability at each time point. Furthermore, by continually adapting to the test-taker and potentially reducing reliance on memorization of a static item set, the CAT format may influence the manifestation of PEs differently than fixed-form tests, thereby helping to separate true cognitive change from item-specific learning.
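To make the adaptive mechanism concrete, the following is a minimal, self-contained sketch of one common CAT scheme, assuming a Rasch (1PL) item response model, maximum-information item selection, and a Newton-Raphson ability estimate. The item bank, test length, and all names here are illustrative; they are not the instruments or algorithms used in this study.

```python
import math
import random

def p_correct(theta, b):
    """Rasch (1PL) probability that a person of ability theta answers an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item at theta; largest when b is close to theta."""
    p = p_correct(theta, b)
    return p * (1.0 - p)

def update_theta(theta, responses):
    """Newton-Raphson maximum-likelihood update of theta from (difficulty, correct) pairs."""
    for _ in range(20):
        grad = sum(int(x) - p_correct(theta, b) for b, x in responses)  # score function
        info = sum(item_information(theta, b) for b, _ in responses)    # observed information
        if info < 1e-9:
            break
        theta = max(-4.0, min(4.0, theta + grad / info))                # clamp to keep the MLE finite
    return theta

# Simulated 15-item adaptive administration from a hypothetical bank of -3.0 to +3.0 logits.
true_theta, theta_hat, answered = 1.0, 0.0, []
bank = [d / 10.0 for d in range(-30, 31, 2)]
for _ in range(15):
    remaining = [b for b in bank if b not in {a for a, _ in answered}]
    b = max(remaining, key=lambda d: item_information(theta_hat, d))    # maximum-information selection
    answered.append((b, random.random() < p_correct(true_theta, b)))
    theta_hat = update_theta(theta_hat, answered)
print(f"estimated ability: {theta_hat:.2f} (true: {true_theta:.2f})")
```

Because each examinee sees items matched to their own provisional ability estimate, repeated administrations overlap far less in content than a fixed form does, which is one proposed route by which CAT could attenuate item-specific practice effects.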

The primary objectives are to compare performance trajectories on the fixed-form and CAT IQ measures across the three assessment waves. This comparison will allow for (a) quantification of the magnitude and typical pattern of PEs; (b) an empirical examination of whether, and to what extent, CAT administration alters the nature or reduces the magnitude of PEs relative to standard test forms; and (c) insights into methodologies that can more effectively differentiate true changes in cognitive ability from artifacts introduced by repeated testing (one candidate analysis is sketched below). The results are expected to offer practical guidance for longitudinal cognitive testing and to support more accurate cognitive assessment and research into cognitive change.
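As one illustration of how objectives (a) and (b) might be quantified, the following sketch fits a linear mixed model to synthetic long-format data (participant × wave × form). This is a hedged example, not the study's preregistered analysis plan: the variable names, effect sizes, and random-intercept specification are all invented for the demonstration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for a three-wave, two-form design, with a larger retest
# gain built into the fixed form than into the CAT form.
rng = np.random.default_rng(0)
rows = []
for pid in range(200):
    ability = rng.normal(100, 15)
    for wave in range(3):
        for form, gain in (("fixed", 2.0), ("cat", 0.5)):
            rows.append({"participant": pid, "wave": wave, "form": form,
                         "score": ability + gain * wave + rng.normal(0, 5)})
df = pd.DataFrame(rows)

# Random-intercept model with the fixed form as reference: the wave slope
# estimates the average retest gain on the fixed form, and the wave:form
# interaction tests whether that gain is smaller under CAT.
model = smf.mixedlm("score ~ wave * C(form, Treatment(reference='fixed'))",
                    df, groups=df["participant"])
print(model.fit().summary())
```

In this framing, a significant positive wave slope would index the PEs of objective (a), while a negative wave-by-form interaction would be evidence for objective (b), i.e., attenuated PEs under adaptive administration.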

References

[1] J. Scharfen, J. M. Peters, and H. Holling, “Retest effects in cognitive ability tests: A meta-analysis,” Intelligence, vol. 67, pp. 44–66, Mar. 2018. doi: 10.1016/j.intell.2018.01.003.

[2] J. P. Hausknecht, J. A. Halpert, N. T. Di Paolo, and M. O. Moriarty Gerrard, “Retesting in selection: A meta-analysis of coaching and practice effects for tests of cognitive ability,” Journal of Applied Psychology, vol. 92, no. 2, pp. 373–385, 2007. doi: 10.1037/0021-9010.92.2.373.

Published

2025-06-10