Cognitive Load and Bias in AI-XR: Effects of Task Complexity and AI Interaction Modality

Authors

  • Anja Rejc, University of Vienna

Abstract

Research [1] highlights the transformative potential of immersive technologies, including virtual, augmented, and mixed reality (XR), in industrial training. These technologies have shown promising improvements in learning effectiveness, user engagement, and task performance [2]. However, XR environments can also increase cognitive demands, requiring additional mental resources to process information. When cognitive load becomes too high, users may resort to heuristics or mental shortcuts, which, while efficient, can heighten susceptibility to cognitive biases and impair decision-making during both training and real-world tasks. Integrating AI into XR systems shows promise in addressing these cognitive challenges by providing real-time support [1]. Yet despite recent advances, critical gaps remain: the lack of extensive user studies and quantitative evaluation metrics, the limited adaptivity of current systems to users’ cognitive and physiological states, and the absence of real-time mechanisms to detect and mitigate cognitive overload and cognitive bias [1]. Addressing these gaps is crucial for understanding the complex interplay between cognitive load and cognitive bias in immersive learning environments [3] and for designing more effective, adaptive AI-powered XR training systems.

This research addresses these gaps by investigating how the modality of AI guidance (e.g., text, audio, or visual) and task complexity influence users’ cognitive load and their susceptibility to automation bias, i.e., the tendency to over-rely on AI, in XR-based training. We further investigate which physiological indicators, such as heart rate variability (HRV), pupil dilation, and gaze patterns, correlate most strongly with cognitive overload and biased decision-making.

We propose a mixed-methods experimental study using the Meta Aria Gen 2 headset and the XR-AI system developed in [1]. Participants (N=10) will complete tasks under varying conditions of AI guidance modality, task complexity, and AI reliability. Data collection will include behavioural metrics (task accuracy, help-seeking behaviour, completion time), subjective measures (the NASA Task Load Index (NASA-TLX) for cognitive load and trust-in-AI scales), and physiological data (HRV, pupil dilation, and gaze fixation).
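As an illustration of how two of these quantitative measures are commonly operationalised (a minimal sketch, not part of the study protocol described above), the snippet below computes the RMSSD time-domain HRV metric from inter-beat intervals and a raw (unweighted) NASA-TLX workload score; the function names and data layout are illustrative assumptions.

```python
import math
from typing import Sequence


def rmssd(ibi_ms: Sequence[float]) -> float:
    """Root mean square of successive differences (RMSSD), a standard
    time-domain HRV metric; lower values are often associated with
    higher physiological stress and cognitive load.

    ibi_ms: inter-beat intervals in milliseconds.
    """
    if len(ibi_ms) < 2:
        raise ValueError("Need at least two inter-beat intervals")
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))


def nasa_tlx_raw(ratings: dict[str, float]) -> float:
    """Raw (unweighted) NASA-TLX score: the mean of the six subscale
    ratings, each on a 0-100 scale."""
    subscales = ["mental", "physical", "temporal",
                 "performance", "effort", "frustration"]
    missing = [s for s in subscales if s not in ratings]
    if missing:
        raise ValueError(f"Missing subscale ratings: {missing}")
    return sum(ratings[s] for s in subscales) / len(subscales)


if __name__ == "__main__":
    # Made-up example values, for illustration only.
    print(rmssd([812, 790, 805, 830, 798, 776]))
    print(nasa_tlx_raw({"mental": 70, "physical": 20, "temporal": 55,
                        "performance": 40, "effort": 65, "frustration": 50}))
```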

This study aims to quantitatively examine how AI guidance modality and task complexity influence cognitive load and bias-related behaviour, and to evaluate how real-time physiological feedback can be used to detect and mitigate these effects. By advancing our understanding of these dynamics, this research ultimately aims to inform the development of more cognitively aware and adaptive, AI-supported XR-based training systems.
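As a sketch of what real-time detection might look like, the snippet below flags potential cognitive overload when HRV falls well below a participant’s baseline while pupil dilation rises above it. The chosen signals, thresholds, and the `OverloadDetector` interface are illustrative assumptions, not the study’s actual design or validated values.

```python
from dataclasses import dataclass


@dataclass
class Baseline:
    """Per-participant resting baselines collected before the task."""
    rmssd_ms: float           # resting HRV (RMSSD, milliseconds)
    pupil_diameter_mm: float  # resting pupil diameter


@dataclass
class Sample:
    """A windowed physiological sample captured during the task."""
    rmssd_ms: float
    pupil_diameter_mm: float


class OverloadDetector:
    """Illustrative threshold rule: flag overload when HRV drops below a
    fraction of baseline AND pupil diameter exceeds baseline by a relative
    margin. Thresholds here are placeholders, not empirically derived."""

    def __init__(self, baseline: Baseline,
                 hrv_drop: float = 0.7, pupil_rise: float = 1.15):
        self.baseline = baseline
        self.hrv_drop = hrv_drop      # e.g. RMSSD below 70% of baseline
        self.pupil_rise = pupil_rise  # e.g. pupil diameter above 115% of baseline

    def is_overloaded(self, sample: Sample) -> bool:
        low_hrv = sample.rmssd_ms < self.hrv_drop * self.baseline.rmssd_ms
        dilated = (sample.pupil_diameter_mm
                   > self.pupil_rise * self.baseline.pupil_diameter_mm)
        return low_hrv and dilated


# Example: reduced HRV combined with pupil dilation triggers the flag.
detector = OverloadDetector(Baseline(rmssd_ms=45.0, pupil_diameter_mm=3.2))
print(detector.is_overloaded(Sample(rmssd_ms=28.0, pupil_diameter_mm=3.9)))  # True
```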

References

[1] T. Duricic, N. Weidinger, N. El Sayed, D. Kowald, and E. E. Veas, “AI-Powered Immersive Assistance for Interactive Task Execution in Industrial Environments,” in Proc. 27th Eur. Conf. Artif. Intell. (ECAI), 2024.

[2] S. Yoo, S. Reza, H. Tarashiyoun, A. Ajikumar, and M. Moghaddam, “AI-Integrated AR as an Intelligent Companion for Industrial Workers: A Systematic Review,” IEEE Access, vol. 12, pp. 191808–191827, 2024. doi: 10.1109/ACCESS.2024.3516536.

[3] G. Morales Méndez and F. del Cerro Velázquez, “Impact of Augmented Reality on Assistance and Training in Industry 4.0: Qualitative Evaluation and Meta-Analysis,” Applied Sciences, vol. 14, no. 11, p. 4564, 2024. doi: 10.3390/app14114564.

Published

2025-06-10