Extending Code-VEPs: Detecting Cognitive Difficulty in Mental Rotation Tasks

Authors

  • Philipp Kappus, University of Vienna
  • Moritz Grosse-Wentrup, University of Vienna
  • Michal Robert Žák, University of Vienna
  • Clémence Cochard, Université Paris-Saclay

Abstract

Brain–computer interfaces (BCIs) have demonstrated significant potential for both direct control applications and cognitive state monitoring. While Visual Evoked Potentials (VEPs) are well established in control-based BCIs, little research has investigated their utility for cognitive state detection. This study explores this novel application by testing the predictive power of VEPs for detecting difficulties of understanding during mental rotation tasks.

VEPs are electrical responses elicited in the visual cortex, measurable with EEG, that occur when a person is presented with a periodically flickering visual stimulus; the elicited activity reflects the temporal pattern of the flicker. Current BCIs use VEPs for control: multiple flickering stimuli, each with a unique pattern, represent different actions. By matching the pattern of activation measured over the visual cortex to the presented stimuli, a system can decode which action the person is looking at. In contrast, little research has investigated their use for cognitive state detection. Notably, Ladouce et al. [1] found a decrease in VEP strength immediately before a vigilance drop in a long-term attention task.
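The matching step described above can be sketched in a few lines. Code-VEP systems typically use pseudorandom code sequences decoded by template matching, but as a simplified, hypothetical illustration, the snippet below decodes frequency-tagged stimuli by comparing spectral power at each candidate flicker frequency (function names and parameters are illustrative, not the study's actual pipeline):

```python
import numpy as np

def decode_target(eeg, fs, stim_freqs):
    """Pick the flicker frequency with the strongest response in a
    single-channel occipital EEG segment (frequency-tagging sketch).

    eeg        : 1-D array, EEG segment from an occipital electrode (e.g. Oz)
    fs         : sampling rate in Hz
    stim_freqs : list of flicker frequencies, one per target figure
    """
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2           # power spectrum
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)      # frequency axis
    # power at the bin closest to each stimulus frequency
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in stim_freqs]
    return int(np.argmax(powers))                      # index of decoded target

# Synthetic demo: a 12 Hz response buried in noise, four candidate targets
fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)
print(decode_target(eeg, fs, [8, 10, 12, 15]))  # → 2 (the 12 Hz target)
```

In practice, decoding uses multiple channels and more robust methods (e.g. canonical correlation analysis), but the principle is the same: the measured response is compared against each stimulus pattern.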

Building on this finding, we conduct a first study investigating the predictive power of VEPs for difficulties during a Mental Rotation Task (MRT). In these tasks, subjects are presented with a three-dimensional base figure and several target figures, some of which are rotated copies of the base figure. The task is to identify the targets that are congruent with the base figure. The inherent difficulty of MRTs makes them well suited to studying difficulties of understanding.

In our study, we employ an MRT in which each of six target figures is overlaid with a unique flickering pattern to elicit a VEP. For each task, subjects have limited time to select the congruent figures. We record eye movements, EEG data, and the accuracy of participants' responses. From the eye-movement data, we calculate the dwell time on each target figure.
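Dwell time can be computed by assigning each gaze sample to its nearest target figure. The sketch below is a minimal, hypothetical version of that step (coordinates, sampling rate, and the on-target radius are illustrative assumptions, not the study's actual parameters):

```python
import numpy as np

def dwell_times(gaze_x, gaze_y, targets, fs, radius=50.0):
    """Accumulate dwell time in seconds on each target from gaze samples.

    gaze_x, gaze_y : gaze coordinates in pixels, one sample per frame
    targets        : (n_targets, 2) array of target-figure centres in pixels
    fs             : eye-tracker sampling rate in Hz
    radius         : pixels within which a sample counts as 'on target'
    """
    gaze = np.stack([gaze_x, gaze_y], axis=1)                   # (n_samples, 2)
    dists = np.linalg.norm(gaze[:, None, :] - targets[None], axis=2)
    nearest = np.argmin(dists, axis=1)                          # closest target
    on_target = dists[np.arange(len(gaze)), nearest] <= radius
    # count on-target samples per figure, convert to seconds
    counts = np.bincount(nearest[on_target], minlength=len(targets))
    return counts / fs

# Demo: 90 samples near target 0, 30 near target 1, at 60 Hz
targets = np.array([[100.0, 100.0], [500.0, 100.0]])
gaze_x = np.array([100.0] * 90 + [500.0] * 30)
gaze_y = np.full(120, 100.0)
print(dwell_times(gaze_x, gaze_y, targets, fs=60))  # → [1.5 0.5]
```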

We relate the EEG data to the target figure on which the subject was focusing and hypothesise that during periods of difficulty (indicated by long dwell times and lower response accuracy) the strength of the corresponding VEP is reduced. Critically, we hypothesise that this reduction in VEP amplitude can be detected even in the presence of simultaneous visual stimulation from all six flickering targets. Ideally, we may even be able to identify which specific figures caused understanding difficulties from the VEPs alone, without requiring eye-tracking data to determine gaze location.
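Testing this hypothesis requires a per-trial measure of VEP strength. One common choice for frequency-tagged stimuli, shown here as a hedged sketch rather than the study's actual analysis, is the narrow-band signal-to-noise ratio: power at the flicker frequency divided by the mean power of neighbouring frequency bins.

```python
import numpy as np

def vep_snr(eeg, fs, f_stim, n_neighbors=4):
    """Narrow-band SNR at the flicker frequency: power in the stimulus
    bin divided by the mean power of the surrounding bins."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_stim)))          # stimulus bin
    lo = max(k - n_neighbors, 0)
    hi = min(k + n_neighbors + 1, len(spectrum))
    neighbors = np.r_[spectrum[lo:k], spectrum[k + 1:hi]]
    return spectrum[k] / neighbors.mean()

# Demo: a segment with a 12 Hz response vs. a noise-only segment
fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
with_vep = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)
noise_only = 0.5 * rng.standard_normal(t.size)
print(vep_snr(with_vep, fs, 12) > vep_snr(noise_only, fs, 12))  # → True
```

Under the hypothesis above, this SNR would be lower during segments with long dwell times and incorrect responses than during fluent processing.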

This study aims to extend the application of VEPs beyond traditional BCI control tasks by demonstrating their use for predicting phases of difficulties of understanding. We aim to contribute to the development of BCIs that can respond to user understanding, with potential applications in high-stakes work environments.

References

[1] S. Ladouce, J. J. Torre Tresols, K. Le Goff, and F. Dehais, “EEG-based assessment of long-term vigilance and lapses of attention using a user-centered frequency-tagging approach,” bioRxiv, 2024. [Online]. doi: 10.1101/2024.12.12.628208

Published

2025-06-10