Cognitive Conflict When Interacting with Robots


  • Anja Huber, University of Vienna


In contemporary society, social and intelligent robots have become increasingly prevalent, offering promising prospects for integration into our daily lives [2]. However, research suggests that interacting with robots can induce cognitive conflict in humans [1]. This finding is concerning because robots promise to be of great help in addressing social problems such as caring for the ever-increasing proportion of the world's aging population [2]. One factor that may influence cognitive conflict in human-robot interaction is the degree to which a task and its components are outsourced to the robot. The proposed study aims to expand upon existing knowledge [1] by incorporating different levels of decision-making agency (self-determined choice vs. externally determined choice) into the investigation of human-robot interaction. We hope that our findings will contribute to a better understanding of how to design and implement socially intelligent robots for fruitful human-robot interaction. The research question is how different levels of decision-making agency, different values of decision outcomes (winning or losing money), and different levels of outcome ownership (giving the outcome to the robot vs. keeping it for oneself) shape the level of cognitive conflict when interacting with robots.

To address this question, an online experimental framework will be employed in which participants engage in interactive tasks with a socially intelligent robot named Cozmo. The robot, presented as an on-screen avatar, is introduced at the beginning of the experiment. Participants will be presented with decision situations in which either they or the robot chooses one of two presented options, displayed as colored squares. Subsequently, the outcomes of these decisions, manifesting as monetary gains or losses, will be presented to the participants. In the final phase of each trial, the outcome is attributed to either the participant or the robot, thus gauging perceived outcome ownership. To measure cognitive conflict, we examine reaction times during this outcome attribution phase.
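The design described above crosses three factors: decision agency (self vs. robot), outcome valence (gain vs. loss), and outcome ownership (keep vs. give to the robot). As a purely illustrative sketch, and not the actual experiment code, a fully crossed trial list for such a 2 × 2 × 2 design could be generated as follows; all names and the number of repetitions per cell are assumptions for the example:

```python
import itertools
import random

# Hypothetical factor levels implied by the design; labels are illustrative.
AGENCY = ["self", "robot"]          # who makes the choice
VALENCE = ["gain", "loss"]          # monetary outcome of the choice
OWNERSHIP = ["keep", "give_to_robot"]  # who receives the outcome

def build_trial_list(reps_per_cell=10, seed=42):
    """Return a randomized, fully crossed 2x2x2 trial list."""
    cells = list(itertools.product(AGENCY, VALENCE, OWNERSHIP))
    trials = [
        {"agency": a, "valence": v, "ownership": o}
        for a, v, o in cells
        for _ in range(reps_per_cell)
    ]
    rng = random.Random(seed)  # fixed seed for a reproducible order
    rng.shuffle(trials)
    return trials

trials = build_trial_list()
print(len(trials))  # 8 cells x 10 repetitions = 80 trials
```

In the actual study, the reaction time recorded during the attribution phase of each trial would then be analyzed per cell of this design.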

Anticipated outcomes align with prior research: outcomes given to the robot, whether positive or negative, are expected to elicit heightened cognitive conflict compared to outcomes kept for oneself [1]. Crucially, we hypothesize that cognitive conflict is moderated by decision agency during stimulus selection. Specifically, we expect that giving an outcome to the robot leads to less cognitive conflict when the preceding decision was not made by the participant.

In summary, this study aims to deepen our understanding of the dynamics underlying human-robot interaction, shedding light on the nuanced interplay of decision agency, outcome evaluation, and outcome ownership in decision situations involving robots.


[1] A. Abubshait, S. E. Therkelsen, C. McDonald, and E. Wiese, “Forced prosocial behaviors towards robots induce cognitive conflict,” Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 67, no. 1, pp. 434–439, Sep. 2023, doi: 10.1177/21695067231192897.

[2] E. Broadbent, “Interactions With Robots: The Truths We Reveal About Ourselves,” Annu. Rev. Psychol., vol. 68, no. 1, pp. 627–652, Jan. 2017, doi: 10.1146/annurev-psych-010416-043958.