Display of the Robot’s Uncertainty as a Factor of Trust in HRI in an Industrial Setting
Abstract
Introduction
There are numerous models of trust, and the definition varies from author to author, but most agree that trust is a dynamic concept that changes over time. We base our research on models of human trust, applying them to human-robot interaction. Lewis et al. [1] characterise trust as an attitude about expected performance and the attainment of a given goal. The three main areas of trust, namely purpose, process and performance, apply whether we describe human-robot or human-human interactions.
Factors such as transparency, predictability and reliability all contribute to the experience of trust, but most studies of these factors examine autonomous systems, in which the machine neither asks for feedback nor poses a question when it is uncertain [2].
When a robot displays uncertainty, there is a high chance that the human partner will perceive it as a violation of trust, leading to reports of untrustworthiness [3]. Moreover, mistakes made during the early period of cooperative work damage trust more than mistakes made later.
Method
In the experiment, the collaborative robotic arm Kassow Robots KR1410 will perform a sorting task, placing different shapes at predetermined locations. The sorting will follow a predefined program that the participant cannot modify. Participants (25 per group, aged 18 to 55) will be informed in advance that, during the task, the robot will show indecision by stopping over a category and holding the object for 5 seconds instead of putting it down. The experimental group will solve the task under the illusion that they are active agents in the interaction: when the robot “hesitates”, they will “help” the robot “decide”. The control group will see the same uncertainty but will not be able to intervene. Before and after the task, participants will fill out a questionnaire exploring first their general attitude towards working with robots and then their attitude towards this specific robot.
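The trial logic of this protocol can be sketched as follows. This is a minimal simulation for illustration, not the KR1410’s actual programming interface: the SimulatedArm class, the choice of hesitation trial and the logging are assumptions; only the 5-second pause, the fixed sorting program and the two conditions come from the design above.

```python
import time

HESITATION_TRIALS = {2}   # hypothetical: the design does not specify which trials hesitate
HESITATION_SECONDS = 5    # pause length stated in the protocol


class SimulatedArm:
    """Stand-in for the KR1410 motion interface (not the vendor API)."""

    def pick_up(self, shape):
        print(f"picking up {shape}")

    def hover_over(self, bin_id):
        print(f"hovering over bin {bin_id}")

    def place_at(self, bin_id):
        print(f"placing object in bin {bin_id}")


def run_sorting_task(shapes, bins, condition, robot):
    """Run the predefined sorting program for one participant.

    shapes:    ordered list of (name, category) pairs to sort
    bins:      fixed category -> bin mapping; participants cannot modify it
    condition: "experimental" (participant may intervene) or "control"
    """
    for trial, (name, category) in enumerate(shapes, start=1):
        target = bins[category]
        robot.pick_up(name)
        robot.hover_over(target)

        if trial in HESITATION_TRIALS:
            time.sleep(HESITATION_SECONDS)  # scripted "indecision" over the bin
            if condition == "experimental":
                # The participant "helps the robot decide"; the answer is
                # recorded, but the predefined placement never changes.
                answer = input(f"Which bin should the {category} go in? ")
                print(f"trial {trial}: participant suggested bin {answer}")

        robot.place_at(target)


if __name__ == "__main__":
    shapes = [("red cube", "cube"), ("blue ball", "sphere"), ("green cube", "cube")]
    bins = {"cube": "A", "sphere": "B"}
    run_sorting_task(shapes, bins, condition="experimental", robot=SimulatedArm())
```

The key property reflected here is that the participant’s input is logged but never alters the robot’s behaviour, preserving the illusion of agency while keeping the sorting identical across groups.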
Hypothesis
We hypothesise that the robot’s display of uncertainty in a simple sorting task will result in a loss of trust if the human participant has no agency in the situation. If the participant has agency and can assist the robot when it asks for help, the display of uncertainty can instead help build trust.
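One plausible way to test this prediction under the pre/post, two-group design is to compare trust change scores between the groups. The sketch below is an assumption about the analysis, not part of the protocol; the scores are randomly generated placeholders for the questionnaire data.

```python
import numpy as np
from scipy import stats

# Placeholder pre/post trust scores on an arbitrary scale (25 per group,
# matching the planned sample size); real data come from the questionnaires.
rng = np.random.default_rng(0)
pre_exp,  post_exp  = rng.normal(4.0, 0.5, 25), rng.normal(4.4, 0.5, 25)
pre_ctrl, post_ctrl = rng.normal(4.0, 0.5, 25), rng.normal(3.7, 0.5, 25)

# Change in trust per participant; the hypothesis predicts a more positive
# change in the experimental (agency) group than in the control group.
delta_exp  = post_exp - pre_exp
delta_ctrl = post_ctrl - pre_ctrl

# Welch's independent-samples t-test on the change scores.
t, p = stats.ttest_ind(delta_exp, delta_ctrl, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```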
Our research could inform the design of industrial robots that have a social aspect and could facilitate the building of human-robot trust.
References
[1] M. Lewis, K. Sycara, and P. Walker, “The role of trust in human-robot interaction,” Foundations of Trusted Autonomy, pp. 135–159, 2018. doi:10.1007/978-3-319-64816-3_8
[2] P. A. Hancock et al., “A meta-analysis of factors affecting trust in human-robot interaction,” Human Factors: The Journal of the Human Factors and Ergonomics Society, vol. 53, no. 5, pp. 517–527, Sep. 2011. doi:10.1177/0018720811417254
[3] S. Agrawal and H. Yanco, “Feedback methods in HRI,” Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Mar. 2018. doi:10.1145/3173386.3177031