Teaching a Robot to Draw

Authors

Xenia Daniela Poslon, Comenius University Bratislava; Carlo Mazzola, Italian Institute of Technology

Abstract

Introduction

To make robots effective multifunctional helpers in our households, they must be able to learn new tasks by imitating humans. Studies show that human tutoring behavior changes based on a robot’s success in imitation [1]. Previous research also found that the mere presence of the robot NICO led to larger and faster drawings compared to a setup where the robot was not present [2]. Our research explores how participants adapt when interacting with a robot in a tutor-student setting.

We expect participants’ drawings to be larger when the robot imitates them than when it merely observes them drawing. Drawings might also be slower when the robot imitates, because participants believe the robot is observing their movements in order to learn. We hypothesize that the error of the robot’s imitation will predict the degree of simplification (changes in stroke count and drawing size) between participants’ repetitions. Additionally, participants may adapt their drawings in response to the robot’s intentional changes.

Methodology

We test these hypotheses with a within-subject design. Participants first draw five objects without the robot imitating them. They then draw each of seven objects twice, with the robot imitating after each drawing. The first and second drawings of each object are compared, with the robot’s imitation error as an influencing factor. The first two objects are geometric forms, which measure adaptations in size and position in response to the robot’s rescaling and shifting, without the possibility of simplification. The remaining objects are concrete concepts (e.g., “cake”).

Technical implementation

A program was developed that enables the robot to imitate human drawings sketched on a tablet. The program records the stroke coordinates, simplifies them with the Ramer-Douglas-Peucker algorithm, rescales and centers them, and feeds them as input to a multilayer perceptron. This network consists of two input neurons (the x and y coordinates), 50 hidden neurons, and eight output neurons (one per motor). The network, trained on a grid of points with 32 × 32 px spacing, guides the robot’s movements. Deviations of the robot’s drawing from the intended one are measured as the average distance between the two trajectories.
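As an illustrative sketch only (not the project’s actual code), the preprocessing stage could look as follows in Python; the tolerance epsilon, the workspace size, and the function names are assumed values:

```python
import numpy as np

def rdp(points, epsilon=2.0):
    """Ramer-Douglas-Peucker: recursively drop points that lie within
    `epsilon` pixels of the chord between a segment's endpoints."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    chord = end - start
    length = np.linalg.norm(chord)
    if length == 0.0:                      # degenerate (closed) segment
        dists = np.linalg.norm(points - start, axis=1)
    else:                                  # perpendicular distance to the chord
        dists = np.abs(chord[0] * (points[:, 1] - start[1])
                       - chord[1] * (points[:, 0] - start[0])) / length
    far = int(np.argmax(dists))
    if dists[far] > epsilon:               # keep the farthest point, recurse
        left = rdp(points[: far + 1], epsilon)
        right = rdp(points[far:], epsilon)
        return np.vstack([left[:-1], right])   # drop duplicated split point
    return np.vstack([start, end])

def rescale_and_center(points, size=256.0):
    """Uniformly scale a stroke to fit a size x size workspace, then center it."""
    points = np.asarray(points, dtype=float)
    mins = points.min(axis=0)
    span = max(float((points.max(axis=0) - mins).max()), 1e-9)
    scaled = (points - mins) / span * size
    return scaled - scaled.mean(axis=0) + size / 2.0
```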
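A minimal sketch of the network and the error measure follows, using scikit-learn’s MLPRegressor as a stand-in for whatever framework was actually used; the canvas size and the joint-angle targets are assumptions, since the abstract does not specify them:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Training grid: canvas points spaced 32 px apart in each direction.
# The 1024 px canvas size is an assumption for illustration.
xs, ys = np.meshgrid(np.arange(0, 1024, 32), np.arange(0, 1024, 32))
grid = np.column_stack([xs.ravel(), ys.ravel()]) / 1024.0  # normalized (x, y)

# Stand-in for the real joint-angle targets, which would come from the
# robot's kinematics or recorded poses (illustration only).
rng = np.random.default_rng(0)
angles = grid @ rng.normal(size=(2, 8))    # shape (n_points, 8), one per motor

# Two inputs (x, y) -> 50 hidden neurons -> eight outputs (one per motor).
net = MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000)
net.fit(grid, angles)

def imitation_error(intended, executed):
    """Mean point-to-point distance between intended and executed strokes
    (assumes both were first resampled to the same number of points)."""
    intended = np.asarray(intended, dtype=float)
    executed = np.asarray(executed, dtype=float)
    return float(np.linalg.norm(intended - executed, axis=1).mean())
```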

Conclusions

We expect the strongest simplifications in the first few drawings, with participants quickly adapting to the robot’s capabilities. This would imply that humans converge toward a robot’s capabilities when teaching it. Leveraging this human adaptability can simplify many challenges in household robot design, as minor human adaptations can mitigate major technological challenges. For example, a robot may not need the exact movement capabilities of a human, since the human tutor can adapt their movements to the robot’s capabilities while teaching.

References

[1] A.-L. Vollmer et al., “People modify their tutoring behavior in robot-directed interaction for action learning,” 2009 IEEE 8th International Conference on Development and Learning, 2009. doi: 10.1109/DEVLRN.2009.5175516

[2] C. Mazzola et al., “Sketch it for the robot! How child-like robots’ joint attention affects humans’ drawing strategies,” 2024 IEEE International Conference on Development and Learning, 2024 [Preprint].

Author Biographies

  • Xenia Daniela Poslon, Comenius University Bratislava

    Assistant Professor at the Centre for Cognitive Science, Faculty of Mathematics, Physics and Informatics, Comenius University in Bratislava

  • Carlo Mazzola, Italian Institute of Technology

Carlo Mazzola is a Research Fellow at the Italian Institute of Technology in the Cognitive Architectures for Collaborative Technologies (CONTACT) unit, where he works on the EU-funded project TERAIS (Towards Excellent Robotics and Artificial Intelligence at a Slovak university). He received his MS in Philosophy from Università Cattolica del Sacro Cuore di Milano (Italy), after a period at the Bergische Universität Wuppertal (Germany), with a thesis on the role of imagination and empathy in perception. After a period as a Visiting Scientist in the CONTACT unit at the Italian Institute of Technology, where he explored mechanisms of social perception in HRI, in 2019 he began his Ph.D. at the same institute in the Robotics, Brain and Cognitive Sciences (RBCS) unit, where he investigated and developed mechanisms of shared perception between humans and the humanoid robot iCub. In 2022 he was a Visiting Student at the Cognitive Robotics Lab of the University of Manchester (Department of Computer Science).

Published

2024-06-10