Teaching a Robot to Draw
Abstract
Introduction
To make robots effective multi-functional helpers in our households, they must be able to learn new tasks by imitating humans. Studies show that human tutoring behavior changes depending on a robot's success in imitation [1]. Previous research also found that the mere presence of the robot NICO led to larger and faster drawings compared to a setup in which the robot was not present [2]. Our research explores how participants adapt when interacting with a robot in a tutor-student setting.
We expect participants’ drawings to be larger when the robot imitates them than when it merely observes them drawing. Drawings might also be slower when the robot imitates, because participants believe the robot is observing their movements in order to learn. We hypothesize that the error of the robot’s imitation will predict the degree of simplification (changes in stroke count and drawing size) between the participants’ repetitions. Additionally, participants may adapt their drawings in response to the robot’s intentional changes.
Methodology
We test these hypotheses with a within-subject design. Participants first draw five objects without the robot imitating them. They then draw each of seven objects twice, with the robot imitating after every drawing. The first and second drawings of each object will be compared, with the robot’s imitation error as an influencing factor. The first two objects are geometric shapes, which allow us to measure adaptations in size and position in response to the robot’s rescaling and shifting, without the possibility of simplification. The remaining objects are concrete concepts (e.g., “cake”).
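As a rough illustration of how these between-repetition comparisons could be quantified, the sketch below computes the two simplification measures mentioned earlier, stroke count and drawing size. It is a minimal Python example; the bounding-box diagonal as the size measure and the relative-change formulation are our assumptions for illustration, not details of the study protocol.

```python
import numpy as np

def drawing_size(strokes):
    """Diagonal of the bounding box around all stroke points (in px)."""
    points = np.vstack(strokes)
    return float(np.linalg.norm(points.max(axis=0) - points.min(axis=0)))

def simplification(first, second):
    """Relative change in stroke count and drawing size between repetitions."""
    return {
        "stroke_count_change": (len(second) - len(first)) / len(first),
        "size_change": (drawing_size(second) - drawing_size(first))
                       / drawing_size(first),
    }

# Example: two strokes in the first repetition, one closed shape in the second.
first = [np.array([[0, 0], [100, 0], [100, 100]]), np.array([[0, 0], [0, 100]])]
second = [np.array([[10, 10], [80, 10], [80, 80], [10, 80]])]
print(simplification(first, second))  # negative values indicate simplification
```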
Technical implementation
A program was developed that enables the robot to imitate human drawings sketched on a tablet. The program records the pen coordinates, simplifies them with the Ramer-Douglas-Peucker algorithm, rescales and centers them, and feeds them as input to a multilayer perceptron. The network consists of two input neurons (the x- and y-coordinates), 50 hidden neurons, and eight output neurons (one per motor). The network, trained on a grid of points with 32 × 32 px spacing, guides the robot’s movements. Deviations from the intended drawing are measured as the average distance between the intended and the reproduced drawing.
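A minimal Python sketch of this pipeline is given below, under several assumptions: strokes arrive as lists of (x, y) tablet coordinates, the coordinate-to-motor mapping is approximated with scikit-learn's MLPRegressor (the actual implementation may use a different framework), and names such as TABLET_SIZE, GRID_STEP, and the placeholder joint angles are illustrative rather than values from the real system.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

TABLET_SIZE = 1024   # hypothetical tablet resolution in px
GRID_STEP = 32       # spacing of the training grid (32 x 32 px, as described)

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker simplification of a single stroke."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    line = end - start
    norm = np.linalg.norm(line)
    if norm == 0:
        dists = np.linalg.norm(points - start, axis=1)
    else:
        # Perpendicular distance of every point to the start-end chord.
        dists = np.abs(line[0] * (points[:, 1] - start[1])
                       - line[1] * (points[:, 0] - start[0])) / norm
    idx = int(np.argmax(dists))
    if dists[idx] > epsilon:
        left = rdp(points[: idx + 1], epsilon)
        right = rdp(points[idx:], epsilon)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])

def rescale_and_center(points, target_size=0.8 * TABLET_SIZE):
    """Rescale a drawing to a fixed size and center it on the canvas."""
    points = np.asarray(points, dtype=float)
    mins, maxs = points.min(axis=0), points.max(axis=0)
    scale = target_size / (maxs - mins).max()   # preserve aspect ratio
    return (points - (mins + maxs) / 2) * scale + TABLET_SIZE / 2

# Train the coordinate-to-motor mapping on a regular grid of tablet points.
# In the real system, each grid point would be paired with the eight joint
# angles that place the pen tip there; random values serve as a placeholder.
xs, ys = np.meshgrid(np.arange(0, TABLET_SIZE, GRID_STEP),
                     np.arange(0, TABLET_SIZE, GRID_STEP))
grid_points = np.column_stack([xs.ravel(), ys.ravel()])
joint_angles = np.random.uniform(-1.0, 1.0, (len(grid_points), 8))  # placeholder

net = MLPRegressor(hidden_layer_sizes=(50,), max_iter=5000)
net.fit(grid_points, joint_angles)   # 2 inputs -> 50 hidden -> 8 outputs

def imitation_error(intended, executed):
    """Average point-wise distance between intended and executed drawings
    (one simple pairing; the study's exact error definition may differ)."""
    intended, executed = np.asarray(intended), np.asarray(executed)
    n = min(len(intended), len(executed))
    return float(np.mean(np.linalg.norm(intended[:n] - executed[:n], axis=1)))
```

To reproduce a drawing, each simplified and rescaled coordinate would then be passed through net.predict to obtain target values for the eight motors.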
Conclusions
We expect the strongest simplifications in the first few drawings, with participants quickly adapting to the robot’s capabilities. This would imply that humans adjust to a robot’s capabilities when teaching it. Leveraging this human adaptability can simplify many challenges in household robot design, as minor human adaptations can mitigate major technological challenges. For example, a robot may not need the exact movement capabilities of a human, since the human can adapt their movements to the robot’s capabilities while teaching.
References
[1] A.-L. Vollmer et al., “People modify their tutoring behavior in robot-directed interaction for action learning,” 2009 IEEE 8th International Conference on Development and Learning, 2009. doi: 10.1109/devlrn.2009.5175516
[2] C. Mazzola et al., “Sketch it for the robot! How child-like robots’ joint attention affects humans’ drawing strategies,” 2024 IEEE International Conference on Development and Learning, 2024. [Preprint]