Embodied AI and Artificial Moral Agency

Authors

  • Maximilian Florka, University of Vienna

Abstract

This talk presents an ongoing theoretical project to clarify the relationship between the particular ways in which artificial agents are embodied and their capacity for moral agency. Recent advances in machine learning have opened unprecedented possibilities in cognitive robotics, raising difficult questions about the kinds of agency artificial cognitive systems can possess. The question of what is required for artificial moral agency (AMA) is of particular importance as increasingly sophisticated artificial agents are developed and integrated into our economic and personal lives.

In contemporary machine ethics, there is some consensus that moral sensitivity – “the uncodified, practical skill to recognise, in a range of situations, which features of situations are morally relevant, and how they are relevant” – is a precondition for AMA [1]. Thus far, however, artificial moral sensitivity has eluded developers. Top-down approaches, in which agent behavior is governed by fixed rule sets, have proven too rigid to guide behavior appropriately across the full range of situations an agent may encounter as it interacts with humans and other moral agents. Meanwhile, bottom-up approaches, in which artificial agents learn moral behavior from training examples, have so far failed to produce generalizations that correspond with human moral judgment [1].
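To make the contrast concrete, the following is a minimal, purely illustrative sketch; none of it is drawn from [1], and every name and example is hypothetical. A top-down agent screens situations against a fixed rule set, while a bottom-up agent generalizes from labeled cases, here via a toy nearest-neighbor classifier standing in for a learned model.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    deceives: bool = False
    harms: bool = False

# Top-down: a fixed, hand-written rule set screens behavior. Features the
# rules do not mention are invisible to the agent, which is the rigidity
# problem noted above.
FORBIDDEN_RULES = [
    lambda s: s.deceives,
    lambda s: s.harms,
]

def top_down_permissible(s: Situation) -> bool:
    return not any(rule(s) for rule in FORBIDDEN_RULES)

# Bottom-up: judgments are generalized from labeled training cases. A toy
# 1-nearest-neighbor lookup stands in for a learned model; producing
# generalizations that track human judgment is exactly what has proven hard.
TRAINING = [((1, 0), False), ((0, 1), False), ((0, 0), True)]

def bottom_up_permissible(s: Situation) -> bool:
    x = (int(s.deceives), int(s.harms))
    _, label = min(TRAINING,
                   key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], x)))
    return label

print(top_down_permissible(Situation(deceives=True)))  # False
print(bottom_up_permissible(Situation()))              # True
```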

There is reason to think that artificial agents require embodied interaction with the world to acquire the concepts necessary to recognize morally relevant features of situations. Indeed, embodied interaction with the world may be required to develop robust and flexible conceptual-intentional frameworks generally. As Pezzulo et al. put it, “generative models need to be learned through sensorimotor experience via exchange with a world that is actionable” [2]. Moreover, if empathy is involved in the development of moral sensitivity in humans, as some philosophers and psychologists have claimed, the capacity for shared embodied experience may be necessary for AMA [3]. Arguments for and against the idea that empathy is required for moral sensitivity are discussed, as are the prospects for building empathy into artificial agents and the challenges facing current approaches.
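As a gloss on the quoted claim from Pezzulo et al., here is a deliberately simple toy, not the active-inference formalism of [2]: an agent that can only learn the world's action-to-sensation mapping (a forward/generative model) by acting and observing the consequences. All names and constants below are illustrative.

```python
import random

# The world's hidden sensorimotor contingency, unknown to the agent.
def world(action: float) -> float:
    return 2.0 * action + 1.0 + random.gauss(0.0, 0.05)

w, b = 0.0, 0.0   # the agent's forward model: predicted = w * action + b
lr = 0.05         # learning rate

for _ in range(2000):
    a = random.uniform(-1.0, 1.0)   # motor babbling: try an action
    predicted = w * a + b           # predict the sensory consequence
    sensed = world(a)               # actually act and sense the result
    err = sensed - predicted        # prediction error drives learning
    w += lr * err * a
    b += lr * err

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0: the contingency
                                 # was learned only through exchange with
                                 # an actionable world
```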

Drawing on philosophical theories of moral sensitivity as well as on empirical research in neuroscience and psychology, the talk considers how several aspects of embodiment bear on moral sensitivity, including metabolic and hormonal processes, affective sensation, and interoception. Next, various approaches to artificially emulating such processes are examined, along with the plausibility of proposed computational analogues. Finally, the prospects for developing artificial moral agents are assessed in light of the current state of scientific understanding and the rate of technological progress.
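By way of illustration only, the following hypothetical sketch shows the kind of computational analogue at issue: an internal "interoceptive" state, with an energy budget and an arousal variable standing in for metabolic and hormonal processes, that modulates how the agent values outcomes from the inside rather than serving as just another external sensor. Whether such analogues are genuinely plausible is precisely what the talk assesses.

```python
from dataclasses import dataclass

@dataclass
class InteroceptiveState:
    energy: float = 1.0    # stands in for metabolic reserves
    arousal: float = 0.0   # stands in for affective/hormonal tone

    def update(self, effort: float, threat: float) -> None:
        self.energy = max(0.0, self.energy - 0.1 * effort)
        # arousal decays toward baseline but rises with perceived threat
        self.arousal = 0.9 * self.arousal + threat

def subjective_value(reward: float, state: InteroceptiveState) -> float:
    # The same external reward matters more when reserves are low and
    # arousal is high: valuation is modulated by the body-like state.
    return reward * (1.0 + (1.0 - state.energy)) * (1.0 + state.arousal)

s = InteroceptiveState()
s.update(effort=5.0, threat=0.5)
print(round(subjective_value(1.0, s), 2))  # 2.25: value shifted from within
```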

References

[1] Graff, J. (2024). Moral sensitivity and the limits of artificial moral agents. Ethics and Information Technology, 26(1). https://doi.org/10.1007/s10676-024-09755-9

[2] Pezzulo, G., Parr, T., Cisek, P., Clark, A., & Friston, K. J. (2024). Generating meaning: active inference and the scope and limits of passive AI. Trends in Cognitive Sciences, 28(2), 97–112. https://doi.org/10.1016/j.tics.2023.10.002

[3] Martinho, A., Poulsen, A., Kroesen, M., & Chorus, C. (2021). Perspectives about artificial moral agents. AI and Ethics, 1(4), 477–490. https://doi.org/10.1007/s43681-021-00055-2

Published

2024-06-10