Competing Approaches to Artificial General Intelligence


  • Maximilian Florka, University of Vienna


This research project explores the concept of artificial general intelligence (AGI) as it is currently used in cognitive science and AI engineering.

The key research question is whether any consensus exists about what AGI consists of, either in the scientific literature or in industry. This question has not yet been adequately addressed, primarily because AGI as a concept has only recently gained widespread currency, though its historical roots are deep.

Taking inspiration from Pei Wang [1], I argue that while there is no overall consensus about what constitutes AGI, approaches to developing AGI fall into two broad categories: single principle systems and multiple principle systems. Single principle systems aim to solve a wide range of problems using a single rational principle or learning approach, while multiple principle systems integrate various distinct problem-solving techniques into a single agent.

I argue that multiple principle systems constitute a more promising approach to AGI than single principle systems. I illustrate this point with a paradigm case: the Soar cognitive architecture. Soar was originally conceived as a single principle system by Allen Newell [2] but eventually morphed into a multiple principle system under John Laird [3]. I argue that this evolution reflects the unavoidable practical limitations of single principle systems and points to the greater promise of multiple principle systems.

My primary methodology is scientific literature review, mostly focused on contemporary work but with some attention to historical work. In addition, I review some public statements and media interviews by industry leaders.

This research is timely, given the present boom in AI investment, development, and deployment. Several industry leaders, including Sam Altman, CEO of OpenAI, have recently proclaimed their intention to pursue AGI, and the considerable resources they wield mean we should take these ambitions seriously. As the pursuit of AGI heats up, research like this has considerable implications for how scientists, policymakers, and the public should understand and evaluate claims about AGI.


[1] P. Wang, “On defining artificial intelligence,” Journal of Artificial General Intelligence, vol. 10, no. 2, pp. 1–37, 2019. doi:10.2478/jagi-2019-0002

[2] A. Newell, Unified Theories of Cognition. Cambridge, Mass.: Harvard Univ. Press, 1990.

[3] J. E. Laird, “Extending the Soar cognitive architecture,” Frontiers in Artificial Intelligence and Applications, vol. 171, pp. 224–235, Jan. 2008. doi:10.21236/ada473738