Promoting Epistemic Disagreement in Digital Spaces

Plurality and Artificial Intelligence


  • Maximilian Karge, University of Vienna


As with most forms of (new) technology, Artificial Intelligence (AI) – in most use cases denoting concepts of Machine Learning (ML) – can be seen as providing opportunities for societal and individual benefit, but also as posing dangers to democratic values and societies and as reinforcing existing problematic power structures. Many important decisions are already made by machine learning algorithms today, so their impact – although often unnoticed – is hardly deniable and will probably only grow. One step that could contribute to a more reflective and diverse landscape of knowledge, specifically in digital spaces and AI, might be the development of tools that detect and visualise epistemic disagreements on the web. Epistemic disagreement is understood here, in a deliberately reductionist sense, as any form of coexisting but differing accounts of the same phenomenon. For example, in the governance and societal management of the Covid-19 crisis, there was a wide range of different, often contradictory public statements, incentives and management strategies, which in turn implied different interests, goals and degrees of effectiveness. More generally, disagreement, understood as the coexistence of a plurality of epistemological lenses through which to examine phenomena, is important not only for making marginalised groups in society visible [1], but also for the (scientific) production of knowledge in general [1], [2]. Enabling a plurality of epistemological claims through AI tools could open paths to more inclusive and democratic discourses in digital spaces. Such tools and applications would, however, need to be conceptualised and implemented in a specific way in order to constitute a beneficial sociotechnical configuration [3]. This work aims to explore how such a tool could improve practices around the management of disagreements, especially in situations of societal crisis.
Specifically, the goal is to investigate contemporary academic discourse on crisis management and digital governance in the context of healthcare, in order to understand how epistemic disagreements are reified and practised in applied contexts. This is followed by an exploration of how introducing AI tools that visualise and represent such disagreements can benefit their management in a democratic and inclusive fashion. The expected result is an account of how epistemic disagreements are and are not understood in the context of digital health governance, of who is able to contribute and how disagreements are communicated, and of how such epistemic plurality in digital spaces might be shaped by AI tools aimed at finding, visualising and representing them.
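As a minimal illustration of what "finding" epistemic disagreement could mean computationally under the reductionist definition above, the sketch below flags topics on which differing stances coexist across sources. All names and data here are hypothetical; in a real system the topic and stance labels would be produced by upstream NLP components (e.g. claim clustering and stance detection), which are assumed as given input for this sketch.

```python
from collections import defaultdict

def find_disagreements(claims):
    """Return topics on which more than one stance coexists.

    `claims` is a list of (source, topic, stance, text) tuples.
    Topic and stance labels are assumed to come from upstream
    models; this function only surfaces the coexistence of
    differing accounts of the same phenomenon.
    """
    by_topic = defaultdict(lambda: defaultdict(list))
    for source, topic, stance, text in claims:
        by_topic[topic][stance].append((source, text))
    # A topic exhibits (reductionist) epistemic disagreement
    # if claims with more than one stance coexist on it.
    return {topic: dict(stances)
            for topic, stances in by_topic.items()
            if len(stances) > 1}

# Hypothetical claims from the Covid-19 governance example
claims = [
    ("agency_a", "mask_mandates", "support", "Masks reduce transmission."),
    ("agency_b", "mask_mandates", "oppose", "Mandates are not effective."),
    ("agency_a", "vaccination", "support", "Vaccines are safe and effective."),
]
disagreements = find_disagreements(claims)
# "mask_mandates" is flagged; "vaccination" is not, as only one stance exists.
```

The output maps each contested topic to its coexisting stances and their sources, which is the kind of structure a visualisation layer could then render.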


[1] D. Haraway, "Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective," Feminist Studies, vol. 14, no. 3, pp. 575-599, 1988.

[2] B. Latour, Science in Action: How to Follow Scientists and Engineers through Society. Cambridge, MA: Harvard University Press, 1987.

[3] M. Akrich, "The De-Scription of Technical Objects," in Shaping Technology/Building Society: Studies in Sociotechnical Change, W. E. Bijker and J. Law, Eds. Cambridge, MA: MIT Press, 1992, pp. 205-224.