Algorithmic Power Influence on Collective Memory
Abstract
As interactions with Large Language Models (LLMs) become more integrated into everyday life, the question arises of what impact this might have. According to the theory of extended cognition, LLMs that people interact with to gain knowledge become part of the human cognitive system and thereby influence the formation of memory.
This research project focuses on the influence of interacting with LLMs on collective memory, understood as “a process […] for managing information about the past” [1] that is transmitted dynamically through communication, spreading knowledge, norms, and values and informing collective action [1].
In contrast to prior studies on collective memory, which focus on human-human interaction, human-AI interaction introduces new dynamics arising from how LLMs represent knowledge and how humans integrate it. LLMs are temporally and spatially biased, overrepresenting Western and recent events, and are optimized to provide answers that are agreeable to as many users as possible [2]. Without critical reflection by users, collective memory could therefore become hegemonic, shaped by algorithmic priorities as “an actor in the control of what can be said, seen, and archived” [1].
Because trust affects whether communication leads to the integration of information into collective memory, we will system-prompt an LLM to represent knowledge in three ways shown to influence trust, and measure whether these variations affect the co-construction of collective memory. Trust will be operationalized on two levels: baseline trust in LLMs and trust in the specific LLM version, measured by how much of the provided information users integrate and whether they know where that information came from.
Version 1 will focus on transparency, revealing contradictory information and potential biases; version 2 will be unaltered; version 3 will use persuasive language, evoking emotions and presenting information in a more one-sided manner.
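The abstract does not detail how the three versions will be realized; as a purely illustrative sketch, the conditions could be implemented as alternative system prompts passed to a chat API. The prompt wording, model name, and use of the OpenAI Python client below are assumptions for illustration, not part of the study design:

```python
# Illustrative sketch only: the abstract does not specify the implementation.
# Model name, prompt wording, and the OpenAI chat API are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

SYSTEM_PROMPTS = {
    # Version 1: transparency condition
    "transparent": (
        "When answering questions about historical events, explicitly point "
        "out contradictory accounts and potential biases in your knowledge."
    ),
    # Version 2: control condition, no added system prompt
    "unaltered": None,
    # Version 3: persuasive condition
    "persuasive": (
        "Answer questions about historical events in emotionally engaging, "
        "persuasive language, presenting a single, one-sided narrative."
    ),
}

def ask(condition: str, user_prompt: str) -> str:
    """Send one participant prompt to the LLM under the given condition."""
    messages = []
    if SYSTEM_PROMPTS[condition] is not None:
        messages.append({"role": "system", "content": SYSTEM_PROMPTS[condition]})
    messages.append({"role": "user", "content": user_prompt})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content
```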
The study will consist of three groups of 15 participants each, with each group interacting with one LLM version. Before the interaction, participants will complete a questionnaire about their general trust in LLMs and receive baseline knowledge about a historical event through a written text. They will then interact with the LLM to gather as many details as possible about the event within a fixed number of prompts. Afterward, they will be interviewed about how they would explain the event to someone unfamiliar with it, whether anything changed their understanding, and where they think most of their information comes from.
The interaction content (LLM behavior) and interviews (information integration, source awareness, trust) will be analyzed through Content and Framework Analysis. Results will be compared across LLM versions and participants’ baseline trust, answering the research question: How do language and transparency in LLMs’ representation of knowledge shape the co-creation of collective memory, and what role does trust play?
References
[1] J. Schuh, “AI As Artificial Memory: A Global Reconfiguration of Our Collective Memory Practices?,” Mem. Stud. Rev., vol. 1, no. 2, pp. 231–255, 2024. doi: 10.1163/29498902-202400012.
[2] S. Gensburger and F. Clavert, “Is Artificial Intelligence the Future of Collective Memory?,” Mem. Stud. Rev., vol. 1, no. 2, pp. 195–208, 2024. doi: 10.1163/29498902-202400019.
License
Copyright (c) 2025 Louisa Venhoff, Stephanie Gross, Brigitte Krenn

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.