Exploring the Role of ChatGPT in Czech Culture

Authors

  • Klára Petrovická, Comenius University in Bratislava

Abstract

Large language models have recently garnered widespread popularity and significant media attention due to their impressive performance on various tasks. ChatGPT, a chatbot developed by OpenAI, is one such implementation of a large, pre-trained language model. It became a viral sensation due to (a) its natural-language capabilities (it can write poems, compose university essays and homework assignments, and produce passable code) and (b) its user-friendliness, accessibility, and availability [1]. Consequently, the application reached one million users in only five days. However, little hard evidence is available regarding its impact on society. Understanding these societal consequences is essential because it can provide insights into the potential success or failure of ChatGPT and mitigate the risks surrounding its safe deployment [2]. Doing so requires analyzing not only the model but also the social environment in which it operates, since AI systems alter human behavior and vice versa [3].

Methodology

We therefore conduct an interdisciplinary mixed-methods study exploring the role of ChatGPT in the specific cultural setting of the Czech Republic. We first provide a theoretical, systematic review of the model's architecture and its evolution, benefits, risks, and artificial general intelligence (AGI) potential. We then perform in-depth qualitative analyses of interviews with four experts (thematic analysis) and of 201 Czech news articles collected over three months (content analysis). Through these analyses, we identify the main topics surrounding ChatGPT in the Czech public discourse.

Results

While ChatGPT’s AGI narrative is discussed often, its ecological burden, business case, monetization practices, and the working conditions of the digital workers behind it are omitted. Our results show that adoption of ChatGPT in the Czech Republic is low, owing to the absence of a regulatory body and to fear of the new technology. This fear is artificially amplified by the media’s dominant coverage of topics such as technological competition, job losses, and a ChatGPT-induced apocalypse. The one topical exception is education, where sentiment is overwhelmingly positive. Regulatory action is needed to establish fair practices and mitigate potential societal risks.

References

[1] T. B. Brown et al., “Language models are few-shot learners,” arXiv:2005.14165v4, 2020. doi:10.48550/arXiv.2005.14165

[2] L. Weidinger et al., “Ethical and social risks of harm from Language Models,” arXiv:2112.04359v1, 2021. doi:10.48550/arXiv.2112.04359

[3] I. Rahwan et al., “Machine behaviour,” Nature, vol. 568, no. 7753, pp. 477–486, 2019. doi:10.1038/s41586-019-1138-y

Published

2023-06-05