How Are Published AI Ethics Guidelines Applied?
Abstract
The field of Artificial Intelligence (AI) is growing swiftly, extending its reach across a multitude of domains whilst heavily affecting societal dynamics and economic growth [1]. The greater the transformative impact of AI technology, the greater the need to regulate it becomes, given the risks and challenges at stake. These include the exploitation of AI for malicious purposes, such as the creation of cyber weapons, violations of privacy, or the risk of autonomous warfare systems. Further, numerous ethical questions arise, such as how liability is allocated between the AI provider and the system itself. These challenges have already been identified, and a range of guidelines and frameworks has been suggested or implemented on a national or international scale, such as the EU AI Act or the OECD AI Principles. They follow a common theme: the mitigation of potential risks and the assurance of responsible, trustworthy AI technology.
Amid the race between AI innovation and regulation, several crucial questions emerge, which this study tackles: Which AI ethical frameworks currently exist? How can they be differentiated? To what extent are they applied? Investigating these questions is critical to ensuring that AI guidelines deliver applicable value rather than remaining purely theoretical constructs. However, the literature currently lacks an overview of the available frameworks and an assessment of their practical effectiveness. This study therefore aims to bridge this gap using the following methodology:
Constructivist Grounded Theory, as formulated by Charmaz [2], is employed to generate a comprehensive set of existing AI ethical frameworks. This method was chosen for two reasons. First, it is an explorative and iterative approach that allows the research focus to shift flexibly as insights emerge during data collection. Second, it seeks to develop an explanatory theory through the comparison of patterns in the data rather than giving a merely descriptive account. In combination with data collection through a systematic literature review, following the structure suggested by Randles and Finnegan [3], four rounds of coding are carried out: Initial Coding, Focused Coding, Axial Coding, and Theoretical Coding. The final round involves the development of a theoretical framework that incorporates the application, including use-case examples, of current AI ethical frameworks. This research is expected to yield valuable insight into the practical implications and limitations of AI regulations, which can then be used to adapt or revise current frameworks for enhanced efficacy.
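To make the analysis pipeline concrete, the following minimal sketch (in Python) illustrates how the four coding rounds could be modelled as successive transformations of literature excerpts. It is an illustrative toy, not the study's actual analysis: all excerpts, codes, and categories in it are hypothetical placeholders.

# Illustrative sketch (not part of the study): modelling the four coding
# rounds of Constructivist Grounded Theory as a toy data pipeline.
# All excerpts, codes, and categories below are hypothetical examples.

from dataclasses import dataclass, field


@dataclass
class Excerpt:
    """A fragment of reviewed literature with the codes attached to it."""
    text: str
    initial_codes: list[str] = field(default_factory=list)


def initial_coding(excerpts: list[Excerpt]) -> list[Excerpt]:
    # Round 1: assign provisional, line-by-line codes to each excerpt.
    for e in excerpts:
        if "risk" in e.text.lower():
            e.initial_codes.append("risk mitigation")
        if "transparen" in e.text.lower():
            e.initial_codes.append("transparency")
    return excerpts


def focused_coding(excerpts: list[Excerpt]) -> dict[str, int]:
    # Round 2: keep only the most frequent, analytically useful codes.
    counts: dict[str, int] = {}
    for e in excerpts:
        for code in e.initial_codes:
            counts[code] = counts.get(code, 0) + 1
    return {code: n for code, n in counts.items() if n >= 2}


def axial_coding(focused: dict[str, int]) -> dict[str, list[str]]:
    # Round 3: relate the surviving focused codes to broader categories.
    return {"trustworthy AI": sorted(focused)}


def theoretical_coding(categories: dict[str, list[str]]) -> str:
    # Round 4: integrate categories into a candidate explanatory theory.
    core = ", ".join(f"{k} <- {v}" for k, v in categories.items())
    return f"Emerging theory linking: {core}"


if __name__ == "__main__":
    sample = [
        Excerpt("Guidelines emphasise risk assessment and transparency."),
        Excerpt("The framework mandates risk classification of AI systems."),
        Excerpt("Transparency obligations apply to providers."),
    ]
    coded = initial_coding(sample)
    print(theoretical_coding(axial_coding(focused_coding(coded))))

Running the script prints a one-line "emerging theory" that links the surviving focused codes, mirroring in miniature how Theoretical Coding integrates the categories developed in the earlier rounds.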
References
[1] L. Ricciardi Celsi, “The Dilemma of Rapid AI Advancements: Striking a Balance between Innovation and Regulation by Pursuing Risk-Aware Value Creation,” Information, vol. 14, no. 12, p. 645, 2023, doi: https://doi.org/10.3390/info14120645.
[2] K. Charmaz, Constructing Grounded Theory: A Practical Guide through Qualitative Analysis. London, U.K.: Sage Publications, 2006.
[3] R. Randles and A. Finnegan, “Guidelines for Writing a Systematic Review,” Nurse Education Today, vol. 125, Art. no. 105803, 2023, doi: https://doi.org/10.1016/j.nedt.2023.105803.