Knowledge-Driven Visual Analytics in Deep Learning

Authors

  • Harkeerat Mangat, University of Vienna

Abstract

The rapid advancement of deep learning models has elevated artificial intelligence to unprecedented levels, delivering optimised solutions to problems long considered too difficult to solve. As deep learning tools become increasingly integrated into various domains, there is growing interest in understanding how these algorithms work and in exploring their potential applications. Owing to their complex, non-linear structure, even the engineers who create these algorithms struggle to retrace the path that led to a specific result. This computation is therefore commonly described as a “black box”, reflecting the difficulty of making its results interpretable.

Interpretability in machine learning is the degree to which a human can consistently understand the cause of a decision. The higher the interpretability of a model, the easier it is for someone to comprehend why certain decisions or predictions have been made [1]. Visual Analytics has emerged as an approach that combines automated methods with information visualization to increase the interpretability of AI. In recent years, several surveys of visualization systems and Visual Analytics approaches that make deep learning algorithms interpretable have been published in this growing research domain [2].

Currently, most deep learning models are primarily data-driven, whereas knowledge-driven perspectives have received comparatively little attention. An open research opportunity is to combine human knowledge with deep learning techniques through interactive visualization [3]. A promising direction in Visual Analytics is to develop interactive visualization systems that not only make AI interpretable after a model has been trained, but also allow human knowledge to be propagated into the model during training. A human-in-the-loop Visual Analytics system that integrates human knowledge with data-driven learning could both interpret and shape AI in a way that is semantically meaningful and relevant to the user. Such an approach could pave the way for more personalised interpretations and novel uses of deep learning models through their co-development with users.
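
To make the human-in-the-loop idea concrete, the sketch below illustrates one possible mechanism, not the proposed system itself: feedback gathered from a user through an interactive visualization is folded into training as per-sample loss weights. The function collect_user_weights, the toy data, and the weighting scheme are illustrative assumptions introduced here for the sketch only.

    # Minimal human-in-the-loop training sketch (PyTorch), under the
    # assumptions stated above.
    import torch
    import torch.nn as nn

    def collect_user_weights(n_samples):
        # Hypothetical stand-in for feedback gathered through a visual
        # interface, e.g. the user up-weights samples they consider
        # semantically important. Here it simply returns uniform weights.
        return torch.ones(n_samples)

    # Toy data and model purely for illustration.
    X = torch.randn(128, 10)
    y = (X.sum(dim=1) > 0).long()
    model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss(reduction="none")  # keep per-sample losses

    for epoch in range(20):
        # Every few epochs, ask the human for updated weights via the
        # (assumed) visualization front end, so knowledge enters the model
        # during training rather than only post hoc.
        if epoch % 5 == 0:
            sample_weights = collect_user_weights(len(X))

        optimizer.zero_grad()
        per_sample_loss = loss_fn(model(X), y)
        loss = (sample_weights * per_sample_loss).mean()  # knowledge-weighted loss
        loss.backward()
        optimizer.step()

Per-sample weighting is only one of several ways human knowledge could enter training; relabelling, feature constraints, or architecture adjustments driven by the visualization are equally plausible under this framing.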

The first step of this research project is to survey the range of visualization techniques used to make deep learning interpretable. The overarching goal is to leverage these findings to develop an interactive visualization system that maintains a human knowledge-driven approach to the interpretation and co-development of deep learning models. In doing so, the project aims to enhance the interpretability of deep learning models, make them semantically relevant to individual users, and enable innovative, user-specific integration of deep learning into applications across various domains.

References

[1] C. Molnar, Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. [Zürich]: Christoph Molnar, 2019.

[2] F. Hohman, M. Kahng, R. Pienta, and D. H. Chau, “Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers,” IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 8, pp. 2674–2693, Aug. 2019, doi: 10.1109/TVCG.2018.2843369.

[3] J. Choo and S. Liu, “Visual Analytics for Explainable Deep Learning,” IEEE Computer Graphics and Applications, vol. 38, no. 4, pp. 84–92, Jul. 2018, doi: 10.1109/MCG.2018.042731661.

Published

2024-06-10