Incorporating LLMs to Reveal Metaphor Phenomena
Abstract
Why should we be bothered with metaphors at all? As thinking beings, humans exhibit many cognitive phenomena, and metaphor occupies a special place among them. Metaphors help us build up knowledge of the things around us by drawing comparisons between otherwise incomparable objects, starting from basic categorization.
Basic categorization is crucial in forming an understanding of the world around us. Its influence can be traced in how we operate in the physical environment, how we talk, and even how we think.
This leads humans to apply embodied knowledge to abstract ideas, meaning that abstract concepts remain grounded in sensorimotor perception.
If abstract concepts get their meaning via conceptual metaphor, and if complex conceptual metaphors are made up of primitive conceptual metaphors that get their meaning via embodied experience, then the meaning of concepts comes through embodied cognition [1].
Embodied cognition is a theory claiming that all cognitive functions have a bodily, experiential basis, and that cognition is founded on metaphorical mappings, which carry perceptual, sensorimotor information over to unimageable, abstract domains [2].
If we agree that everyone builds up knowledge mainly through embodied cognition, which yields a rather subjective world-view, how is it that people are able to communicate and transmit ideas via paintings, language, gestures, etc.?
Here we can assume that everyone lives in their own daydream, one very similar to everyone else's. The reason people understand each other is that every member of this shared dream has agreed upon certain things, such as agreeing that the color “red” denotes a particular visual effect we see daily on items in the shared dream. Once we accept the assumption that the world around us is purely a sensorimotor simulation, this creates an opportunity to build an artificial intelligence capable of sharing the same dream with us. How can we achieve this? We can try to train Large Language Models (LLMs) to understand spatial cognition. Such a model could later be trained to generate “grounded” metaphors that in turn improve other sensorimotor-related models. So far we have certain algorithms (Latent Semantic Analysis, Interpretation-based Processing) that help evaluate the similarity of two metaphors over a compiled text corpus [3]. The hidden gem, the phenomenon of how to implement spatial cognition in ML models, is still missing, and this research sets out to find it.
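To illustrate the kind of corpus-based similarity measure mentioned above, the following is a minimal sketch of Latent Semantic Analysis applied to two metaphorical sentences. The toy corpus, the number of latent dimensions, and the simple averaging of word vectors are illustrative assumptions; a real system in the spirit of [3] would operate on a large compiled corpus.

```python
import numpy as np

# Toy corpus standing in for a large compiled text corpus (assumption).
corpus = [
    "my lawyer is a shark",
    "the shark is a ruthless predator",
    "the lawyer argued the case in court",
    "time is money",
    "we spent the whole afternoon",
]

# Build the vocabulary and a term-document count matrix.
vocab = sorted({w for doc in corpus for w in doc.split()})
index = {w: i for i, w in enumerate(vocab)}
A = np.zeros((len(vocab), len(corpus)))
for j, doc in enumerate(corpus):
    for w in doc.split():
        A[index[w], j] += 1

# Truncated SVD projects words into a low-dimensional latent space.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
k = 3  # number of latent dimensions (a free parameter)
word_vecs = U[:, :k] * S[:k]  # latent vector for each vocabulary word

def sentence_vec(sentence):
    """Average the latent vectors of a sentence's in-vocabulary words."""
    words = [w for w in sentence.split() if w in index]
    return np.mean([word_vecs[index[w]] for w in words], axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Compare two metaphorical sentences in the latent space.
sim = cosine(sentence_vec("my lawyer is a shark"),
             sentence_vec("time is money"))
print(round(sim, 3))
```

Note that this sketches only the LSA half of the picture; Interpretation-based Processing, as described in [3], additionally selects context-relevant features of the vehicle when interpreting a metaphor.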
References:
[1] Lakoff, G. (2014) ‘Mapping the brain’s metaphor circuitry: Metaphorical thought in everyday reason’, Frontiers in Human Neuroscience, 8. doi:10.3389/fnhum.2014.00958.
[2] Forgács, B. (2021) ‘The pragmatic functions of metaphorical language’, Language, Cognition, and Mind, pp. 41–57. doi:10.1007/978-3-030-66175-5_4
[3] Kintsch, W. (2000) ‘Metaphor comprehension: A computational theory’, Psychonomic Bulletin & Review. Available at: https://link.springer.com/article/10.3758/bf03212981.