Towards Biologically Plausible Learning in Artificial Neural Networks
Standard error backpropagation is the most prominent supervised learning algorithm for artificial neural networks, yet it is widely considered biologically implausible. Learning in the brain is local and makes use of a bidirectional flow of information. We propose Universal Bidirectional Activation-based Learning (UBAL), a novel neural model based on the Contrastive Hebbian learning and recirculation algorithms. Our model extends existing work by implementing two mutually dependent, yet separate, weight matrices for the two directions of activation propagation, together with a learning rule whose novel hyperparameters weight the contributions of target-based and self-supervised learning. This allows our model to master qualitatively different tasks such as auto-encoding, denoising, and classification. Our results show that UBAL performs comparably to a standard multi-layer perceptron as well as to related biologically motivated state-of-the-art models. Due to its heteroassociative nature, UBAL is able to generate images of the learned classes as an emergent phenomenon, without being explicitly trained to do so.
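The core architectural idea, separate weight matrices for the forward and backward directions trained by a local, activation-based rule, can be illustrated with a minimal sketch. The actual UBAL model defines its own activation phases and update equations, which are not reproduced here; the layer sizes, the `beta`/`gamma` mixing coefficients, and the update rule below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two separate, direction-specific weight matrices (hypothetical sizes).
n_in, n_out = 4, 3
W_fwd = rng.normal(0.0, 0.1, size=(n_in, n_out))   # forward: input -> output
W_bwd = rng.normal(0.0, 0.1, size=(n_out, n_in))   # backward: output -> input

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(x, t, lr=0.1, beta=1.0, gamma=0.5):
    """One local, Hebbian-style update mixing a target-based term
    (weighted by beta) with a self-supervised reconstruction term
    (weighted by gamma). Illustrative only, not the published rule."""
    global W_fwd, W_bwd
    y = sigmoid(x @ W_fwd)          # forward prediction of the target
    x_echo = sigmoid(t @ W_bwd)     # backward reconstruction of the input
    # Target-based term: pull the forward prediction toward the target.
    W_fwd += lr * beta * np.outer(x, t - y)
    # Self-supervised term: pull the backward echo toward the input.
    W_bwd += lr * gamma * np.outer(t, x - x_echo)
    return y, x_echo
```

The backward matrix is what gives such a heteroassociative model its generative flavor: presenting a class label at the output side and propagating through `W_bwd` yields a reconstruction of a typical input for that class.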