Generative Traits of Universal Bidirectional Activation-Based Learning


  • Zuzana Halgasova, Comenius University in Bratislava



The most prominent supervised learning rule in artificial neural networks, error backpropagation (BP), is considered biologically implausible. Therefore, alternative learning rules have been proposed. Since learning in the brain is based on local interactions between presynaptic and postsynaptic neurons, these models likewise use only local activations. One of these models is GeneRec, proposed by O'Reilly. In this model, neuron activation is propagated bidirectionally.

In the brain, activation is backpropagated via separate synaptic weights, whereas GeneRec uses the same synaptic weights for both directions [3].

Our Model

Built on similar principles, Universal Bidirectional Activation-based Learning (UBAL) was proposed as a biologically plausible alternative to BP. In contrast to GeneRec, UBAL uses separate weight matrices W and M for each direction of activation flow [2]. UBAL also enriches GeneRec with an internal echo mechanism that enables self-supervised learning. It is essentially a heteroassociator and approaches every task, including classification, as a bidirectional mapping. An emergent property of UBAL is that it generates images from the data it learns to classify, without being trained to do so [2]. These images can be seen as the network's imagination.
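The core idea of separate weight matrices for the two directions of activation flow can be sketched as follows. This is a minimal illustration, not the authors' implementation: the layer sizes, the sigmoid activation, and the variable names (other than W and M from [2]) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

n_x, n_y = 784, 10  # illustrative: MNIST pixels -> class units

# Separate, independently learned weight matrices for each direction
W = rng.normal(scale=0.1, size=(n_x, n_y))  # forward weights:  x -> y
M = rng.normal(scale=0.1, size=(n_y, n_x))  # backward weights: y -> x

x = rng.random(n_x)          # an input pattern
y_pred = sigmoid(x @ W)      # forward flow: classification

t = np.zeros(n_y)
t[3] = 1.0                   # a one-hot class target
x_gen = sigmoid(t @ M)       # backward flow: an "imagined" image for class 3

# Unlike GeneRec, the backward pass does not reuse W (e.g. as W.T);
# M is a distinct parameter matrix.
assert M.shape == W.shape[::-1]
```

Because the backward matrix M is trained rather than tied to W, clamping a class label and propagating backward yields the generated images described above.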


The model was tested on the most prominent classification benchmark, MNIST, a database of handwritten digits commonly used for training image processing systems. UBAL's classification success rate on the test set is about 96%, which is comparable to similar models. Preliminary results on this database suggest that the generated images differ among network initializations and are distinct from the computed averages of all images in the dataset [2].

Hypothesis and Future Research

We hypothesize that images drawn by UBAL can be used to create adversarial examples. Adversarial examples are images designed to fool trained neural networks, typically created by adding noise or by exploiting the network's error gradients [1].
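A standard gradient-based construction is the fast gradient sign method (FGSM) of [1], which perturbs an input a small step in the direction of the sign of the loss gradient. The sketch below uses a toy logistic classifier purely for illustration; the weights, loss, and epsilon value are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.random(784)              # a flattened "image" with pixels in [0, 1]
w = rng.normal(size=784)         # weights of a toy logistic classifier

# For loss J(x) = -log sigmoid(w . x) (target class 1),
# the input gradient is dJ/dx = -(1 - p) * w, where p = sigmoid(w . x).
p = 1.0 / (1.0 + np.exp(-w @ x))
grad_x = -(1.0 - p) * w

# FGSM: step of size eps in the direction sign(dJ/dx), then clip to valid pixels
eps = 0.25
x_adv = np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)
```

The perturbation is bounded by eps in every pixel, which is why adversarial examples can remain visually close to the original image while changing the classifier's output.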

A further aim is to observe whether UBAL, given MNIST adversarial examples as input, classifies them correctly or as noise.


[1] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and Harnessing Adversarial Examples,” arXiv preprint arXiv:1412.6572, 2014.

[2] K. Malinovská and I. Farkaš, “Generative Properties of Universal Bidirectional Activation-Based Learning,” Lecture Notes in Computer Science, pp. 80–83, 2021, DOI: 10.1007/978-3-030-86365-4_7.

[3] R. C. O’Reilly, “Biologically Plausible Error-Driven Learning Using Local Activation Differences: The Generalized Recirculation Algorithm,” Neural Computation, vol. 8, no. 5, pp. 895–938, Jul. 1996, DOI: 10.1162/neco.1996.8.5.895.