A Similarity-based Normative Framework for Bio-plausible Neural Nets

Anirvan Sengupta1,2

1Flatiron Institute, 162 5th Ave, New York, NY 10010, USA

2Department of Physics and Astronomy, Rutgers University, 136 Frelinghuysen Rd, Piscataway, NJ 08854, USA

anirvans.physics [at] gmail.com

Abstract

In the last decade, Artificial Neural Nets (ANNs), rebranded as Deep Learning, have revolutionized the field of Artificial Intelligence. While these neural nets originated in analogy with the neural networks of the brain, they are trained in ways that differ markedly from how real neurons learn. For example, to date there is no satisfactory biologically plausible mechanism for backpropagation, the workhorse for training ANNs.

Motivated by this gap, we have explored alternative normative approaches to neural networks that could give rise to more plausible learning rules. One such approach, which works rather well for representation learning problems, is based on similarity matching or kernel alignment. In this approach, one demands that similar sensory inputs produce similar neural activities. From this rather limited constraint, one can derive interesting neural networks performing many common unsupervised learning tasks. I will illustrate, in particular, the case of representing continuous manifolds, such as those encoding spatial information. Here, this approach produces representations very much like place cells in the hippocampus. Consequences of our theory and its relation to some experiments will be discussed. Time permitting, I will also touch upon the role of similarity matching in current work on ANNs.
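To make the constraint concrete: in its simplest linear form, similarity matching minimizes ||X^T X - Y^T Y||_F^2 over the outputs Y, and optimizing this objective online yields a single-layer network with Hebbian feedforward and anti-Hebbian lateral plasticity. The sketch below is a minimal illustration of that idea, not the exact algorithm from the talk; the function name, learning rate, and initialization are illustrative assumptions.

```python
import numpy as np

def similarity_matching(X, k, eta=0.05, seed=0):
    """Stream the columns of X (shape d x T) through a single-layer network
    whose k outputs Y are trained so that Y^T Y tracks the input similarity
    matrix X^T X (the linear similarity-matching objective)."""
    d, T = X.shape
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / np.sqrt(d), size=(k, d))  # feedforward weights
    M = np.eye(k)                                        # lateral weights
    Y = np.empty((k, T))
    for t in range(T):
        x = X[:, t]
        # Recurrent neural dynamics settle at the fixed point of dy/dt = Wx - My.
        y = np.linalg.solve(M, W @ x)
        # Local, biologically plausible plasticity rules:
        W += eta * (np.outer(y, x) - W)  # Hebbian (pre- times post-synaptic)
        M += eta * (np.outer(y, y) - M)  # anti-Hebbian (post- times post-synaptic)
        Y[:, t] = y
    return Y

# Toy usage: 500 three-dimensional inputs projected onto 2 output neurons.
X = np.random.default_rng(1).normal(size=(3, 500))
Y = similarity_matching(X, k=2)
```

Note that both updates use only quantities available at the synapse itself, which is what makes such networks biologically plausible, in contrast to backpropagation.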

Keywords: neural networks, brain, representation learning

Acknowledgement: I acknowledge long and fruitful collaborations with Yanis Bahroun, Dmitri Chklovskii, Alexander Genkin, Cengiz Pehlevan, Shagesh Shridharan and Mariano Tepper that have informed my view. This work was partly supported by a grant (SF 626323) to the author from the Simons Foundation.
