CNS*2020 Online
Sunday, July 19, 2020 • 8:00pm - 9:00pm CEST
P75: The covariance perceptron: theory and application


Matthieu Gilson, David Dahmen, Ruben Moreno-Bote, Andrea Insabato, Moritz Helias

  LINK GOOGLE MEET: meet.google.com/ebs-fpcg-chu

Figures in the PDF poster are taken from the latest version of the paper (just accepted): https://www.biorxiv.org/content/10.1101/562546v4

1) Introduction

Many efforts in the study of the brain have focused on representations of stimuli by neurons and learning thereof. Our work [1] demonstrates the potential of a novel learning paradigm for neuronal activity with high variability, where distributed information is embedded in the correlation patterns.

2) Learning theory

We derive a learning rule to train a network to perform an arbitrary operation on spatio-temporal covariances of time series. To illustrate our scheme we use the example of classification, where the network is trained to perform an input-output mapping from given sets of input patterns to representative output patterns, one output per input group. This setup is the same as learning activity patterns for the classical perceptron [2], a central concept that has produced many fruitful theories in the fields of neural coding and learning in networks. For that reason, we refer to our classifier as the “covariance perceptron”. Compared to the classical perceptron, a conceptual difference is that we base information on the co-fluctuations of the input time series, i.e. their second-order statistics. In this way, robust information can be conveyed despite a high apparent variability in the activity. This approach is a radical change of perspective compared to classical approaches, which typically transform time series into a succession of static patterns and treat fluctuations as noise. On the technical side, our theory relies on multivariate autoregressive (MAR) dynamics, for which we derive the weight update (a gradient descent) such that input covariance patterns are mapped to given objective output covariance patterns.
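As an illustration of the idea, here is a minimal numpy sketch of such a gradient descent in the simplest setting we can state from the description above: a feedforward linear readout y = Bx, so that an input (zero-lag) covariance P maps to the output covariance Q = BPB^T. Minimizing E = ||Q_target − BPB^T||²_F by gradient descent gives the update ΔB ∝ (Q_target − Q)BP (using symmetry of P and of the error). This toy example is our own construction under these assumptions, not the paper's full MAR-based derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 10, 2  # numbers of input and output neurons

# random symmetric positive-definite input covariance pattern P
A = rng.normal(size=(m, m))
P = A @ A.T / m

# objective output covariance pattern (symmetric positive-definite)
C = rng.normal(size=(n, n))
Q_target = C @ C.T / n

B = 0.1 * rng.normal(size=(n, m))  # readout weights to be trained
eta = 0.01                         # learning rate

def error(B):
    """Frobenius distance between actual and objective output covariance."""
    Q = B @ P @ B.T
    return np.sum((Q_target - Q) ** 2)

err0 = error(B)
for _ in range(2000):
    Q = B @ P @ B.T
    Delta = Q_target - Q
    B += eta * 4 * Delta @ B @ P   # gradient descent on error(B)
err_final = error(B)

print(f"covariance mismatch: {err0:.3f} -> {err_final:.6f}")
```

After training, the output covariance BPB^T closely matches the objective pattern; classification then amounts to assigning each input covariance pattern to the nearest objective output pattern.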

3) Application to MNIST database

To further explore its robustness, we apply the covariance perceptron to the recognition of objects that move in the visual field by a network of sensory (input) and downstream (output) neurons. We use the MNIST database of handwritten digits 0 to 4. As illustrated in Fig. 1, the traces “viewed” by an input neuron exhibit large variability across presentations. Because we want to identify both the digit identity and its direction of motion, covariances of the input time series are necessary. We show that the proposed learning rule can successfully train the network to perform the classification task and to generalize robustly to unseen data.
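To make concrete why direction of motion requires covariances of the time series rather than equal-time statistics, consider a toy sketch (our own illustration, not the poster's MNIST pipeline): two input neurons see the same white-noise signal with opposite relative delays, mimicking a stimulus sweeping across the visual field in one direction or the other. The equal-time covariance is the same in both cases, but the time-lagged cross-covariance peaks on opposite sides of zero lag.

```python
import numpy as np

rng = np.random.default_rng(1)
T, tau = 5000, 5
s = rng.normal(size=T + 2 * tau)  # latent signal sweeping across the field

# input neuron 1 sees the signal; neuron 2 sees it delayed ("rightward")
# or advanced ("leftward") by tau time steps
x1 = s[tau:T + tau]
x2_right = s[:T]        # neuron 2 lags neuron 1  -> rightward motion
x2_left = s[2 * tau:]   # neuron 2 leads neuron 1 -> leftward motion

def xcov(x, y, lag):
    """Sample cross-covariance E[x(t) y(t+lag)] for lag >= 0 (zero-mean signals)."""
    return np.mean(x[:T - lag] * y[lag:]) if lag > 0 else np.mean(x * y)

# equal-time cross-covariances: near zero for BOTH directions (indistinguishable)
c0_right = xcov(x1, x2_right, 0)
c0_left = xcov(x1, x2_left, 0)

# lag-tau cross-covariances: the peak sits at +tau or -tau depending on direction
cr_plus = xcov(x1, x2_right, tau)   # ~1 for rightward motion
cl_plus = xcov(x1, x2_left, tau)    # ~0 for leftward motion
cl_minus = xcov(x2_left, x1, tau)   # ~1 for leftward motion (i.e. lag -tau)

print(f"equal-time: {c0_right:.2f} / {c0_left:.2f}; "
      f"lag +tau: {cr_plus:.2f} / {cl_plus:.2f}; lag -tau (left): {cl_minus:.2f}")
```

Only the lagged (spatio-temporal) covariance separates the two directions, which is why second-order statistics of the time series are the natural feature space here.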

4) Towards distributed spike-based information processing

We envisage future steps that transpose this work to information conveyed by higher-order statistics of spike trains, to obtain the supervised equivalent of spike-timing-dependent plasticity (STDP).


[1] M Gilson, D Dahmen, R Moreno-Bote, A Insabato, M Helias (accepted in PLoS Comput Biol) The covariance perceptron: A new framework for classification and processing of time series in recurrent neural networks. bioRxiv https://www.biorxiv.org/content/10.1101/562546v4

[2] CM Bishop (2006) Pattern Recognition and Machine Learning. Springer.

Matthieu Gilson

post-doc, Jülich Forschungszentrum

Slot 20