CNS*2020 Online
Sunday, July 19 • 8:00pm - 9:00pm
P86: Studying neural mechanisms in recurrent neural networks trained for multitasking depending on a context signal.

Cecilia Jarne


Most biological brains, as well as artificial neural networks, are capable of performing multiple tasks [1]. The mechanisms through which the same set of units performs simultaneous tasks are not yet entirely clear. Such systems can be modular, or show mixed selectivity with respect to some variable such as the sensory stimulus [2,3]. Building on the simple tasks studied in our previous work [4], where each task consists of processing temporal stimuli, we build and analyze a simple model that can perform multiple tasks selected by a contextual signal. We study various properties of our trained recurrent networks, as well as their response to damage to the connectivity. In this way we aim to shed light on mechanisms similar to those that could underlie multitasking in biological brains.
We use a simple RNN model with three layers: an input layer, a recurrent hidden layer, and an output layer. We focus on networks trained to process stimuli presented as temporal inputs. This work shows preliminary results from training networks on: (1) one-input tasks with one context input: time reproduction and finite-duration oscillation (Fig. 1); (2) two-input tasks with one context input: basic logic-gate operations (AND, OR, XOR) (Fig. 2).
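
Below is a minimal sketch, in Python with Keras/TensorFlow (the tools used in this work), of such a three-layer architecture. The layer sizes, activation, loss, and the use of SimpleRNN with the context appended as an extra input channel are illustrative assumptions, not necessarily the exact setup of this study.

from tensorflow import keras

n_steps = 100        # length of each temporal stimulus (assumed)
n_task_inputs = 2    # e.g. the two inputs of a logic-gate task
n_context = 1        # extra channel carrying the context signal
n_units = 100        # number of recurrent units (assumed)

# Input layer: task stimuli plus the context channel, given as time series.
inputs = keras.Input(shape=(n_steps, n_task_inputs + n_context))
# Recurrent hidden layer.
hidden = keras.layers.SimpleRNN(n_units, activation="tanh",
                                return_sequences=True)(inputs)
# Output layer: one readout signal per time step.
outputs = keras.layers.Dense(1)(hidden)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")

Training data would pair stimulus time series with the target output of whichever task the context channel selects, so that a single network learns all tasks.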
Preliminary results show that Keras and TensorFlow can be used successfully to train RNNs for context-dependent multitasking (the code is open source). As expected, more units are needed to perform well than when training on a single task. Regarding the dynamics, a small set of eigenvalues remains outside the unit circle and dominates the dynamics, as was found for individual tasks [5]. Fixed-point and oscillatory states coexist depending on the context and the input, and the oscillatory state remains on a manifold [6]. Regarding damage, between 10% and 12% of the weakest connections can be removed before the learned task deteriorates.
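
Both analyses can be sketched in a few lines of Python, again under the assumptions of the hypothetical model above: the spectrum of the trained recurrent weight matrix is compared against the unit circle, and damage is simulated by zeroing the weakest recurrent connections.

import numpy as np

rnn_layer = model.layers[1]                     # the SimpleRNN layer
kernel, W_rec, bias = rnn_layer.get_weights()   # [input, recurrent, bias] weights

# 1) Eigenvalue spectrum of the recurrent weight matrix vs. the unit circle.
eigvals = np.linalg.eigvals(W_rec)
outside = eigvals[np.abs(eigvals) > 1.0]
print(f"{outside.size} of {eigvals.size} eigenvalues lie outside the unit circle")

# 2) Damage: zero out the weakest 10% of recurrent connections
#    (the abstract reports tolerance up to roughly 10-12%).
fraction = 0.10
cutoff = np.quantile(np.abs(W_rec), fraction)
W_damaged = np.where(np.abs(W_rec) <= cutoff, 0.0, W_rec)
rnn_layer.set_weights([kernel, W_damaged, bias])
# ...then re-evaluate the trained tasks to check whether performance deteriorates.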


[1] Guangyu Robert Yang, Madhura R. Joglekar, Francis Song, William T. Newsome, and Xiao-Jing Wang. Task representations in neural networks trained to perform many cognitive tasks. Nature Neuroscience 2019, 22(2). DOI: 10.1038/s41593-018-0310-2

[2] Guangyu Robert Yang, Michael W. Cole, and Kanaka Rajan. How to study the neural mechanisms of multiple tasks. Current Opinion in Behavioral Sciences 2019, 29:134–143. https://doi.org/10.1016/j.cobeha.2019.07.001

[3] Mattia Rigotti, Omri Barak, Melissa R. Warden, Xiao-Jing Wang, Nathaniel D. Daw, Earl K. Miller, and Stefano Fusi. The importance of mixed selectivity in complex cognitive tasks. Nature 2013, 497:585.

[4] Cecilia Jarne and Rodrigo Laje. A detailed study of recurrent neural networks used to model tasks in the cerebral cortex. https://arxiv.org/abs/1906.01094v3

[5] Cecilia Jarne. The dynamics of Recurrent Neural Networks trained for temporal tasks and the eigenvalue spectrum. https://arxiv.org/abs/2005.13074

[6] Saurabh Vyas, Matthew D. Golub, David Sussillo, and Krishna V. Shenoy. Computation Through Neural Population Dynamics. Annual Review of Neuroscience 2020, 43:249–275.

Speakers

Cecilia Jarne

Researcher and Professor, Department of Science and Technology, National University of Quilmes and CONICET
My main research area is the study of the dynamical aspects of Recurrent Neural Networks trained to perform different bio-inspired tasks and decision making. I study training methods, implementations, and how different degrees of damage affect trained networks. My second research...



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 13