CNS*2020 Online has ended
Monday, July 20, 2020 • 9:00pm - 10:00pm CEST
P138: Processing capacity of recurrent spiking networks


Tobias Schulte to Brinke, Fahad Khalid, Renato Duarte, Abigail Morrison
One of the most prevalent characteristics of neurobiological systems is the abundance of recurrent connectivity. Regardless of the spatial scale considered, recurrence is a fundamental design principle and a core anatomical feature, permeating the micro-, meso- and macroscopic levels. In essence, the brain (and, in particular, the mammalian neocortex) can be seen as a large recurrent network of recurrent networks. Despite the ubiquity of these observations, it remains unclear whether recurrence and its biophysical properties correspond to important functional specializations and, if so, to what extent.

Intuitively, from a computational perspective, recurrence allows information to be propagated in time: past information reverberates and influences online processing, endowing circuits with memory and sensitivity to temporal structure. However, even in its simpler formulations, the functional relevance and computational consequences of recurrence in biophysical models of spiking networks are neither clear nor unambiguous, and its effects vary with the type and characteristics of the system under analysis and the nature of the computational task. It would therefore be extremely useful, from both an engineering and a neurobiological perspective, to know to what extent recurrence is necessary for neural computation.

In this work, we set out to quantify the extent to which recurrence modulates a circuit's computational capacity, by systematically measuring its ability to perform arbitrary transformations on an input, following [1]. By varying the strength and density of recurrent connections in balanced networks of spiking neurons, we evaluate the effect of recurrence on the complexity of the transformations the circuit can carry out and on the memory it is able to sustain.
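As a rough sketch of the capacity measure of [1]: the capacity of a system for a given target function is the fraction of the target's variance that an optimal linear readout of the system's state variables can reconstruct. A minimal illustration follows; the function and variable names are ours, not taken from the software described here.

```python
import numpy as np

def capacity(states, target):
    """Fraction of the target signal reconstructable by a linear
    readout of the state variables (a squared-correlation measure,
    in the spirit of Dambre et al., 2012).

    states: (T, N) matrix of state variables over T time steps.
    target: (T,) desired transformation of the input history.
    """
    # Center both signals so the readout needs no bias term.
    X = states - states.mean(axis=0)
    y = target - target.mean()
    # Least-squares linear readout: y_hat = X @ w.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_hat = X @ w
    # 1 - normalized residual error; 1 means perfect reconstruction.
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum(y ** 2)
```

A target that is an exact linear combination of the states yields a capacity near 1, while an unrelated target yields a value near 0 (up to a finite-sample bias); the total capacity over an orthogonal family of targets is what bounds the system's processing power in [1].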
Preliminary results demonstrate constraints on recurrent connectivity that optimize the network's processing capabilities for mappings that involve both linear memory and varying degrees of nonlinearity.

Additionally, given that the metric we employ is particularly computationally heavy (it evaluates the system's capacity to represent thousands of target functions), a careful optimization and parallelization strategy is required to apply it to networks of neuroscientific interest. We present software that pre-computes the thousands of necessary target polynomial functions for each point in a large combinatorial space, accesses these target functions through an efficient lookup operation, caches functions that are called multiple times with the same inputs, and optimizes the most compute-intensive hotspots with Cython. Combined with MPI for inter-node communication, this yields a highly scalable and computationally efficient implementation for determining the processing capacity of a dynamical system.
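The pre-compute-and-cache strategy for target functions might be sketched as follows. This is an illustrative reconstruction under our own assumptions, not the authors' implementation: it memoizes product-of-Legendre-polynomial target functions (the target family used in [1]) keyed by hashable degree/delay tuples, so that repeated requests for the same target become cache lookups.

```python
import numpy as np
from functools import lru_cache
from numpy.polynomial import legendre

@lru_cache(maxsize=None)
def target_function(degrees, delays, input_key):
    """Build (or look up) one target function: a product of Legendre
    polynomials of the given degrees, each applied to a delayed copy
    of the input sequence. All arguments are hashable tuples so the
    result can be memoized. Callers must not mutate the returned
    array, since it is shared via the cache."""
    u = np.asarray(input_key, dtype=float)
    y = np.ones(len(u))
    for d, tau in zip(degrees, delays):
        shifted = np.roll(u, tau)
        shifted[:tau] = 0.0  # zero-pad instead of wrapping around
        # Coefficient vector selecting the degree-d Legendre polynomial.
        coeffs = np.zeros(d + 1)
        coeffs[d] = 1.0
        y = y * legendre.legval(shifted, coeffs)
    return y
```

For example, `target_function((1,), (0,), u)` is just the input itself (the degree-1 Legendre polynomial is the identity), and a second call with the same arguments returns the cached array without recomputation; in a real implementation the cache would be filled once per point of the combinatorial space before the capacity evaluation loop.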

Acknowledgments
The authors gratefully acknowledge the computing time granted by the JARA Vergabegremium and provided on the JARA Partition part of the supercomputer JURECA at Forschungszentrum Jülich and the technical support of the Simulation Lab Neuroscience.

References
[1] Dambre J, Verstraeten D, Schrauwen B, Massar S. Information processing capacity of dynamical systems. Sci Rep. 2012;2:514.

Speakers

Tobias Schulte to Brinke

Doctoral Researcher, Institute of Neuroscience and Medicine (INM-6), Jülich Research Center



Slot 05