CNS*2020 Online has ended
Welcome to the Sched instance for CNS*2020 Online! Please read the instruction document for detailed information on CNS*2020.
Saturday, July 18
 

12:00pm CEST

Characterizing neural dynamics using highly comparative time-series analysis
Ben D. Fulcher

T7: Massive open datasets of neural dynamics, from microscale neuronal circuits to macroscale population-level recordings, are becoming increasingly available to the computational neuroscience community. There are myriad ways to quantify different types of structure in the univariate dynamics of any individual component of a neural system, including methods from statistical time-series modeling, the nonlinear time-series analysis literature from physics, and methods derived from information theory. Across this interdisciplinary literature of thousands of time-series analysis methods, each method gives unique information about the measured dynamics. However, the choice of analysis methods in any given study is typically subjective, leaving open the possibility that alternative methods might yield better understanding or performance for a given task.

In this tutorial, I will introduce highly comparative time-series analysis, implemented as the software package hctsa, which partially automates the selection of useful time-series analysis methods from an interdisciplinary library of over 7000 time-series features. I will demonstrate how hctsa can be used to extract useful information from various neural time-series datasets. We will work through a range of applications using fMRI (mouse and human) and EEG (human) time-series datasets, including how to: (i) determine the relationship between structural connectivity and fMRI dynamics in mouse and human; (ii) understand the effects of targeted brain stimulation with DREADDs using mouse fMRI; and (iii) classify seizure dynamics and extract sleep-stage information from EEG.

Tutorial Website

Software tools
[1] If you want to play along at home, you can read the README and install the hctsa software package (Matlab): https://github.com/benfulcher/hctsa
[2] hctsa documentation: https://hctsa-users.gitbook.io/hctsa-manual/

References and background reading
[1] B.D. Fulcher, N.S. Jones. hctsa: A computational framework for automated time-series phenotyping using massive feature extraction. Cell Systems 5(5): 527 (2017). https://doi.org/10.1016/j.cels.2017.10.001
[2] B.D. Fulcher, M.A. Little, N.S. Jones. Highly comparative time-series analysis: the empirical structure of time series and their methods. J. Roy. Soc. Interface 10, 20130048 (2013). https://doi.org/10.1098/rsif.2013.0048 
  
---Attendance Instructions (ZOOM)---
Topic: CNS 2020 Tutorial: Characterizing neural dynamics using highly comparative time-series analysis 
Time: Jul 18, 2020 08:00 PM Canberra, Melbourne, Sydney
Join URL: https://uni-sydney.zoom.us/j/92853577660?pwd=eWJydWczR3pRUkhkQ05QS3N0bjNIZz09
Password: 743296
Discussion/Questions in this Neurostars thread

Speakers

Ben D. Fulcher

Senior Lecturer, School of Physics, The University of Sydney
I like dynamics and time-series analysis, and building and analyzing models of complex systems like the brain.



Saturday July 18, 2020 12:00pm - 3:00pm CEST
Link (T7)

3:00pm CEST

K1: Deep reinforcement learning and its neuroscientific implications
Matthew Botvinick

Neurostars discussion

The last few years have seen some dramatic developments in artificial intelligence research. What implications might these have for neuroscience? Investigations of this question have, to date, focused largely on deep neural networks trained using supervised learning, in tasks such as image classification. However, there is another area of recent AI work which has so far received less attention from neuroscientists, but which may have more profound neuroscientific implications: deep reinforcement learning. Deep RL offers a rich framework for studying the interplay among learning, representation and decision-making, offering to the brain sciences a new set of research tools and a wide range of novel hypotheses. I'll provide a high-level introduction to deep RL, discuss some recent neuroscience-oriented investigations from my group at DeepMind, and survey some wider implications for research on brain and behavior.

Speakers

Matthew Botvinick

Director of Neuroscience Research, DeepMind


Saturday July 18, 2020 3:00pm - 4:00pm CEST
Crowdcast
  Keynote
  • Moderator Thomas Novotny; Shirin Dora

4:00pm CEST

The use of Keras with TensorFlow applied to neural models and data analysis
Cecilia Jarne

Zoom link (updated July 18th)
Meeting ID: 661 8352 0802
Password: 786716

Video: https://www.youtube.com/watch?v=5mKF6HGOvgs

Slides and exercises on the Tutorial Website


T5: This tutorial will help participants implement and explore simple neural models using Keras [1], and implement neural networks that apply deep learning tools to data analysis. It will include an introduction to modeling and hands-on exercises. The tutorial will focus on Keras, an open-source framework for developing neural networks for rapid prototyping and simulation, with TensorFlow [2] as the backend. The tutorial will show how models can be built and explored using Python. The hands-on exercises will demonstrate how Keras can be used to rapidly explore the dynamics of a network.

Keras is a framework that greatly simplifies the design and implementation of neural networks of many kinds (regular classifiers, convolutional neural networks, and LSTMs, among others). This mini-course is split into two sections: first we will introduce the main features of Keras, showcasing some examples; then we will work through two guided online hands-on exercise sets to consolidate the material.
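A minimal, self-contained Keras sketch of the kind of model the tutorial builds, assuming a TensorFlow backend; the layer sizes and the synthetic data are illustrative placeholders, not the tutorial's actual exercises:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Synthetic data standing in for a real dataset: 20 input features,
# with a binary label indicating whether the features sum to a positive value.
x = np.random.randn(1000, 20).astype("float32")
y = (x.sum(axis=1) > 0).astype("float32")

# A small fully connected classifier built with the Sequential API.
model = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(20,)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(x, y, verbose=0))  # [loss, accuracy]
```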

Tutorial Website
For this tutorial, you will need basic knowledge of NumPy, SciPy, and matplotlib. To be able to carry out the tutorial, students need a laptop with Linux and these libraries installed:
  • Python
  • NumPy
  • SciPy
  • Matplotlib
  • scikit-learn
  • TensorFlow
  • Keras
I recommend the following sites, which explain how to install packages that include the libraries named above and some additional tools:
  • https://www.anaconda.com/distribution/
  • https://www.tensorflow.org/install/
  • https://keras.io/
[1] Francois Chollet et al. Keras. https://keras.io, 2015.
[2] Martín Abadi, et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015.

Speakers

Cecilia Jarne

Researcher and Professor, Department of Science and Technology, National University of Quilmes and CONICET
My main research area is the study of the dynamical aspects of Recurrent Neural Networks trained to perform different bio-inspired tasks and decision making. I study training methods, implementations, and how different degrees of damage affect trained networks. My second research…


Saturday July 18, 2020 4:00pm - 7:00pm CEST
Link (T5)

4:00pm CEST

Tools and techniques to bridge the gap between models and closed-loop neuroscience experiments
Pablo Varona, Manuel Reyes-Sanchez, Rodrigo Amaducci

Check the website for more information
gnb-uam.github.io/CNS2020-ClosedLoopNeuroscienceTutorial

Join the session
meet.google.com/cgc-wuvm-idx

T3: Models in computational neuroscience are typically used to reproduce and explain experimental findings, to draw new hypotheses from their predictive power, to deal with the low observability of the brain, etc. Computational models can also be employed to interact directly with living nervous systems, which is a powerful way of unveiling key neural dynamics by combining experimental and theoretical efforts. However, protocols that simultaneously combine recordings from living neurons and input/output from computational models are not easy to design or implement. In this tutorial, we will describe several tools and techniques to build such open- and closed-loop interactions: from basic dynamic-clamp approaches for building hybrid circuits to more complex configurations that can include several interacting living and artificial elements. We will emphasize the need for open-source real-time software technology for some of these interactions.

In particular, we will focus on two software packages that can implement closed-loop interactions between living neurons and computational neuroscience models. The first one, RTHybrid, is a solution for building hybrid circuits between living neurons and models. This program, developed by the organizers, includes a library of neuron and synapse models and different algorithms for the automatic calibration and adaptation of hybrid configurations. The second software tool, RTXI, allows users to program specific modules to implement a wide variety of closed-loop configurations, and includes many handy modularization and visualization tools. Both programs can be used in a wide range of hybrid experimental designs and deal with real-time constraints. During the tutorial, we will show how to install and use these programs on standard computer platforms, and we will give attendees the possibility of building and testing their first designs.

Software tools
Important: for the practical part of the tutorial, please download the latest RTXI version from http://rtxi.org/install/ beforehand. It is not necessary to install it on your computer for this tutorial: you can simply create a live USB and boot from the live image following the instructions on the web, or install it in a virtual machine.

 

Also, please download RTHybrid modules for RTXI from https://github.com/GNB-UAM/rthybrid-for-rtxi and install them following the instructions.


Speakers

Pablo Varona

Professor, Grupo de Neurocomputación Biológica, Escuela Politécnica Superior, Universidad Autónoma de Madrid

Rodrigo Amaducci

PhD Student, Grupo de Neurocomputación Biológica (GNB), Universidad Autónoma de Madrid

Manuel Reyes-Sanchez

PhD Student, Grupo de Neurocomputación Biológica, Universidad Autónoma de Madrid
Hybrid circuits, closed-loop, computational neuroscience, machine learning.


Saturday July 18, 2020 4:00pm - 7:00pm CEST
Link (T3)

4:00pm CEST

Building mechanistic multiscale models, from molecules to networks, using NEURON and NetPyNE
 CNS*2020 Tutorial code and comments
NetPyNE Slides
Video Presentation

Salvador Dura-Bernal, Robert A McDougal, William W Lytton 


T2: Understanding brain function requires characterizing the interactions occurring across many temporal and spatial scales. Mechanistic multiscale modeling aims to organize and explore these interactions. In this way, multiscale models provide insights into how changes at molecular and cellular levels, caused by development, learning, brain disease, drugs, or other factors, affect the dynamics of local networks and of brain areas. Large neuroscience data-gathering projects throughout the world (e.g. US BRAIN, EU HBP, Allen Institute) are making use of multiscale modeling, including the NEURON ecosystem, to better understand the vast amounts of information being gathered using many different techniques at different scales.

This tutorial will introduce multiscale modeling using two NIH-funded tools: the NEURON simulator [1], including the Reaction-Diffusion (RxD) module [2,3], and the NetPyNE tool [4]. The tutorial will include background, examples, and hands-on exercises covering the implementation of models at four key scales: (1) intracellular dynamics (e.g. calcium buffering, protein interactions), (2) single neuron electrophysiology (e.g. action potential propagation), (3) neurons in extracellular space (e.g. spreading depression), and (4) networks of neurons. For network simulations, we will use NetPyNE, a high-level interface to NEURON supporting both programmatic and GUI specification that facilitates the development, parallel simulation, and analysis of biophysically detailed neuronal circuits. We conclude with an example combining all three tools that links intracellular molecular dynamics with network spiking activity and local field potentials. Basic familiarity with Python is recommended. No prior knowledge of NEURON or NetPyNE is required; however, participants are encouraged to download and install each of these packages prior to the tutorial.
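For a flavour of the network-level part, here is a minimal NetPyNE sketch (my own illustration, not the tutorial's code); the population size, Hodgkin-Huxley parameters and synaptic weights are placeholder values:

```python
from netpyne import specs, sim

netParams = specs.NetParams()

# One excitatory population of 20 Hodgkin-Huxley single-compartment cells.
netParams.popParams['E'] = {'cellType': 'PYR', 'numCells': 20}
netParams.cellParams['PYRrule'] = {
    'conds': {'cellType': 'PYR'},
    'secs': {'soma': {'geom': {'diam': 18.8, 'L': 18.8, 'Ra': 123.0},
                      'mechs': {'hh': {'gnabar': 0.12, 'gkbar': 0.036,
                                       'gl': 0.003, 'el': -70}}}}}

# Excitatory synapse model, background drive, and recurrent connectivity.
netParams.synMechParams['exc'] = {'mod': 'Exp2Syn', 'tau1': 0.1, 'tau2': 5.0, 'e': 0}
netParams.stimSourceParams['bkg'] = {'type': 'NetStim', 'rate': 10, 'noise': 0.5}
netParams.stimTargetParams['bkg->E'] = {'source': 'bkg', 'conds': {'pop': 'E'},
                                        'weight': 0.01, 'delay': 5, 'synMech': 'exc'}
netParams.connParams['E->E'] = {'preConds': {'pop': 'E'}, 'postConds': {'pop': 'E'},
                                'probability': 0.1, 'weight': 0.005, 'delay': 5,
                                'synMech': 'exc'}

simConfig = specs.SimConfig()
simConfig.duration = 1000  # ms
simConfig.analysis['plotRaster'] = True

sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
```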

TUTORIAL WEBSITE

Schedule (NY time):

10am - 1pm: NEURON and RxD
1pm - 4pm: NetPyNE (GUI and coding)

See website for detailed schedule: TUTORIAL WEBSITE

References:
  1. Lytton WW, Seidenstein AH, Dura-Bernal S, McDougal RA, Schürmann F, Hines ML. Simulation Neurotechnologies for Advancing Brain Research: Parallelizing Large Networks in NEURON. Neural Comput. 28, 2063–2090, 2016.
  2. McDougal R, Hines M, Lytton W. (2013) Reaction-diffusion in the NEURON simulator. Front. Neuroinform. 7, 28. 10.3389/fninf.2013.00028
  3. Newton AJH, McDougal RA, Hines ML and Lytton WW (2018) Using NEURON for Reaction-Diffusion Modeling of Extracellular Dynamics. Front. Neuroinform. 12, 41. 10.3389/fninf.2018.00041
  4. Dura-Bernal S, Suter B, Gleeson P, Cantarelli M, Quintana A, Rodriguez F, Kedziora DJ, Chadderdon GL, Kerr CC, Neymotin SA, McDougal R, Hines M, Shepherd GMG, Lytton WW. (2019) NetPyNE: a tool for data-driven multiscale modeling of brain circuits. eLife 2019;8:e44494

Speakers

Robert McDougal

Assistant Professor, Yale University, USA
I'm an Assistant Professor in the Health Informatics division of Biostatistics, and a developer for NEURON and ModelDB. Computationally and mathematically, I'm interested in dynamical systems modeling and applications of machine learning and NLP to gain insights into the nervous system…

William W Lytton

Professor, SUNY Downstate, USA

Joe Graham

Research Scientist, SUNY Downstate, USA

Salvador Dura-Bernal

Assistant Professor, State University of New York (SUNY) Downstate



Saturday July 18, 2020 4:00pm - 10:00pm CEST
Link (T2)

4:00pm CEST

New interfaces for teaching with NEST: hands-on with the NEST Desktop GUI and NESTML code generation
Charl Linssen, Sebastian Spreizer, Renato Duarte 

T1: NEST is an established community software package for the simulation of spiking neuronal network models that capture the full detail of biological network structures [1]. The simulator runs efficiently on a range of architectures, from laptops to supercomputers [2]. Many peer-reviewed neuroscientific studies have used NEST as a simulation tool over the past 20 years. More recently, it has become a reference code for research on neuromorphic hardware systems [3].
This tutorial provides hands-on experience with recent improvements to NEST. In the past, starting out with NEST could be challenging for computational neuroscientists, as models and simulations had to be programmed using SLI, C++ or Python. NEST Desktop changes this: it is an entirely graphical approach to the construction and simulation of neuronal network models. It runs installation-free in the browser and has proven its value in several university courses. This opens NEST to the teaching of neuroscience to students with little programming experience.
NESTML complements this new interface by streamlining the development of neuron and synapse models. Advanced researchers often want to study specific features not provided by the models already available in NEST. Instead of having to turn to C++, with NESTML they can write down differential equations and the necessary state transitions in the mathematical notation they are used to. These descriptions are then automatically processed to generate machine-optimised code.
After a quick overview of the current status of NEST and upcoming new functionality, the tutorial works through a concrete example [4] to show how the combination of NEST Desktop and NESTML can be used in the modern workflow of a computational neuroscientist.
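For orientation, a minimal scripted NEST (PyNEST) example of the kind of network NEST Desktop constructs graphically; all numbers are illustrative, and note that the recorder is called spike_recorder in NEST 3.x but spike_detector in NEST 2.x:

```python
import nest

nest.ResetKernel()

# A small population of integrate-and-fire neurons driven by Poisson noise.
neurons = nest.Create("iaf_psc_alpha", 100)
noise = nest.Create("poisson_generator", params={"rate": 8000.0})
recorder = nest.Create("spike_recorder")  # "spike_detector" in NEST 2.x

nest.Connect(noise, neurons, syn_spec={"weight": 10.0, "delay": 1.5})
nest.Connect(neurons, neurons,
             conn_spec={"rule": "fixed_indegree", "indegree": 10},
             syn_spec={"weight": 1.0, "delay": 1.5})
nest.Connect(neurons, recorder)

nest.Simulate(1000.0)  # ms
print("Spikes recorded:", nest.GetStatus(recorder, "n_events")[0])
```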

Tutorial Website

Video stream link

The tutorial session will take place at the following link:
https://rwth.zoom.us/j/92601423152?pwd=QkpKRWNuQy9TVTJjODlsVVp0NUEwZz09
Meeting-ID: 926 0142 3152
Password: 6vp&Yh

For more instructions please see our tutorial website.

References
  1. Gewaltig M-O & Diesmann M (2007) NEST (Neural Simulation Tool) Scholarpedia 2(4):1430
  2. Jordan J., Ippen T., Helias M., Kitayama I., Sato M., Igarashi J., Diesmann M., Kunkel S. (2018) Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers. Frontiers in Neuroinformatics 12: 2
  3. Gutzen R., von Papen, M., Trensch G., Quaglio P. Grün S., Denker M. (2018) Reproducible Neural Network Simulations: Statistical Methods for Model Validation on the Level of Network Activity Data. Frontiers in Neuroinformatics 12 (90)
  4. Duarte R. & Morrison A. (2014). “Dynamic stability of sequential stimulus representations in adapting neuronal networks”, Front. Comput. Neurosci.

Speakers

Sebastian Spreizer

PostDoc, Trier University, Germany
Developer of the educational web application NEST Desktop (https://nest-desktop.readthedocs.io/en/latest/). Currently, I am looking for a job and am interested in developing front-end applications in scientific fields. Please contact me if you know of an open position of this kind. Many…

Charl Linssen

Jülich Research Centre, Germany

Renato Duarte

Postdoctoral Researcher, Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich Research Center


Saturday July 18, 2020 4:00pm - 10:00pm CEST
Link (T1)

7:00pm CEST

Methods from Data Science for Model Simulation, Analysis, and Visualization
Cengiz Gunay, Anca Doloc-Mihu

T6: Computational neuroscience projects often involve a large number of simulations for parameter searches of computer models, which generate a large amount of data. With advances in computer hardware, software methods, and cloud computing making this task easier, the amount of collected data has exploded, as it has in many fields. High-performance computing (HPC) methods have been used in computational neuroscience for a while; however, novel data science and big data methods are used less frequently. In this tutorial, we will review established HPC methods and introduce novel data science tools for computational neuroscience workflows, from the industry standard Apache Hadoop (https://hadoop.apache.org/) to newer tools such as Apache Spark (https://spark.apache.org/). These tools can be used for either model simulation or post-processing and analysis of the generated data. To visualize the data, we will review novel web-based interactive dashboard technologies, mostly based on JavaScript and Python.
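As a taste of the Spark part, a minimal PySpark sketch for post-processing simulation output; the file layout and the column names (g_na, g_k, spike_rate) are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start a local Spark session; on a cluster, point .master() at the cluster instead.
spark = SparkSession.builder.master("local[*]").appName("sim-analysis").getOrCreate()

# Hypothetical CSVs of simulation results: one row per (parameter set, trial).
df = spark.read.csv("results/*.csv", header=True, inferSchema=True)

# Aggregate the measured spike rate across trials for each parameter set.
summary = (df.groupBy("g_na", "g_k")
             .agg(F.mean("spike_rate").alias("mean_rate"),
                  F.stddev("spike_rate").alias("sd_rate")))
summary.orderBy("mean_rate", ascending=False).show(10)
```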


Discussion Link on NeuroStars for comments

Tutorial Website with links to slides and other materials

Zoom Meeting
Link: https://us02web.zoom.us/j/82504212352
Meeting ID: 825 0421 2352
Password: 758607

Feedback form, please give us some feedback after the tutorial! Thanks :)

  Session 1: Spark Server Info:
  • IP address: (taken offline)
  • Look for download files here spark-key (for Mac and Linux) and spark-windows.ppk (for Windows)

Speakers

Cengiz Gunay

Associate Professor, Georgia Gwinnett College

Anca Doloc-Mihu

Assistant Professor, Georgia Gwinnett College



Saturday July 18, 2020 7:00pm - 10:00pm CEST
Link (T6)

7:00pm CEST

Neuromorphic VLSI realization of the Hippocampal formation
Anu Aggarwal, Tianhua Xia

T4: Neuromorphic circuits are inspired by the organizing principles of biological neural circuits. These designs implement computational neuroscience models of different parts of the brain in silicon. Unlike computer models, these silicon devices can perform actual work. One of the main reasons for interest in this field is that electrical and computer engineers wish to harness the superior processing power of the brain to build machines such as computers. For similar processing power, the brain consumes much less energy than a computer; thus, scientists are interested in building power-efficient machines based on brain algorithms. Neuromorphic architectures often rely on collective computation in parallel networks. Adaptation, learning and memory are implemented locally within the individual computational elements, as opposed to the separation between memory and computation in conventional computers. As Moore's law has hit its limits, there is growing interest in brain-inspired computing to build small, power-efficient computing machines. Application domains of neuromorphic circuits include silicon retinas and cochleas for machine vision and audition, real-time emulations of networks of biological neurons, the lateral superior olive and hippocampal formation for the development of autonomous robotic systems, and even replacement of brain neuronal functions with silicon neurons. This tutorial introduces silicon neuromorphic design, using a silicon implementation of the hippocampal formation as an example.

Tutorial Website
Presentations/Lectures
link to presentation recording
https://illinois.zoom.us/rec/share/vP0qJZbL8DxLU43t-njVdqA-GaTAX6a8hnAZrqdcy0ZiLd-MVBLugPxnHPsHxO-_
  1. Brief background of neuromorphic VLSI design; anatomy, physiology (including lab experimental data) and computational neuroscience models of the hippocampal formation
  2. Circuit design: introduction to active and passive elements
  3. VLSI design, i.e. silicon realization, of the hippocampal formation
Background readings (not required)
1. Analog VLSI and Neural systems by Carver Mead, 1989
2. J. O’Keefe, 1976, “Place units in the hippocampus of the freely moving rat”, Exp. Neurol. 51, 78-109.
3. J. S. Taube, R. U. Muller, J. B Ranck., Jr., 1990a, “Head direction cells recorded from the postsubiculum in freely moving rats. I. Description and quantitative analysis”, J Neurosci., 10, 420-435.
4. J. S. Taube, R. U. Muller, J. B Ranck., Jr., 1990b, “Head direction cells recorded from the post-subiculum in freely moving rats. II. Effects of environmental manipulations”, J Neurosci., 10, 436-447.
5. T. Hafting, M. Fyhn, S. Molden, M. B. Moser., E. I. Moser, August 2005, “Microstructure of a spatial map in the entorhinal cortex”, Nature, 436, 801-806.
6. B. L. McNaughton, F. P. Battaglia, O. Jensen, E. I. Moser & M. B. Moser, 2006, “Path integration and the neural basis of the 'cognitive map‘”, Nature Reviews Neuroscience, 7, 663-678.
7. H. Mhatre, A. Gorchetchnikov, and S. Grossberg, 2012, “Grid Cell Hexagonal Patterns Formed by Fast Self-Organized Learning within Entorhinal Cortex”, Hippocampus, 22:320–334.
8. T. Madl, S. Franklin, K. Chen, D. Montaldi, R. Trappl, 2014, “Bayesian integration of information in hippocampal place cells”, PLOS ONE, 9(3), e89762.
9. A. Aggarwal, 2015, “Neuromorphic VLSI Bayesian integration synapse”, Electronics Letters, 51(3):207-209.
10. A. Aggarwal, T. K. Horiuchi, 2015, “Neuromorphic VLSI second order synapse”, Electronics Letters, 51(4):319-321.
11. A. Aggarwal, 2015, “VLSI realization of neural velocity integrator and central pattern generator”, Electronics Letters, 51(18), DOI: 10.1049/el.2015.0544.
12. A. Aggarwal, 2016, “Neuromorphic VLSI realization of the Hippocampal Formation”, Neural Networks, 77:29-40. doi: 10.1016/j.neunet.2016.01.011.

Join meeting here
https://illinois.zoom.us/j/95994782367?pwd=RU4xWTAyWlRMemNiSWs1TGM3MVMvQT09

Meeting ID: 959 9478 2367
Password: CNS 2020

One tap mobile
+13126266799,,95994782367# US (Chicago)
+13017158592,,95994782367# US (Germantown)

Dial by your location
        +1 312 626 6799 US (Chicago)
        +1 301 715 8592 US (Germantown)
        +1 470 250 9358 US (Atlanta)
        +1 470 381 2552 US (Atlanta)
        +1 646 518 9805 US (New York)
        +1 651 372 8299 US (St. Paul)
        +1 786 635 1003 US (Miami)
        +1 929 205 6099 US (New York)
        +1 267 831 0333 US (Philadelphia)
        +1 253 215 8782 US (Tacoma)
        +1 346 248 7799 US (Houston)
        +1 602 753 0140 US (Phoenix)
        +1 669 219 2599 US (San Jose)
        +1 669 900 6833 US (San Jose)
        +1 720 928 9299 US (Denver)
        +1 971 247 1195 US (Portland)
        +1 213 338 8477 US (Los Angeles)
        +1 647 558 0588 Canada
        +1 778 907 2071 Canada
        +1 438 809 7799 Canada
        +1 587 328 1099 Canada
        +1 647 374 4685 Canada
        +49 30 5679 5800 Germany
        +49 695 050 2596 Germany
        +49 69 7104 9922 Germany
        +82 2 6105 4111 Korea, Republic of
        +82 2 6022 2322 Korea, Republic of
        +44 131 460 1196 United Kingdom
        +44 203 481 5237 United Kingdom
        +44 203 481 5240 United Kingdom
        +81 524 564 439 Japan
        +81 3 4578 1488 Japan
        +61 2 8015 6011 Australia
        +61 3 7018 2005 Australia
        +61 8 7150 1149 Australia
        +52 554 161 4288 Mexico
        +52 229 910 0061 Mexico
        +65 3165 1065 Singapore
        +65 3158 7288 Singapore
Meeting ID: 959 9478 2367
Password: 98433008
Find your local number: https://illinois.zoom.us/u/abfpcIwI

Join by SIP
95994782367@zoomcrc.com

Join by H.323
162.255.37.11 (US West)
162.255.36.11 (US East)
221.122.88.195 (China)
115.114.131.7 (India Mumbai)
115.114.115.7 (India Hyderabad)
213.19.144.110 (EMEA)
103.122.166.55 (Australia)
209.9.211.110 (Hong Kong SAR)
64.211.144.160 (Brazil)
69.174.57.160 (Canada)
207.226.132.110 (Japan)
Meeting ID: 959 9478 2367
Password: 98433008

Join by Skype for Business
https://illinois.zoom.us/skype/95994782367

Speakers

Anu Aggarwal

Assistant Professor, Electrical and Computer Engineering, University of Illinois Urbana-Champaign, USA
Neuromorphic VLSI design



Saturday July 18, 2020 7:00pm - 10:00pm CEST
Link (T4)

10:00pm CEST

Discussion with Matt Botvinick
Open discussion with Matthew Botvinick. Ask the questions that remained unanswered during the keynote talk.

Speakers

Matthew Botvinick

Director of Neuroscience Research, DeepMind


Saturday July 18, 2020 10:00pm - 11:00pm CEST
Crowdcast
  Keynote Speaker Forum
  • Moderator Thomas Novotny; Shirin Dora

11:00pm CEST

Information theory and directed network inference (using JIDT and IDTxl)
Leonardo Novelli, Joseph T. Lizier 

Tutorial Website (with additional resources, links to slides etc)

S1: Information-theoretic measures, including transfer entropy, are widely used to analyse neuroimaging time series and to infer directed connectivity [1]. The JIDT [2] and IDTxl [3] software toolkits provide efficient measures and algorithms for these applications:
  • JIDT (https://github.com/jlizier/jidt) provides a fundamental computation engine for the efficient estimation of information-theoretic measures for a variety of applications. It can easily be used from Matlab, Python, and Java, and provides a GUI for push-button analysis and code-template generation.
  • IDTxl (https://github.com/pwollstadt/IDTxl) is a specific Python toolkit for directed network inference in neuroscience. It employs multivariate transfer entropy and hierarchical statistical tests to control false positives and has been validated at realistic scales for neural data sets [4]. The inference can be run in parallel using GPUs or a high-performance computing cluster.
This tutorial session will help you get started with software analyses via brief overviews of the toolkits and demonstrations.
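A minimal IDTxl sketch following its documented workflow; the random data and the settings values are placeholders rather than recommendations:

```python
import numpy as np
from idtxl.data import Data
from idtxl.multivariate_te import MultivariateTE

# Toy data: 3 processes, 1000 samples, 5 replications.
raw = np.random.randn(3, 1000, 5)
data = Data(raw, dim_order='psr')  # processes x samples x replications

network_analysis = MultivariateTE()
settings = {'cmi_estimator': 'JidtGaussianCMI',  # JIDT-backed Gaussian estimator
            'max_lag_sources': 5,
            'min_lag_sources': 1}
results = network_analysis.analyse_network(settings=settings, data=data)
results.print_edge_list(weights='max_te_lag', fdr=False)
```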


Neurostars forum for Q&A

References
  1. Wibral, M., Vicente, R., & Lizier, J. T. (2014). Directed Information Measures in Neuroscience. Springer, Berlin. https://doi.org/10.1007/978-3-642-54474-3
  2. Lizier, J. T. (2014). JIDT: An Information-Theoretic Toolkit for Studying the Dynamics of Complex Systems. Frontiers in Robotics and AI, 1, 11. https://doi.org/10.3389/frobt.2014.00011
  3. Wollstadt, P., Lizier, J. T., Vicente, R., Finn, C., Martinez-Zarzuela, M., Mediano, P., Novelli, L., and Wibral, M. (2019). IDTxl: The Information Dynamics Toolkit xl: a Python package for the efficient analysis of multivariate information dynamics in networks. Journal of Open Source Software, 4(34), 1081. https://doi.org/10.21105/joss.01081
  4. Novelli, L., Wollstadt, P., Mediano, P., Wibral, M., & Lizier, J. T. (2019). Large-scale directed network inference with multivariate transfer entropy and hierarchical statistical testing. Network Neuroscience, 3(3), 827–847. https://doi.org/10.1162/netn_a_00092

Speakers

Joseph Lizier

Associate Professor, Centre for Complex Systems, The University of Sydney
My research focusses on studying the dynamics of information processing in biological and bio-inspired complex systems and networks, using tools from information theory such as transfer entropy to reveal when and where in a complex system information is being stored, transferred and…

Leonardo Novelli

PhD Student, Centre for Complex Systems, The University of Sydney


Saturday July 18, 2020 11:00pm - 11:30pm CEST
Crowdcast

11:30pm CEST

Introduction to the Brain Dynamics Toolbox
Stewart Heitmann 

S2: The Brain Dynamics Toolbox (https://bdtoolbox.org) is an open-source toolbox for simulating dynamical systems in neuroscience using Matlab. It specifically solves initial-value problems in user-defined systems of Ordinary Differential Equations (ODEs), Delay Differential Equations (DDEs), Stochastic Differential Equations (SDEs) and Partial Differential Equations (PDEs). New models can typically be written in less than 100 lines of code and then applied at all stages of the research lifecycle. Rapid prototyping is done via the graphical interface where the dynamics can be explored interactively without the need for graphical programming. Interactive parameter surveys can then be semi-automated using the Matlab command window. Large-scale simulations can be fully-automated in user-defined scripts.

Once a model is written, the toolbox’s hub-and-spoke architecture allows unlimited combinations of plotting tools (display panels) and solver algorithms to be applied to that model with no additional programming effort. The toolbox currently supports a dozen solvers and display panels. It also ships with approximately 30 example models that can be used for teaching or as starting points for building new models. Online training courses are available from the bdtoolbox.org website. Extensive documentation is provided in the Handbook for the Brain Dynamics Toolbox.

This software showcase aims to introduce the toolbox to a wider audience through a series of real-time demonstrations. The audience will learn how to get started with the toolbox, how to run existing models and how to semi-automate the controls to generate a bifurcation diagram.

Background reading and software tools
  1. https://bdtoolbox.org
  2. Heitmann S, Breakspear M (2017-2019) Handbook for the Brain Dynamics Toolbox. QIMR Berghofer Medical Research Institute. 1st Edition: Version 2017c, ISBN 978-1-5497-2070-3. 2nd Edition: Version 2018a, ISBN 978-1-9805-7250-3. 3rd Edition: Version 2018b, ISBN 978-1-7287-8188-4. 4th Edition: Version 2019a, ISBN 978-1-0861-1705-9.
  3. Heitmann S, Aburn M, Breakspear M (2017) The Brain Dynamics Toolbox for Matlab. Neurocomputing. Vol 315. p82-88. doi:10.1016/j.neucom.2018.06.026.

Speakers

Stewart Heitmann

Senior Staff Scientist, Victor Chang Cardiac Research Institute, Australia
Computational Scientist at the Victor Chang Cardiac Research Institute and author of the Brain Dynamics Toolbox (bdtoolbox.org). I use computer models to study the role of travelling waves in excitable tissue (heart and brain).



Saturday July 18, 2020 11:30pm - Sunday July 19, 2020 12:00am CEST
Crowdcast
 
Sunday, July 19
 

12:00am CEST

Advances in the PANDORA Matlab Toolbox for intracellular electrophysiology data
Cengiz Gunay

S3: PANDORA is an open-source Matlab (MathWorks, Natick, MA) toolbox for the analysis and visualization of single-unit intracellular electrophysiology data (RRID: SCR_001831; Günay et al. 2009 Neuroinformatics 7(2):93-111, doi: 10.1007/s12021-009-9048-z). Even though there are more modern and popular environments, such as the Python and Anaconda ecosystem, Matlab still offers an advantage in its simplicity, especially for those less computationally inclined, for instance in collaborations with experimentalists. PANDORA was originally intended for managing and analyzing brute-force neuronal parameter-search databases (Günay et al. 2008 J Neurosci 28(30):7476-7491; Günay et al. 2010 J Neurosci 30:1686-98). However, it has proven useful for other types of simulation and experimental data analysis (Doloc-Mihu et al. 2011 J Biol Phys 37(3):263-283, doi: 10.1007/s10867-011-9215-y; Lin et al. 2012 J Neurosci 32(21):7267-77; Wolfram et al. 2014 J Neurosci 34(7):2538-2543, doi: 10.1523/JNEUROSCI.4511-13.2014; Günay et al. 2015 PLoS Comp Biol, doi: 10.1371/journal.pcbi.1004189; Wenning et al. 2018 eLife 7:e31123, doi: 10.7554/eLife.31123; Günay et al. 2019 eNeuro 6(4), ENEURO.0417-18.2019, doi: 10.1523/ENEURO.0417-18.2019). PANDORA's original motivation was to offer object-oriented analysis specific to neuronal data inside the Matlab environment, in particular a database table-like object, similar to the "dataframe" objects of R and the Python pandas toolbox, together with a new syntax for a powerful database querying system. The typical workflow consists of generating parameter sets for simulations; finding spikes and additional characteristics in the resulting output data to construct databases; and finally analyzing and visualizing the database contents. PANDORA provides objects for loading datasets, controlling simulations, importing/exporting data, and visualization. Since its inception, it has grown with added functionality. In this showcase, we review the toolbox's standard features and show how to customize them for a given project, and then introduce some of the new and experimental features, such as ion-channel fitting and evolutionary/genetic algorithms. Furthermore, we will give a developers' perspective for those who may be interested in adding modules to the toolbox.

Showcase Website with slides

Discussion page on Neurostars for comments and questions

Feedback survey

Speakers

Cengiz Gunay

Associate Professor, Georgia Gwinnett College


Sunday July 19, 2020 12:00am - 12:30am CEST
Crowdcast

1:00pm CEST

F1: Delineating Reward/Avoidance Decision Process in the Impulsive-compulsive Spectrum Disorders through a Probabilistic Reversal Learning Task
Xiaoliu Zhang, Chao Suo, Amir Dezfouli, Ben J. Harrison, Leah Braganza, Ben Fulcher, Leonardo Fontenelle, Carsten Murawski, Murat Yucel

Discussion on Neurostars

Impulsivity and compulsivity are behavioural traits that underlie many aspects of decision-making and form the characteristic symptoms of Obsessive Compulsive Disorder (OCD) and Gambling Disorder (GD). The neural underpinnings of aspects of reward and avoidance learning under the expression of these traits and symptoms are only partially understood.

The present study combined behavioural modelling and neuroimaging techniques to examine brain activity associated with critical phases of reward and loss processing in OCD and GD.

Forty-two healthy controls (HC), forty OCD and twenty-three GD participants were recruited to complete a two-session reinforcement learning (RL) task featuring a “probability switch (PS)” during imaging. Finally, 39 HC (20F/19M, 34 yrs ±9.47), 28 OCD (14F/14M, 32.11 yrs ±9.53) and 16 GD (4F/12M, 35.53 yrs ±12.20) were included with both behavioural and imaging data available. The functional imaging was conducted using a 3.0-T Siemens MAGNETOM Skyra (syngo MR D13C) at Monash Biomedical Imaging. Each volume comprised 34 coronal slices of 3 mm thickness, with a 2000 ms TR and 30 ms TE. A total of 479 volumes were acquired for each participant in each session in an interleaved-ascending manner.

The standard Q-learning model was fitted to the observed behavioural data, and a Bayesian model was used for parameter estimation. Imaging analysis was conducted using SPM12 (Wellcome Department of Imaging Neuroscience, London, United Kingdom) in the Matlab (R2015b) environment. Pre-processing comprised slice timing, realignment, normalization to MNI space based on the T1-weighted image, and smoothing with an 8 mm Gaussian kernel.
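For concreteness, here is an illustrative Q-learning update of the general form fitted here, applied to a two-armed probabilistic reversal task; the learning rate, inverse temperature, reward probabilities and reversal point are assumptions, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.3, 5.0                    # learning rate, inverse temperature
p_reward = np.array([0.8, 0.2])           # arm reward probabilities
Q = np.zeros(2)                           # action values

for trial in range(200):
    if trial == 100:                      # the "probability switch (PS)"
        p_reward = p_reward[::-1]
    p_choice = np.exp(beta * Q) / np.exp(beta * Q).sum()  # softmax policy
    a = rng.choice(2, p=p_choice)
    r = float(rng.random() < p_reward[a])
    Q[a] += alpha * (r - Q[a])            # reward prediction error update
```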

The frontostriatal brain circuit, including the putamen and medial orbitofrontal cortex (mOFC), was significantly more active in response to receiving reward and avoiding punishment than to receiving an aversive outcome and missing reward, at p < 0.001 with FWE correction at the cluster level, while the right insula showed greater activation in response to missing reward and receiving punishment. Compared to healthy participants, GD patients showed significantly lower activation in the left superior frontal cortex and posterior cingulum at p < 0.001 for gain omission.

The reward prediction error (PE) signal was positively correlated with activation in several clusters spanning cortical and subcortical regions, including the striatum, cingulate, bilateral insula, thalamus and superior frontal cortex, at p < 0.001 with FWE correction at the cluster level. GD patients showed a trend towards a decreased reward PE response in the right precentral gyrus extending to the left posterior cingulate compared to controls, at p < 0.05 with FWE correction. The aversive PE signal was negatively correlated with brain activity in regions including the bilateral thalamus, hippocampus, insula and striatum, at p < 0.001 with FWE correction. Compared with the control group, the GD group showed increased aversive PE activation in a cluster encompassing the right thalamus and right hippocampus, and also in the right middle frontal gyrus extending to the right anterior cingulum, at p < 0.005 with FWE correction.

Through the reversal learning task, the study provides further support for dissociable brain circuits underlying distinct phases of reward and avoidance learning. It also shows that OCD and GD are characterised by aberrant patterns of reward and avoidance processing.

Speakers

Xiaoliu Zhang

Monash Biomedical Imaging, Monash University


Sunday July 19, 2020 1:00pm - 1:40pm CEST
Crowdcast
  Featured Talk, Learning and Dynamics
  • Moderator Paul Tiesinga; Tom Burns; R. Janaki

1:40pm CEST

O1: Dopamine role in learning and action inference
Rafal Bogacz

Neurostars topic for Q&A

Much evidence suggests that some dopaminergic neurons respond to unexpected rewards, and computational models have suggested that these neurons encode a reward prediction error, which drives learning about rewards. However, these models do not explain the recently observed diversity of dopaminergic responses, nor dopamine's function in action planning, evident from the movement difficulties in Parkinson's disease. The presented work aims to extend existing models to account for these data. It proposes that a more complete description of dopaminergic activity can be achieved by combining reinforcement learning with elements of other recently proposed theories, including active inference.

The presented model describes how the basal ganglia network infers the actions required to obtain reward using Bayesian inference. The model assumes that the likelihood of reward given an action is encoded by the goal-directed system, while the prior probability of making a particular action in a given context is provided by the habit system. It is shown how the inference of the optimal action can be achieved through minimization of free energy, and how this inference can be implemented in a network with an architecture bearing a striking resemblance to the known anatomy of the striato-dopaminergic circuit. In particular, this network includes nodes encoding prediction errors, which are connected with other nodes in the network in a way resembling the “ascending spiral” structure of dopaminergic connections.

In the proposed model, dopaminergic neurons projecting to different parts of the striatum encode errors in the predictions made by the corresponding systems within the basal ganglia. These prediction errors are equal to the differences between rewards and expectations in the goal-directed system, and to the differences between the chosen and habitual actions in the habit system. The prediction errors enable learning about the rewards resulting from actions, and habit formation. During action planning, the expectation of reward in the goal-directed system arises from formulating a plan to obtain that reward. Thus dopaminergic neurons in this system provide feedback on whether the current motor plan is sufficient to obtain the available reward, and they facilitate action planning until a suitable plan is found. The presented models account for dopaminergic responses during movements and the effects of dopamine depletion on behaviour, and make several experimental predictions. A toy sketch of these two prediction errors appears after the paper link below.

The full paper describing this work is available at: https://elifesciences.org/articles/53262
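The following toy sketch (my own illustration, not the paper's implementation) shows the two kinds of prediction errors described above driving reward learning and habit formation:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha_g, alpha_h = 0.2, 0.1   # learning rates for the two systems
Q = np.zeros(2)               # goal-directed reward expectations per action
H = np.zeros(2)               # habit strengths per action

for trial in range(500):
    drive = Q + H                                   # both systems bias choice
    a = rng.choice(2, p=np.exp(drive) / np.exp(drive).sum())
    r = float(rng.random() < (0.8 if a == 0 else 0.2))
    Q[a] += alpha_g * (r - Q[a])                    # reward minus expectation
    H += alpha_h * (np.eye(2)[a] - H)               # chosen minus habitual action
```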

Speakers

Rafal Bogacz

MRC Brain Network Dynamics Unit, University of Oxford


Sunday July 19, 2020 1:40pm - 2:00pm CEST
Crowdcast
  Oral, Learning and Dynamics
  • Moderator Paul Tiesinga; Tom Burns; R. Janaki

2:00pm CEST

O2: Neural Manifold Models for Characterising Brain Circuit Dynamics in Neurodegenerative Disease
Seigfred Prado, Simon R Schultz, Mary Ann Go

Neurostars topic for Q&A

Although much is known about the neural circuits and molecular pathways required for normal hippocampal function, the processes by which neurodegenerative diseases, such as Alzheimer's Disease (AD), disable the functioning of the hippocampus and connected structures remain to be determined. In order to make substantial advances in the treatment of such diseases, we must improve our understanding of how neural circuits process information and how they are disrupted during disease progression. Recent advances in optical imaging technologies that allow simultaneous recording of large populations of neurons in deeper structures [1] have shown great promise for revealing circuit dynamics during memory tasks [2]. However, to date, no study has revealed how large numbers of neurons in hippocampal-cortical circuits act together to encode, store and retrieve memories in animal models of AD. In this work, we explored the use of neural manifold analysis techniques to characterise brain circuit dynamics in neurodegenerative disease. To understand more precisely the basis of memory and cognitive impairments in AD, we extracted the underlying neural manifolds from large-scale neural responses of hippocampal circuits involved in the spatial cognition of behaving mice. For validation, we simulated a model that generates data mimicking the neural activity of hippocampal cells in mouse models running on a linear circular track, while taking into account the effects of amyloid-beta plaques on circuit dynamics [3]. We compare our model with real data obtained by multiphoton imaging of hippocampal CA1 cells in mice engaged in a spatial memory task. We used recurrence analysis to show how neural manifolds evolve over time during memory encoding, storage and recall in a repetitive memory task. This work will help with understanding how amyloid-beta proteins affect the neural manifolds for spatial memory, which is particularly disturbed in AD.
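A minimal illustration of the manifold-extraction step using PCA on synthetic place-cell-like activity; the data generation is a stand-in, not the study's model:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_cells, n_timepoints = 100, 2000
position = np.linspace(0, 8 * np.pi, n_timepoints)       # repeated laps on a track
pref = rng.uniform(0, 2 * np.pi, n_cells)                # place-field centres

# Noisy tuning-curve responses of each cell to the animal's position.
activity = np.exp(np.cos(position[None, :] - pref[:, None]) / 0.3)
activity += 0.5 * rng.standard_normal((n_cells, n_timepoints))

# Project population activity onto its first three principal components:
# the trajectory traces out a ring-like manifold encoding position.
pca = PCA(n_components=3)
manifold = pca.fit_transform(activity.T)                 # shape (time, 3)
print("variance explained:", pca.explained_variance_ratio_.round(2))
```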

Speakers

Seigfred Prado

Department of Bioengineering, Imperial College London


Sunday July 19, 2020 2:00pm - 2:20pm CEST
Crowdcast
  Oral, Learning and Dynamics
  • Moderator Paul Tiesinga; Tom Burns; R. Janaki

2:20pm CEST

O3: Coupled experimental and modeling representation of the mechanisms of epileptic discharges in rat brain slices
Anton Chizhov, Dmitry Amakhin, Elena Smirnova, Aleksey Zaitsev

Neurostars topic for Q&A

Epileptic seizures and interictal discharges (IIDs) are determined by neuronal interactions and ionic dynamics, and thus help to reveal valuable knowledge about the mechanisms of brain functioning in not only the pathological but also the normal state. As synchronized pathological discharges are much simpler to study than normal functioning, we were able to accomplish their description with a set of electrophysiological observations constrained by a biophysical mathematical model. In combined hippocampal-entorhinal cortex slices of rat, in a high-potassium, low-magnesium, 4-AP-containing solution, we evaluated separate AMPA, NMDA and GABA-A conductances for different types of IIDs, using an original experimental technique [1]. The conductances showed that the first type of discharge (IID1) is determined by the activity of GABA-A channels alone, due to their pathologically depolarized reversal potential. The second type (IID2) is determined by an early GABA-A component followed by AMPA and NMDA components. The third type comprises purely glutamatergic discharges observed in the case of disinhibition. Our mathematical model of interacting neuronal populations reproduces the recorded synaptic currents and conductances for IIDs of the three types [2,3], confirming the major role of interneuron synchronization for IID1 and IID2, and revealing that the duration of IIDs is determined mainly by synaptic depression. IIDs occur spontaneously and propagate as waves with a speed of a few tens of mm/s [4]. IDs are clusters of IID-like discharges and are determined by the ionic dynamics [5]. To reveal only the major processes, main ions and variables, we formulated a reduced mathematical model, “Epileptor-2”, a minimal model that reproduces both IDs and IIDs [6] (Fig. 1). It shows that IIDs are spontaneous bursts governed by membrane depolarization and the synaptic resource, whereas IDs are bursts of bursts. The Na/K-ATPase plays an important role: potassium accumulation governs the onset of each ID, while sodium accumulates during the ID and activates the sodium-potassium pump, which terminates the ID by restoring the potassium gradient and thus repolarizing the neurons. A spatially distributed version of the Epileptor-2 model reveals that synaptic connectivity, not extracellular potassium diffusion, determines the speed of the ictal wavefront [7], which is consistent with our optogenetic experiments. The revealed factors are potential targets for antiepileptic medical treatment. A generic slow-fast caricature of this burst mechanism is sketched below, after the code links.

This work was supported by the Russian Science Foundation (project 16-15-10201).

References:
  1. Amakhin DV, Ergina JL, Chizhov AV, Zaitsev AV. Synaptic conductances during interictal discharges in pyramidal neurons of rat entorhinal cortex. Front. Cell. Neurosci. 10:233, 2016.
  2. Chizhov A, Amakhin D, Zaitsev A. Computational model of interictal discharges triggered by interneurons. PLoS ONE 12(10):e0185752, 2017.
  3. Chizhov AV, Amakhin DV, Zaizev AV, Magazanik LG. AMPAR-mediated Interictal Discharges in Neurons of Entorhinal Cortex: Experiment and Model. Dokl Biol Sci. 479(1): 47-50, 2018. doi: 10.1134/S0012496618020011.
  4. Chizhov AV, Amakhin DV, Zaitsev AV. Spatial propagation of interictal discharges along the cortex. Biochem Biophys Res Commun. 508(4):1245-1251, 2019.
  5. Chizhov AV, Amakhin DV, Zaitsev AV. Mathematical model of Na-K-Cl homeostasis in ictal and interictal discharges. PLOS ONE. 2019;14(3):e0213904. doi:10.1371/journal.pone.0213904.
  6. Chizhov AV, Zefirov AV, Amakhin DV, Smirnova EY, Zaitsev AV. Minimal model of interictal and ictal discharges “Epileptor-2”. PLoS Comp. Biol. 14(5): e1006186, 2018.
  7. Chizhov AV, Sanin AE (2020) A simple model of epileptic seizure propagation: Potassium diffusion versus axo-dendritic spread. PLoS ONE 15(4): e0230787.
Epileptor-2 code: https://senselab.med.yale.edu/modeldb/ShowModel?model=263074#tabs-2
Epileptor-2 online: http://www.ioffe.ru/CompPhysLab/MyPrograms/Epileptor-2/Epileptor-2.html
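The actual Epileptor-2 equations are in the ModelDB code linked above. Purely as a generic illustration of the slow-fast mechanism the abstract describes, in which a slow recovery variable starts and terminates recurring discharges, here is a classic FitzHugh-Nagumo relaxation oscillator (explicitly not the Epileptor-2 model):

```python
import numpy as np

# FitzHugh-Nagumo: v is the fast excitable variable, w the slow recovery
# variable (loosely analogous to the slow ion/pump dynamics discussed above).
a, b, eps, I = 0.7, 0.8, 0.08, 0.5
dt, T = 0.01, 200.0
n = int(T / dt)
v, w = -1.0, 1.0
vs = np.empty(n)

for i in range(n):
    dv = v - v**3 / 3.0 - w + I   # fast dynamics
    dw = eps * (v + a - b * w)    # slow recovery
    v += dt * dv
    w += dt * dw
    vs[i] = v

# vs now contains sustained relaxation oscillations: recurring "discharges"
# initiated and terminated by the slow variable.
```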

Speakers

Anton Chizhov

Senior Researcher, Ioffe Institute
I graduated from Polytechnical University in St. Petersburg, got my Ph.D. in fluid dynamics, was a postdoc in Japan in fluid dynamics and in France in neurophysics. Now I am working in two institutes, the Ioffe Physical-Technical Institute and the Sechenov Institute of Evolutionary…



Sunday July 19, 2020 2:20pm - 2:40pm CEST
Crowdcast
  Oral, Learning and Dynamics
  • Moderator Paul Tiesinga; Tom Burns; R. Janaki

3:00pm CEST

K2: A new computational framework for understanding vision in our brain
Zhaoping Li

Neurostars discussion

Visual attention selects only a tiny fraction of visual input information for further processing. Selection starts in the primary visual cortex (V1), which creates a bottom-up saliency map to guide the fovea to selected visual locations via gaze shifts. This motivates a new framework that views vision as consisting of encoding, selection, and decoding stages, placing selection on center stage. It suggests a massive loss of non-selected information from V1 downstream along the visual pathway. Hence, feedback from downstream visual cortical areas to V1 for better decoding (recognition), through analysis-by-synthesis, should query for additional information and be mainly directed at the foveal region. Accordingly, non-foveal vision is not only poorer in spatial resolution, but also more susceptible to many illusions.

To prepare/preview, go to 
http://www.lizhaoping.org/zhaoping/NewPathPaperEtc_2019.html
YouTube lectures https://www.youtube.com/playlist?list=PLbG9iu2mq65-Vmo9VRtkh9AXJ2Ekfrqtk

www.lizhaoping.org

Speakers

Li Zhaoping

Prof. and head of department, University of Tuebingen, Germany
More info: Bio: http://www.lizhaoping.org/zhaoping/bio.html  Positions in my group: http://www.lizhaoping.org/jobs.html  Publications: http://www.lizhaoping.org/zhaoping/allpaper.html  List of other video lectures: http://www.lizhaoping.org/zhaoping/VideoLectures_ByZhaoping.html…



Sunday July 19, 2020 3:00pm - 4:00pm CEST
Crowdcast
  Keynote
  • Moderator Steven Prescott; Anand Pathak; R. Janaki

4:20pm CEST

O4: Towards multipurpose bio-realistic models of cortical circuits
Anton Arkhipov

Neurostars discussion link

One of the central questions in neuroscience is how the structure of brain circuits determines their activity and function. To explore such structure-function relations systematically, we integrate information from large-scale experimental surveys into data-driven, bio-realistic models of brain circuits, with a current focus on the mouse cortex.

Our 230,000-neuron models of the mouse cortical area V1 [1] were constructed at two levels of granularity, using either biophysically detailed neurons or point neurons. These models systematically integrated a broad array of experimental data [1–3]: information about the distribution and morpho-electric properties of different neuron types in V1; connection probabilities, synaptic weights, axonal delays, and dendritic targeting rules inferred from a thorough survey of the literature; and a sophisticated representation of visual inputs into V1 from the lateral geniculate nucleus, fit to in vivo recordings. The model activity has been tested against large-scale in vivo recordings of neural activity [4]. We found good agreement between these experimental data and the V1 models for a variety of metrics, such as direction selectivity, and less good agreement for other metrics, suggesting avenues for future improvements. In the process of building and testing the models, we also made predictions about the logic of recurrent connectivity with respect to the functional properties of the neurons, some of which have been verified experimentally [1].

In this presentation, we will focus on the model's successes in quantitatively matching multiple experimental measures, as well as its failures in matching other metrics. Both successes and failures shed light on potential structure-function relations in cortical circuits, leading to experimentally testable hypotheses. Our models are shared freely with the community: https://portal.brain-map.org/explore/models/mv1-all-layers. We also freely share our software tools: the Brain Modeling ToolKit (BMTK; alleninstitute.github.io/bmtk/), a software suite for model building and simulation [5], and the SONATA file format [6] (github.com/allenInstitute/sonata).
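A minimal BMTK network-building sketch, based on my reading of the BMTK builder tutorials; the node and edge parameters are illustrative, the JSON parameter files are hypothetical, and exact keyword names should be checked against the BMTK documentation:

```python
from bmtk.builder.networks import NetworkBuilder

# Build a toy point-neuron network in the SONATA format used by BMTK.
net = NetworkBuilder('V1_toy')
net.add_nodes(N=80, pop_name='Exc', model_type='point_process',
              model_template='nest:iaf_psc_alpha',
              dynamics_params='iaf_psc_alpha_exc.json')  # hypothetical file
net.add_nodes(N=20, pop_name='Inh', model_type='point_process',
              model_template='nest:iaf_psc_alpha',
              dynamics_params='iaf_psc_alpha_inh.json')  # hypothetical file

# Excitatory-to-inhibitory connections with fixed synaptic parameters.
net.add_edges(source={'pop_name': 'Exc'}, target={'pop_name': 'Inh'},
              connection_rule=5,  # number of synapses per source-target pair
              syn_weight=2.0, delay=1.5,
              model_template='static_synapse',
              dynamics_params='static_exc.json')  # hypothetical file

net.build()
net.save_nodes(output_dir='network')
net.save_edges(output_dir='network')
```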

References
1. Billeh, Y. N. et al. Systematic Integration of Structural and Functional Data into Multi-scale Models of Mouse Primary Visual Cortex. Neuron 106, 388-403.e18 (2020).
2. Gouwens, N. W. et al. Classification of electrophysiological and morphological neuron types in the mouse visual cortex. Nat. Neurosci. 22, 1182–1195 (2019).
3. Gouwens, N. W. et al. Systematic generation of biophysically detailed models for diverse cortical neuron types. Nat. Commun. 9, 710 (2018).
4. Siegle, J. H. et al. A survey of spiking activity reveals a functional hierarchy of mouse corticothalamic visual areas. bioRxiv 805010 (2019) doi:10.1101/805010.
5. Gratiy, S. L. et al. BioNet: A Python interface to NEURON for modeling large-scale networks. PLoS One 13, e0201630 (2018).
6. Dai, K. et al. The SONATA data format for efficient description of large-scale network models. PLOS Comput. Biol. 16, e1007696 (2020).

Speakers

Anton Arkhipov

Mindscope Program at the Allen Institute, Seattle, USA


Sunday July 19, 2020 4:20pm - 4:40pm CEST
Crowdcast
  Oral, Sensory Systems
  • Moderator Christoph Metzner; Soledad Gonzalo Cogno

4:40pm CEST

O5: How Stimulus Statistics Affect the Receptive Fields of Cells in Primary Visual Cortex
Ali Almasi, Hamish Meffin, Shi Sun, Michael R Ibbotson

Neurostars discussion link

Our understanding of sensory coding in the visual system is largely derived from parametrizing neuronal responses to basic stimuli. Recently, mathematical tools have been developed for estimating the parameters of a receptive field (RF) model, which is typically a cascade of linear filters on the stimulus, followed by static nonlinearities that map the output of the filters to the neuronal spike rates. However, how much do these characterizations depend on the choice of stimulus type?
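To make this model class concrete, here is a minimal linear-nonlinear (LN) sketch with a single filter, plus spike-triggered-average recovery of that filter under WGN stimulation; it illustrates the model family, not the NIM itself, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

D = 40                                   # stimulus dimension (pixels)
k = rng.standard_normal(D)
k /= np.linalg.norm(k)                   # "true" RF filter

stim = rng.standard_normal((50_000, D))  # white Gaussian noise frames
rate = np.exp(stim @ k - 1.0)            # static exponential nonlinearity
spikes = rng.poisson(rate)               # Poisson spike counts per frame

# For Gaussian stimuli, the spike-triggered average recovers the filter direction.
sta = spikes @ stim / spikes.sum()
print("filter recovery (cosine):", sta @ k / np.linalg.norm(sta))
```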

We studied the changes that neuronal RF models undergo when the statistics of the visual stimulus change. We applied the nonlinear input model (NIM) [1] to recordings of single units in cat primary visual cortex (V1) in response to white Gaussian noise (WGN) and natural scenes (NS). These two stimulus types were matched in their global RMS contrast; however, they differ fundamentally in their second- and higher-order statistics, which are abundant in natural scenes but absent from white noise. For each cell we estimated the spatial filters constituting the neuronal RF and their corresponding nonlinear pooling mechanism, while making minimal assumptions about the underlying neuronal processing.

We found that cells respond differently to these two stimulus types, with mostly higher spike rates and shorter response latencies to NS than to WGN. The most striking finding was that NS stimuli revealed around twice as many RF filters as WGN stimuli. Careful analysis of the data showed that this difference in the number of identified RF filters is not related to the higher spike rates of cells to NS stimuli. Instead, we found it to be attributable to the difference in the contrast levels of the specific features that exhibit different prevalence in NS versus WGN. These features correspond to the V1 RF filters recovered in the model, and their feature-contrast attains much higher values in NS than in WGN stimuli. When feature-contrast is controlled for, it explains the differences in the number of RF filters obtained. Our findings imply that a greater extent of nonlinear processing in V1 neurons can be uncovered using natural scene stimulation.

Acknowledgements: The authors acknowledge the support of the Australian Research Council Centre of Excellence for Integrative Brain Function (CE140100007), the National Health and Medical Research Council (GNT1106390), and the Lions Club of Victoria.

References

[1] McFarland JM, Cui Y, Butts DA. Inferring nonlinear neuronal computation based on physiologically plausible inputs. PLoS Comput Biol. 2013, 9(7).

Speakers
avatar for Ali Almasi

Ali Almasi

Research Fellow, National Vision Research Institute, Melbourne


Sunday July 19, 2020 4:40pm - 5:00pm CEST
Crowdcast
  Oral, Sensory Systems
  • Moderator Christoph Metzner; Soledad Gonzalo Cogno

5:00pm CEST

O6: Analysis and Modelling of Response Features of Accessory Olfactory Bulb Neurons
Yoram Ben-Shaul, Rohini Bansal, Romana Stopkova, Maximilian Nagel, Pavel Stopka, Marc Spehr

Neurostars discussion link

The broad goal of this work is to understand how consistency on a macroscopic scale can be achieved despite random connectivity at the level of individual neurons.

A central aspect of any sensory system is the manner by which features of the external world are represented by neurons at various processing stages. Yet, it is not always clear what these features are, how they are represented, and how they emerge mechanistically. Here, we investigate this issue in the context of the vomeronasal system (VNS), a vertebrate chemosensory system specialized for processing cues from other organisms. We focus on the accessory olfactory bulb (AOB), which receives all vomeronasal sensory neuron inputs. Unlike the main olfactory system, where mitral/tufted cells (MTCs) sample information from a single receptor type, AOB MTCs sample information from a variable number of glomeruli, in a manner that seems largely random. This apparently random connectivity is puzzling given the presumed role of this system in processing cues with innate significance.

We use multisite extracellular recordings to measure the responses of mouse AOB MTCs to controlled presentation of natural urine stimuli from male and female mice from various strains, including from wild mice. Crucially, we also measured the levels of both volatile and peptide chemical components in the very same stimulus samples that were presented to the mice. As subjects, we used two genetically distinct mouse strains, allowing us to test if macroscopic similarity can emerge despite variability at the level of receptor expression.

First, we explored neuronal receptive fields and found that neurons selective for specific strains (regardless of sex), or for a specific sex (regardless of strain), are less common than expected by chance. This is consistent with our previous findings indicating that high-level stimulus features are represented in a distributed manner in the AOB. We then compared various aspects of neuronal responses across the two strains and found a high degree of correlation among them, suggesting that despite apparent randomness and strain-specific genetic differences, consistent features emerge at the level of the AOB.

Next, we set out to model the responses of AOB neurons. Briefly, AOB responses to a given stimulus are modelled as dot products of random tuning profiles to specific chemicals and the actual level of those chemicals in the stimulus. In this manner we derive a population of AOB responses, which we can then compare to the measured responses. Our analysis thus far reveals several important insights. First, neuronal response properties are best accounted for by sampling of protein/peptide components, but not by volatile urinary components. This is consistent with the known physiology of the VNS. Second, several response features (population level neuronal distances, sparseness, distribution of receptive field types) are best reproduced in the model with random sampling of multiple, rather than single molecules per neuron. This suggests that the sampling mode of AOB neurons may mitigate some of the consequences of random sampling. Finally, we note that random sampling of molecules provides a reasonable fit for some, but not all metrics of the observed responses. Our ongoing work aims to identify which changes must be made to our initial simplistic model in order to account for these features.
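The response model described above can be sketched compactly. The following Python fragment is a minimal illustration, with made-up neuron counts, chemical panels and tuning distributions (the study uses measured chemical levels and data-constrained parameters): each model neuron's response to a stimulus is the dot product of a random tuning profile with the chemical levels in that stimulus, and sampling one versus several molecules per neuron can then be compared at the population level.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_chemicals, n_stimuli = 200, 40, 8

# Chemical levels per urine stimulus (random placeholders here; measured
# volatile/peptide levels in the actual study).
chem_levels = rng.lognormal(size=(n_stimuli, n_chemicals))

def simulate_responses(molecules_per_neuron):
    """Each model neuron gets a random tuning profile to a random subset
    of chemicals; its response to a stimulus is the dot product of that
    tuning with the chemical levels in the stimulus."""
    responses = np.zeros((n_neurons, n_stimuli))
    for i in range(n_neurons):
        picked = rng.choice(n_chemicals, size=molecules_per_neuron, replace=False)
        tuning = rng.gamma(2.0, size=molecules_per_neuron)
        responses[i] = chem_levels[:, picked] @ tuning
    return responses

# Compare population metrics (e.g. pairwise stimulus distances, sparseness)
# for single- vs multi-molecule sampling, as in the abstract.
single = simulate_responses(1)
multi = simulate_responses(5)
```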

This work is funded by GIF and DFG grants to Marc Spehr and Yoram Ben-Shaul.

Speakers
avatar for Yoram Ben-Shaul

Yoram Ben-Shaul

Medical Neurobiology, The Hebrew University


Sunday July 19, 2020 5:00pm - 5:20pm CEST
Crowdcast
  Oral, Sensory Systems
  • Moderator Christoph Metzner; Soledad Gonzalo Cogno

5:40pm CEST

F2: Using evolutionary algorithms to explore single-cell heterogeneity and microcircuit operation in the hippocampus
Andrea Navas-Olive, Liset M de la Prida (Instituto Cajal CSIC, Ave Doctor Arce 37, Madrid 28002)

Neurostars discussion link

The hippocampus-entorhinal system is critical for learning and memory. Recent cutting-edge single-cell technologies, from RNAseq to electrophysiology, are disclosing previously unrecognized heterogeneity within the major cell types (1). Surprisingly, massive high-throughput recordings of these very same cells identify low-dimensional microcircuit dynamics (2,3). Reconciling both views is critical to understanding how the brain operates.

The CA1 region is considered high in the hierarchy of the entorhinal-hippocampal system. Although it has traditionally been viewed as a single-layered structure, recent evidence discloses an exquisite laminar organization across deep and superficial pyramidal sublayers at the transcriptional, morphological and functional levels (1,4,5). Such low-dimensional segregation may be driven by a combination of intrinsic, biophysical and microcircuit factors, but the mechanisms are unknown.

Here, we exploit evolutionary algorithms to address the effect of single-cell heterogeneity on CA1 pyramidal cell activity (6). First, we developed a biophysically realistic model of CA1 pyramidal cells using the Hodgkin-Huxley multi-compartment formalism in the Neuron+Python platform and the morphological database Neuromorpho.org. We adopted genetic algorithms (GAs) to identify passive, active and synaptic conductances resulting in realistic electrophysiological behavior. We then used the generated models to explore the functional effect of intrinsic, synaptic and morphological heterogeneity during oscillatory activities. By combining results from all simulations in a logistic regression model, we evaluated the effect of up/down-regulation of different factors. We found that multidimensional excitatory and inhibitory inputs interact with morphological and intrinsic factors to determine a low-dimensional subset of output features (e.g. phase-locking preference) that matches non-fitted experimental data.
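A generic GA loop of the kind described (selection, crossover and mutation over conductance vectors) is sketched below. The cost function here is a placeholder; in the actual workflow it would run a NEURON simulation of the multi-compartment model and score its distance from target electrophysiological features. All parameter counts, bounds and rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_params, pop_size, n_generations = 6, 64, 50
lo, hi = np.zeros(n_params), np.ones(n_params)       # conductance bounds

def cost(params):
    # Placeholder: in the real pipeline this would run a NEURON
    # simulation and score its distance to target electrophysiology.
    return np.sum((params - 0.3) ** 2)

pop = rng.uniform(lo, hi, size=(pop_size, n_params))
for _ in range(n_generations):
    fitness = np.array([cost(p) for p in pop])
    parents = pop[np.argsort(fitness)[: pop_size // 2]]   # selection
    # Crossover: mix random pairs of parents gene-by-gene.
    pairs = rng.integers(len(parents), size=(pop_size, 2))
    mask = rng.random((pop_size, n_params)) < 0.5
    pop = np.where(mask, parents[pairs[:, 0]], parents[pairs[:, 1]])
    # Mutation: small Gaussian jitter, clipped to the bounds.
    pop = np.clip(pop + 0.02 * rng.standard_normal(pop.shape), lo, hi)

best = pop[np.argmin([cost(p) for p in pop])]
```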



Acknowledgments:
Andrea Navas-Olive is supported by PhD Fellowship FPU17/03268.

References:

1. Cembrowski MS, Spruston N. Heterogeneity within classical cell types is the rule: lessons from hippocampal pyramidal neurons. Nat Rev Neurosci. 2019, 20(4):193-204

2. Chaudhuri R, Gerçek B, Pandey B, Peyrache A, Fiete I. The intrinsic attractor manifold and population dynamics of a canonical cognitive circuit across waking and sleep. Nat Neurosci. 2019, 22(9):1512-1520

3. Guo W, Zhang JJ, Newman JP, Wilson MA. Latent learning drives sleep-dependent plasticity in distinct CA1 subpopulations. bioRxiv. 2020, doi.org/10.1101/2020.02.27.967794

4. Bannister NJ, Larkman AU. Dendritic morphology of CA1 pyramidal neurones from the rat hippocampus: I. Branching patterns. J Comp Neurol. 1995, 360:150–160

5. Valero M, Cid E, Averkin RG, Aguilar J, Sanchez-Aguilera A, Viney TJ, Gomez-Dominguez D, Bellistri E, de la Prida LM. Determinants of different deep and superficial CA1 pyramidal cell dynamics during sharp-wave ripples. Nat Neurosci. 2015, 18

6. Navas-Olive A, Valero M, de Salas A, Jurado-Parras T, Averkin RG, Gambino G, Cid E, de la Prida LM. Multimodal determinants of phase-locked dynamics across deep-superficial hippocampal sublayers during theta oscillations. Nat Commun 11, 2217 (2020). https://doi.org/10.1038/s41467-020-15840-6

Speakers
avatar for Andrea Navas-Olive

Andrea Navas-Olive

PhD Student, Instituto Cajal CSIC
Realistic modelsNEURONGenetic AlgorithmMachine Learning


Sunday July 19, 2020 5:40pm - 6:20pm CEST
Crowdcast
  Featured Talk, Hippocampus
  • Moderator Jean-Marc Fellous; Soledad Gonzalo Cogno; Tom Burns

6:20pm CEST

O7: 'Awake Delta' and Theta-Rhythmic Modes of Hippocampal Network Activity Track Intermittent Locomotor Behaviors in Rat
Nathan Schultheiss, Tomas Guilarte, Tim Allen

Neurostars discussion link

Delta-frequency activity in the local field potential (LFP) is widely believed to correspond to so-called 'cortical silence' during phases of non-REM sleep, but delta in awake behaving animals is not well understood and is rarely studied in detail. By integrating novel analyses of the hippocampal (HC) LFP with simultaneous behavioral tracking, we show for the first time that HC synchronization in the delta frequency band (1-4 Hz) is related to animals' locomotor behaviors during free exploration and foraging in an open field environment. In contrast to well-established relationships between animals' running speeds and the theta rhythm (6-10 Hz), we found that delta was most prominent when animals were stationary or moving slowly (i.e. when theta and fast gamma (65-120 Hz) were weak). Furthermore, delta synchronization often developed rapidly when animals paused briefly between intermittent running bouts.

Next, we developed an innovative strategy for identifying putative modes of network function based on the spectral content of the LFP. By applying hierarchical clustering algorithms to time-windowed power spectra throughout behavioral sessions (i.e. the spectrogram), we categorized moment-by-moment estimations of the power spectral density (PSD) into spectral modes of HC activity. That is, we operationalized putative functional modes of network computation as spectral modes of LFP activity. Delta and theta power were strikingly orthogonal across the resultant spectral modes, suggesting the possibility that delta- and theta-dominated hippocampal activity patterns represent distinct modes of HC function during navigation. Delta and theta were also remarkably orthogonal across precisely-defined bouts of running and stationary behavior, indicating that the stops-and-starts that compose rats' locomotor trajectories are accompanied by alternating delta- and theta-dominated HC states.
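The spectral-mode procedure (windowed PSDs clustered hierarchically) can be sketched in a few lines of Python using SciPy. The LFP below is a random placeholder, and the sampling rate, window length, frequency band and number of clusters are assumptions chosen purely for illustration.

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.cluster.hierarchy import linkage, fcluster

fs = 1000.0                                    # sampling rate (Hz), assumed
rng = np.random.default_rng(3)
lfp = rng.standard_normal(int(600 * fs))       # placeholder for an HC LFP

# Moment-by-moment PSD estimates: one spectrum per time window.
f, t, Sxx = spectrogram(lfp, fs=fs, nperseg=2048)
band = (f >= 1) & (f <= 20)                    # keep the delta-theta range
psd = np.log(Sxx[band].T + 1e-12)              # (n_windows, n_freqs)

# Hierarchical clustering of the windowed spectra into putative modes.
Z = linkage(psd, method='ward')
modes = fcluster(Z, t=2, criterion='maxclust') # e.g. delta- vs theta-mode
```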

We then asked whether the incidence of delta and theta modes was related to the coherence between recording sites in hippocampus or between hippocampus and medial prefrontal cortex (mPFC). We found that intrahippocampal coherences in both the delta-band and the theta-band were monotonically related to theta-delta ratios across modes. Furthermore, in two rats implanted with dual-site recording arrays, we found that theta coherence between HC and mPFC increased during running, and delta-band coherence between mPFC and HC increased during stationary bouts. Taken together, our findings suggest that delta-dominated network modes (and corresponding mPFC-HC couplings) represent functionally-distinct circuit dynamics that are temporally and behaviorally interspersed among theta-dominated modes during spatial navigation. As such, delta modes could play a fundamental role in coordinating mnemonic functions including encoding and retrieval mechanisms, or decision-making processes incorporating prospective or retrospective representations of experience, at a timescale that segments event sequences within behavioral episodes.

Speakers
avatar for Nathan Schultheiss

Nathan Schultheiss

Research Scientist, Psychology, Florida International University


Sunday July 19, 2020 6:20pm - 6:40pm CEST
Crowdcast
  Oral, Hippocampus
  • Moderator Jean-Marc Fellous; Soledad Gonzalo Cogno; Tom Burns

7:00pm CEST

P114: Effect of Diverse Recoding of Granule Cells on Delay Eyeblink Conditioning in A Cerebellar Network
Sang-Yoon Kim, Woochang Lim
Virtual Room: https://meet.google.com/eva-nakc-eba
Teaser: https://youtu.be/ETTcAdO_87c

We consider a ring network for delay eyeblink conditioning and investigate the effect of diverse firing activities of granule (GR) cells on the eyeblink conditioning under the conditioned stimulus (tone) by varying the connection probability $p_c$ from Golgi to GR cells. For an optimal value $p^*_c$, individual GR cells exhibit diverse spiking patterns which are well- or poor-matched with the unconditioned stimulus (airpuff). These diversely-recoded signals, conveyed via parallel fibers (PFs) from GR cells, are then effectively depressed by the error-teaching signals conveyed via climbing fibers (CFs) from the inferior olive. Synaptic weights at well-matched PF–Purkinje cell (PC) synapses of active GR cells are strongly depressed via long-term depression (LTD), while no LTD occurs at poor-matched PF–PC synapses. This kind of “effective” depression at PF–PC synapses coordinates the firing of PCs, which in turn exerts effective inhibitory coordination on the cerebellar nucleus (CN), which evokes the conditioned response (CR; eyeblink). When the learning trial passes a threshold, the CR occurs. In this case, the timing degree $T_d$ becomes good due to the presence of the poor-matched spiking group, which acts as a protection barrier for the timing. With further increase in trials, the strength of the CR, $S_{CR}$, increases due to strong LTD in the well-matched spiking group, while the timing degree decreases. Thus, the overall efficiency degree $L_e$ (taking into consideration both timing and strength of the CR) for the eyeblink increases with trials and eventually saturates. By changing $p_c$, we also investigate the delay eyeblink conditioning and find that a plot of $L_e$ versus $p_c$ forms a bell-shaped curve with a peak at $p^*_c$ (where the diversity degree $D$ in the firing of GR cells is also maximal). The more diverse the spiking patterns of GR cells, the more effective the CR for the eyeblink.

Speakers
WL

Woochang Lim

Institute for Computational Neuroscience and Department of Science Education, Daegu National University of Education



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 02

7:00pm CEST

P115: Approximating Information Filtering of a Two-stage Neural System
Google Meet link: meet.google.com/zmy-cdqa-gea
 
If you miss the presentation or have further questions, I would be more than happy to be contacted at: gregory.knoll[at]bccn-berlin[dot]de

Download the poster
   
Read the paper in Biological Cybernetics

Gregory Knoll, Žiga Bostner, Benjamin Lindner 

Information streams are processed in the brain by populations of neurons tuned to perform specific computations, the results of which are forwarded to subsequent processing stages. Building on theoretical results for the behavior of single neurons and populations, we investigate the extent to which a postsynaptic cell (PSC) can detect the information present in the output stream of a population that has encoded a signal. In this two-stage system, the population is a simple feedforward network of integrate-and-fire neurons which integrate and relay the signal, reminiscent of auditory or electroreceptor afferents in the sensory periphery. Depending on the application, the information relevant for the PSC may be contained in a specific frequency band of the stimulus, requiring the PSC to properly tune its information encoding to that band (information filtering). In the specific setup studied here, information filtering is associated with detecting synchronous activity. It was found that synchronous activity of a neural population selectively encodes information about high-frequency bands of a broadband stimulus, and it was hypothesized that this information can be read out by coincidence-detector cells that are activated only by synchronous input. Firstly, we test this hypothesis by matching the key characteristic of information filtering, the spectral coherence function, computed between the PSC output and the stimulus and between the time-dependent synchrony of the population output and the stimulus. We show that the relations between the synchrony and PSC thresholds, and between the synchrony window and the PSC time constant, are roughly linear. This implies that the synchronous output of the population can be taken as a proxy for the postsynaptic coincidence detector and, conversely, that the PSC can be made to detect synchrony (or coincidence) by adjusting its time constant and threshold. Secondly, we develop an analytical approximation for the coherence function between the PSC and the stimulus and demonstrate its accuracy by comparison against numerical simulations, in both the fluctuation-dominated and mean-driven regimes of the PSC.
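The central quantity here, the spectral coherence between the stimulus and an output signal (the PSC spike train or the population synchrony), can be estimated with standard tools. A minimal sketch follows, with placeholder signals standing in for the simulated outputs of the two-stage system; the sampling rate and window length are assumptions.

```python
import numpy as np
from scipy.signal import coherence

fs = 10_000.0
rng = np.random.default_rng(4)
stimulus = rng.standard_normal(int(60 * fs))      # broadband stimulus

# Placeholders for the two outputs compared in the study: the PSC output
# and the time-dependent synchrony of the population (both would come
# from simulations of the two-stage system).
psc_output = stimulus + rng.standard_normal(stimulus.size)
synchrony = stimulus + rng.standard_normal(stimulus.size)

# Spectral coherence with the stimulus characterises information
# filtering: a high-pass coherence indicates synchrony-based encoding.
f, coh_psc = coherence(stimulus, psc_output, fs=fs, nperseg=4096)
f, coh_syn = coherence(stimulus, synchrony, fs=fs, nperseg=4096)
```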

Speakers
avatar for Gregory Knoll

Gregory Knoll

Physics, Humboldt-Universitaet zu Berlin
B.S., Biopsychology, UC Santa BarbaraB.E., Computer Engineering, City University of New YorkM.S., Computational Neuroscience, BCCN BerlinCurrently pursuing a doctorate in the lab of Professor Benjamin Lindner at Humboldt Universität zu Berlin



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 08

7:00pm CEST

P130: Unifying information theory and machine learning in a model of cochlear implant electrode discrimination
Currently having an internet disruption at home; if you have any questions, please send me an email at xiao.gao@unimelb.edu.au. I am more than happy to answer your questions.

Xiao Gao
, David Grayden, Mark McDonnell

Despite the success of cochlear implants (CIs) over more than three decades, wide inter-subject variability in speech perception is reported [1]. The key factors that cause variability between users are unclear. We previously developed an information theoretic modelling framework that enables estimation of the optimal number of electrodes and quantification of electrode discrimination ability [2, 3]. However, the optimal number of electrodes was estimated based only on statistical correlations between channel outputs and inputs, and the model neither quantitatively reproduced psychophysical measurements nor addressed inter-subject variability.

Here, we unified information theoretic and machine learning techniques to investigate the key factors that may limit the performance of CIs. The framework used a neural network classifier to predict which electrode was stimulated for a given simulated activation pattern of the auditory nerve, and mutual information was then estimated between the actual stimulated electrode and the predicted one.

Using the framework, electrode discrimination was quantified over a range of parameter choices, as shown in Fig. 1. The columns from left to right show how model performance is affected by, respectively: the distance between electrodes and auditory nerve fibres, r; the number of surviving fibres, N; the maximum current level (modelled as the percentage of the surviving fibres that generate action potentials for a given stimulated electrode); and the attenuation of electrode current, A. The parameters were chosen to reflect the key factors believed to limit the performance of CIs. The model is sensitive to these choices: smaller r, larger N, and higher current attenuation lead to higher mutual information and improved classification.
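The mutual information between the actually stimulated electrode and the classifier's prediction can be computed from the paired labels with a plug-in estimate. A minimal sketch follows; the electrode count and the labels are illustrative, whereas in the study the predictions would come from the trained neural network classifier.

```python
import numpy as np

def mutual_information(actual, predicted, n_classes):
    """Plug-in estimate of I(actual; predicted) in bits from paired
    labels, e.g. stimulated vs classifier-predicted electrode."""
    joint = np.zeros((n_classes, n_classes))
    for a, p in zip(actual, predicted):
        joint[a, p] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# Perfect discrimination of 22 electrodes gives log2(22) ≈ 4.46 bits.
labels = np.arange(22).repeat(50)
print(mutual_information(labels, labels, 22))
```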

This approach provides a flexible framework that may be used to investigate the key factors that limit the performance of cochlear implants. We aim to investigate its application to personalised configurations of CIs.

Acknowledgments

This work is supported by a McKenzie Fellowship, The University of Melbourne.

References

[1] Holden LK, Finley CC et al. “Factors affecting open-set word recognition in adults with cochlear implants”, Ear Hear, Vol. 34, 342-360, 2013

[2] Gao X, Grayden DB, McDonnell MD, “Stochastic information transfer from cochlear implant electrodes to auditory nerve fibers”, Physical Review E 90 (2014) 022722.

[3] Gao X, Grayden DB, McDonnell MD, “Modeling electrode place discrimination in cochlear implant stimulation”, IEEE Transactions on Biomedical Engineering 64 (2017) 2219–2229.

Speakers
avatar for (Demi) Xiao Gao

(Demi) Xiao Gao

McKenzie Research Fellow, Department of Biomedical Engineering, University of Melbourne
Demi Xiao Gao is a McKenzie research fellow in the Department of Biomedical Engineering with an honorary appointment in the School of Physics, University of Sydney. Demi received her Bachelor's degree in Computer Science and Master's in Biology, and completed her Ph.D. in Information Technology... Read More →



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 11

7:00pm CEST

P174: Modal-Polar Representation of Evoked Response Potentials

 https://uni-sydney.zoom.us/j/96850360307

Rawan El-Zghir, Natasha Gabay, Peter Robinson

Event-related potentials (ERPs), significant voltage fluctuations of the brain following visual, auditory, or sensory stimulation of the nervous system, have long attracted the attention of neuroscientists. ERPs are key elements for investigating cognitive features and signal processing in the brain. To predict ERPs corresponding to distinct arousal states, we use a corticothalamic neural field theory which contains physiologically based parameters corresponding to different physical quantities. Within this framework, ERPs depend on transcendental equations which are not analytically tractable. We approximate the temporal transfer function in terms of poles, or resonances, to derive formulas for the ERP which greatly simplify its analytic form. The dominant resonances of the system correspond to the slow, alpha, and beta frequencies. Our calculations are based on contour integration via the Cauchy residue theorem, which allows us to find explicit expressions for the ERP in terms of the real and imaginary parts of the residues and poles. For each arousal state, we isolate the different resonances of the system and find that the wake eyes-closed state is distinguished by a more prominent alpha resonance compared to the wake eyes-open state, as expected. We found that 5 poles are sufficient to capture the main dynamics of the system in the wake eyes-closed case (with around 4% accuracy at the alpha peak) and in the wake eyes-open case (with around 3% accuracy at the alpha peak). Similarly, we found that 6 poles are sufficient to reproduce ERPs corresponding to REM, S1, and S2 sleep stages, whereas only 4 poles are sufficient to capture the dynamics of the deepest sleep stage (slow-wave sleep). This framework provides a physiologically-based tool which predicts ERPs corresponding to a given transfer function.
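The pole approximation amounts to writing the ERP as a sum of damped oscillations, one per conjugate pole pair: ERP(t) ≈ Σ_j 2 Re[R_j exp(s_j t)], where the s_j are the poles of the transfer function and the R_j the corresponding residues. A minimal numerical sketch follows, with illustrative (not fitted corticothalamic) pole and residue values.

```python
import numpy as np

def erp_from_poles(t, poles, residues):
    """Evaluate an ERP as a sum over transfer-function poles via the
    residue theorem: each conjugate pole pair contributes a damped
    oscillation 2*Re[R * exp(s*t)]."""
    erp = np.zeros_like(t)
    for s, r in zip(poles, residues):
        erp += 2.0 * np.real(r * np.exp(s * t))
    return erp

t = np.linspace(0, 0.5, 2000)                       # seconds
poles = np.array([-8 + 2j * np.pi * 10,             # alpha-like resonance
                  -15 + 2j * np.pi * 2])            # slow resonance
residues = np.array([1.0 + 0.5j, 0.8 - 0.2j])       # illustrative values
erp = erp_from_poles(t, poles, residues)
```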

Speakers
avatar for Rawan El-Zghir

Rawan El-Zghir

The University of Sydney



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 14

7:00pm CEST

P175: Analytic Model for Feature Maps in the Primary Visual Cortex
Xiaochen Liu, Peter Robinson

This study proposes a compact analytic model that describes the orientation preference (OP) and ocular dominance (OD) maps of the primary visual cortex (V1) in hypercolumns, within which OP and OD are arranged as pinwheels and stripes. The model consists of two parts: (i) an OP operator, which uses a linear combination of weighted partial derivatives to incorporate the small-scale local neuron sensitivity to the preferred orientation of the visual inputs; and (ii) a receptive field (RF) operator, which models the spatial RF structure of a V1 simple cell and is derived by finding the neural activity at an arbitrary location with a directionally anisotropic modulation of projections from neighboring neurons at scales of a few tenths of a millimetre. The parameters of the proposed OP-OD map model are tuned to maximize the neural response at the desired OP, by matching the width of OP tuning curves with experimental results. Moreover, we find that the weights of the partial derivatives in the OP operator do not significantly affect the OP selectivity of the neuron, whereas the overall envelope of the RF operator does. This agrees with Hubel and Wiesel's prediction [1] that the orientation tuning width of a V1 simple cell is related to the elongation of its RF.

The simplified OP-OD map is used to provide inputs to neural field theory (NFT) analysis of the approximately periodic OP-OD structure of V1. This is done by decomposing the OP-OD map representation in the Fourier domain to generate a sparse set of Fourier coefficients. Only the smallest number of coefficients sufficient to preserve the basic spatial arrangement of the OP-OD map is passed to NFT for investigating OP-map-related neural activities. The decomposition is also applied to more realistic OP maps generated from published models, and its properties are discussed.
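The sparse decomposition step can be illustrated directly: take the 2-D FFT of a (complex-valued) OP map, keep only the largest-magnitude coefficients, and reconstruct. The toy pinwheel map, grid size and number of retained coefficients below are placeholders, not the published model's maps.

```python
import numpy as np

def sparse_fourier(op_map, n_keep):
    """Keep only the n_keep largest-magnitude Fourier coefficients of a
    complex OP map and reconstruct it: a cheap low-dimensional
    representation of the kind passed on to NFT analysis."""
    coeffs = np.fft.fft2(op_map)
    threshold = np.sort(np.abs(coeffs).ravel())[-n_keep]
    sparse = np.where(np.abs(coeffs) >= threshold, coeffs, 0)
    return np.fft.ifft2(sparse), sparse

# Toy OP map: a pinwheel-like phase pattern encoded as a complex field.
x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
op = np.exp(1j * np.arctan2(y, x))
recon, kept = sparse_fourier(op, n_keep=32)
```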

1. Hubel, D.H. and Wiesel, T.N. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. J Physiol. 1962,160(1): 106-154.

Speakers
XL

Xiaochen Liu

SCHOOL OF PHYSICS, University of Sydney, Australia



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 17

7:00pm CEST

P18: Inhibitory Gating in The Dentate Gyrus
Claudio Mirasso, Cristian Estarellas, Santiago Canals
Google Meet: https://meet.google.com/mjk-usvk-hin

Electrophysiological recordings have demonstrated a tight inhibitory control of hilar interneurons over Dentate Gyrus granule cells (DGgc) (Bragin et al. 1995; Pernía-Andrade et al. 2014). This excitation/inhibition balance is crucial for information transmission (Bartos et al., 2001) and likely relies on inhibitory synaptic plasticity (Vogels et al., 2011). Our experiments show that LTP induction in the Perforant Pathway (PP) not only potentiates glutamatergic synapses but unexpectedly decreases feed-forward inhibition in the DG, facilitating activity propagation in the circuit and modifying long-range connectivity in the brain. To investigate this phenomenon, we study a circuit of populations of point neurons described by the Izhikevich model. The model contains entorhinal cortex (EC) neurons, DGgc, mossy cells, basket cells and hilar interneurons. The proportion of neurons per population and the connectivity of the neural network are based on published anatomical data and are fitted to reproduce experimental in vivo electrophysiological recordings (Pernía-Andrade et al. 2014). The effect of LTP on the local DG circuit is studied in the model by adapting synaptic weights in the EC projections. The results obtained from the model, before and after LTP induction, support the counterintuitive experimental observation of synaptic depression in the feed-forward inhibitory connection induced by LTP. We show that LTP increases the efficiency of the glutamatergic input to recruit the inhibitory network, resulting in a reciprocal cancellation of basket cell population activity. We validate the model's result with electrophysiological experiments inducing LTP in the PP of anaesthetized mice in vivo and recording excitatory and inhibitory currents in vitro in the same animals. Overall, our findings suggest that LTP of the EC input increases the excitation/inhibition balance and facilitates activity propagation to the next station in the circuit by recruiting an interneuron-interneuron network that inhibits the tight control of basket cells over DGgc firing.
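For reference, the single-neuron building block of such a network, the Izhikevich model, is compact enough to sketch in full; the network study adds populations and synaptic connectivity on top of this. The parameter values below are the standard regular-spiking defaults from the published model, not the fitted DG values.

```python
import numpy as np

def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5, v0=-70.0):
    """Integrate a single Izhikevich point neuron (regular-spiking
    defaults) driven by input current I (one value per time step)."""
    v, u = v0, b * v0
    spikes, vs = [], []
    for k, Ik in enumerate(I):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + Ik)
        u += dt * a * (b * v - u)
        if v >= 30.0:                   # spike: reset v and bump u
            spikes.append(k * dt)
            v, u = c, u + d
        vs.append(v)
    return np.array(vs), spikes

# Tonic firing under constant drive (times in ms).
v_trace, spike_times = izhikevich(I=np.full(2000, 10.0))
```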

References

Bragin A, Jandó G, Nádasdy Z, Hetke J, Wise K, Buzsáki G. Gamma (40-100 Hz) oscillation in the hippocampus of the behaving rat. J Neurosci. 1995, 15(1 Pt 1):47-60.

Pernía-Andrade A.J, Jonas P. Theta-gamma-modulated synaptic currents in hippocampal granule cells in vivo define a mechanism for network oscillations. Neuron. 2014, 81(1): 140–152.

Bartos M, Vida I, Frotscher M, Jörg G, Jonas P.Rapid Signaling at Inhibitory Synapses in a Dentate Gyrus Interneuron Network. J. Neurosci. 2001, 21 (8) 2687-2698.

Vogels TP, Sprekeler H, Zenke F, Clopath C, Gerstner W. Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science. 2011, 334(6062):1569-73

Speakers
avatar for Cristian Estarellas

Cristian Estarellas

PhD Student, Instituto de Física Interdisciplinar y Sistemas Complejos



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 11

7:00pm CEST

P192: Estimating Transfer Entropy in Continuous Time For Spike Trains
Google Meet Link

Alternate Link


David Shorten
, Joseph Lizier, Richard Spinney

Transfer entropy (TE) [1] is a measure of the flow of information between components in a system. It is defined as the mutual information between the past of a source and the present state of a target, conditioned on the past of the target. It has received widespread application in neuroscience [2], both for characterising information flows and for inferring effective connectivity from data sources such as MEG, EEG, fMRI, calcium imaging and electrode arrays. Previous applications of TE to spike trains have relied on time discretisation, where the spike train is divided into time bins and the TE is estimated from the numbers of spikes occurring in each bin. There are, however, several disadvantages to estimating TE from time-discretised data [3]. Firstly, as time discretisation is a lossy transformation of the data, any estimator based on it is not consistent: it will not converge to the true value of the TE in the limit of infinite data. Secondly, whilst the loss of resolution decreases with decreasing bin size, smaller bins require higher-dimensional history embeddings to capture correlations over the same time intervals. This results in an exponential increase in the size of the state space being sampled, and therefore in the data requirements.

Recently, a continuous-time framework [3] for transfer entropy was developed. This framework has a distinct advantage in that it demonstrates that, for spike trains, the TE can be calculated solely from contributions occurring at spikes. This presentation reports on a newly developed continuous-time estimator of transfer entropy for spike trains which utilises this framework. Importantly, this new estimator is a consistent estimator of the TE. As it does not require time discretisation, it calculates the TE based on the raw interspike interval timings of the source and target neurons. Similar to the popular KSG estimator [4] for mutual information and TE, it performs estimation using the statistics of k-nearest-neighbour searches in the target and source history spaces. Tests on synthetic datasets of coupled and uncoupled point processes have confirmed that the estimator is consistent and has low bias; similar tests of the time-discretised estimator have found it to be inconsistent and to have larger bias. The efficacy of the estimator is further demonstrated on the task of inferring the connectivity of biophysical models of the pyloric network of the crustacean stomatogastric ganglion. Granger causality (which is equivalent to TE under the assumption of Gaussian variables) has been shown to be incapable of inferring this particular network [5], although it was demonstrated that it could be inferred by a generalised linear model.
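For readers unfamiliar with this estimator family, the sketch below implements the classical KSG (algorithm 1) k-nearest-neighbour estimator of mutual information, on whose search machinery the new TE estimator is modelled; it is not the authors' continuous-time TE estimator itself. It uses Chebyshev-norm neighbour searches, as is standard, and a tiny radius shrink to approximate the strict inequality in the marginal counts.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def ksg_mutual_information(x, y, k=4):
    """KSG (algorithm 1) k-nearest-neighbour estimate of I(X;Y) in nats.
    x, y: (n_samples, dim) arrays; max-norm distances throughout."""
    n = len(x)
    xy = np.hstack([x, y])
    # Distance to the k-th neighbour in the joint space (first hit is
    # the point itself, hence k + 1 below).
    d, _ = cKDTree(xy).query(xy, k=k + 1, p=np.inf)
    eps = d[:, -1]
    # Count strictly-closer neighbours in each marginal space.
    nx = cKDTree(x).query_ball_point(x, eps - 1e-12, p=np.inf,
                                     return_length=True) - 1
    ny = cKDTree(y).query_ball_point(y, eps - 1e-12, p=np.inf,
                                     return_length=True) - 1
    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))

rng = np.random.default_rng(5)
x = rng.standard_normal((5000, 1))
y = x + 0.5 * rng.standard_normal((5000, 1))    # correlated pair
print(ksg_mutual_information(x, y))             # ≈ 0.5 * ln(5) ≈ 0.80 nats
```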

References

1. Schreiber T. Measuring information transfer. Physical review letters. 2000, 85(2), 461

2. Wibral M, Vicente R, Lizier JT, editors. Directed information measures in neuroscience. Berlin: Springer; 2014

3. Spinney RE, Prokopenko M, Lizier JT. Transfer entropy in continuous time, with applications to jump and neural spiking processes. Physical Review E. 2017, 95(3), 032319

4. Kraskov A, Stögbauer H, Grassberger P. Estimating mutual information. Physical review E. 2004, 69(6), 066138.

5. Kispersky T, Gutierrez GJ, Marder E. Functional connectivity in a rhythmic inhibitory circuit using Granger causality. Neural systems & circuits. 2011, 1(1), 9

Speakers
DS

David Shorten

PhD Student, Complex Systems Research Group, University of Sydney
avatar for Joseph Lizier

Joseph Lizier

Associate Professor, Centre for Complex Systems, The University of Sydney
My research focusses on studying the dynamics of information processing in biological and bio-inspired complex systems and networks, using tools from information theory such as transfer entropy to reveal when and where in a complex system information is being stored, transferred and... Read More →


poster pdf

Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 13

7:00pm CEST

P193: Input strength dependence of the beta component of gamma-band auditory steady-state responses in patients with schizophrenia
Link to the Zoom session

If you missed the session and still want to talk or ask questions, email me at cmetzner[at]ni.tu-berlin.de

Christoph Metzner
, Volker Steuber

The mechanisms underlying circuit dysfunctions in schizophrenia (SCZ) remain poorly understood. Auditory steady-state responses (ASSRs), especially in the gamma and beta bands, have been suggested as a potential biomarker for SCZ. While the reduction of 40Hz power for 40Hz drive has been well established and replicated in SCZ patients, studies are inconclusive when it comes to an increase in 20Hz power during 40Hz drive [1]. Several factors might explain the inconsistencies, including differences in the sensitivity of the recording modality (EEG vs MEG), differences in stimuli (click trains vs amplitude-modulated tones) and large differences in stimulus amplitude.

Here, we used a computational model of ASSR deficits in SCZ [2,3,4], in which increased IPSC decay times at GABAergic synapses produce ASSR deficits as seen experimentally. We investigated the effect of input strength on gamma- and beta-band power during gamma ASSR stimulation. We found that the pronounced increase in beta power during gamma stimulation seen experimentally could only be reproduced in the model for a specific range of input strengths. More specifically, if the input was too weak, the network failed to produce a strong oscillatory rhythm. When the input was in the specific range, the rhythmic drive at 40Hz produced a strong 40Hz rhythm in the control network; in the ‘SCZ-like’ network, however, the prolonged inhibition led to so-called ‘beat-skipping’, where the network would only respond strongly to every other input. This mechanism was responsible for the emergence of the pronounced 20Hz beta peak in the power spectrum. However, if the input exceeded a certain strength, the 20Hz peak in the power spectrum disappeared again. In this case, prolonged inhibition due to the increased IPSC times was insufficient to suppress the now stronger gamma drive from the input, resulting in an absence of beat-skipping and a single peak at 40Hz in the power spectrum.
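The beat-skipping mechanism can be caricatured with a single leaky integrator rather than the full network: 40Hz pulses drive the unit, and each output spike recruits a self-inhibition whose decay constant plays the role of the IPSC decay time. All constants below are chosen purely so that the toy switches between one-to-one following and every-other-cycle responses; they do not come from the published model.

```python
import numpy as np

def driven_lif(tau_inh, t_max=1000.0, dt=0.1):
    """Toy 'beat-skipping' demo: a leaky integrator receives 40 Hz
    excitatory pulses and, whenever it fires, a self-inhibition decaying
    with time constant tau_inh (ms). A long decay (the 'SCZ-like' case)
    leaves inhibition strong at the next pulse, so the unit responds
    only to every other cycle (20 Hz)."""
    v, g, spikes = 0.0, 0.0, []
    for i in range(int(t_max / dt)):
        t = i * dt
        drive = 1.8 if (t % 25.0) < 1.0 else 0.0   # 40 Hz pulse train
        v += dt * (-v / 10.0 + drive - g)           # tau_m = 10 ms
        g *= np.exp(-dt / tau_inh)
        if v >= 1.0:                                # threshold crossing
            spikes.append(t)
            v, g = 0.0, g + 0.3                     # reset, recruit inhibition
    return spikes

print(len(driven_lif(tau_inh=8.0)), "spikes in 1 s (control, ~40 Hz)")
print(len(driven_lif(tau_inh=25.0)), "spikes in 1 s ('SCZ-like', ~20 Hz)")
```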

Here, we employed an established model of gamma and beta band ASSR deficits in SCZ to explore the dependence of a beta component in response to gamma drive on the strength of the input. Our finding that the beta component only existed for a specific range of input strengths might explain the seemingly inconsistent reporting in experimental studies and suggests that future ASSR studies should explicitly explore different amplitudes of their stimuli.

References

1. Thune H, Recasens M, Uhlhaas PJ. The 40-Hz auditory steady-state response in patients with schizophrenia: a meta-analysis. JAMA Psychiatry 2016, 73(11).

2. Vierling-Claassen D, Siekmeier P, Stufflebeam S, Kopell N. Modeling GABA alterations in schizophrenia: a link between impaired inhibition and altered gamma and beta range auditory entrainment. J Neurophysiol 2008, 99(5).

3. Metzner C. [Re] Modeling GABA alterations in schizophrenia: a link between impaired inhibition and gamma and beta auditory entrainment. ReScience 3(1).

4. Metzner C, Zurowski B, Steuber V. The role of parvalbumin-positive interneurons in auditory steady-state response deficits in schizophrenia. Sci Rep (2019), 9(1).

Speakers
avatar for Christoph Metzner

Christoph Metzner

PostDoc, Department of Software Engineering and Theoretical Computer Science, Technische Universität Berlin



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 10

7:00pm CEST

P200: Computational modeling of the input/output mapping in the cerebellar cortex
Zoom meeting link

Akshay Markanday
, Sungho Hong, Junya Inoue, Peter Dicke, Erik De Schutter, Peter Thier

The cerebellar cortex is a brain region deeply involved in sensorimotor coordination and adaptation. It receives external inputs via axons called the mossy fibers (MF), delivering diverse information, including sensory- and motor signals, from other brain regions. Then, the output neurons, Purkinje cells (PC), transmit the result of computation by the network. Many studies have elucidated how different stages of computation in this neural circuit represents sensory and motor information. However, circuit-level information processing has not been well-understood.

Here we investigated this question by characterizing how MF firing transforms into PC output, using recordings from those cell types (n=110 and 135, respectively) in rhesus monkeys (Macaca mulatta; n=2) performing a sensorimotor task. We trained the animals on a saccadic eye movement task, in which they followed a target jumping back and forth between two horizontal target locations. The fast pace and repetitive nature of the task led to a gradual decline in saccade velocities (fatigue).

We found that the firing rates of MFs linearly encoded eye speed and saccade duration, consistent with previous studies (e.g. [1]). Using the linear rate-coding property of MFs, and also of PCs, we constructed rate-coding models of individual cells from the data and formed virtual populations of those models for each cell type. This method enabled us to analyze eye-speed-dependent variability of the population responses across trials, beyond the firing rate alone.

Using the virtual populations of MFs and PCs, we found that the activities of both MFs and PCs can be characterized by low-dimensional “manifolds” [2] that resemble limit cycles. The PC manifold is higher-dimensional than that of the MFs and carries more complex representations of the variability in eye movements. Nonetheless, there exists a linear transformation between the two populations [3], which can accurately predict both the average and the velocity-dependent variability of the firing rate of individual neurons.
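The linear read-out between the two populations can be illustrated with an ordinary least-squares fit. The rates below are random placeholders for the virtual-population rates; the point is only the shape of the computation (fit W such that PC ≈ MF · W, then assess prediction quality).

```python
import numpy as np

rng = np.random.default_rng(6)
n_time, n_mf, n_pc = 500, 110, 135

# Placeholder population rates; in the study these come from virtual
# populations of rate-coding models fitted to the recorded cells.
mf_rates = rng.standard_normal((n_time, n_mf))
true_w = rng.standard_normal((n_mf, n_pc)) / np.sqrt(n_mf)
pc_rates = mf_rates @ true_w + 0.1 * rng.standard_normal((n_time, n_pc))

# Fit the linear transformation PC ≈ MF @ W by least squares.
W, *_ = np.linalg.lstsq(mf_rates, pc_rates, rcond=None)
pred = mf_rates @ W
r2 = 1 - np.sum((pc_rates - pred) ** 2) / np.sum((pc_rates - pc_rates.mean(0)) ** 2)
```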

Based on these results, we suggest that the MFs deliver a compressed, low-dimensional copy of sensorimotor information from other brain areas, possibly via convergence [3], and that the cerebellar cortical circuit decompresses/transforms it into higher-dimensional outputs carrying a reorganized representation of the behavioral variability.

References

1. Ohtsuka K, Noda H. Burst discharges of mossy fibers in the oculomotor vermis of macaque monkeys during saccadic eye movements. Neurosci Res. 1992, 15, 102–114.

2. Gallego JA, Perich MG, Miller LE, et al. Neural manifolds for the control of movement. Neuron. 2017, 94, 978–984.

3. Tanaka H, Ishikawa T, Kakei S. Neural evidence of the cerebellum as a state predictor. Cerebellum. 2019, 18, 349–371.

Speakers
avatar for Sungho Hong

Sungho Hong

Computational Neuroscience Unit, Okinawa Institute of Science and Technology



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 19

7:00pm CEST

P209: A Predictive Model of Serotonergic Fiber Densities Based on Reflected Fractional Brownian Motion
Skirmantas Janusonis, Ralf Metzler, Thomas Vojta

All vertebrate brains contain a dense matrix of thin axons (fibers) that release serotonin (5-hydroxytryptamine), a neurotransmitter that modulates a wide range of neural, glial, and vascular processes. Altered serotonergic fiber densities have been associated with a number of mental disorders and conditions, such as Autism Spectrum Disorder, Major Depressive Disorder, and exposure to 3,4-methylenedioxymethamphetamine (MDMA, "Ecstasy"). Also, serotonergic fibers can regrow in adulthood and therefore can support the functional recovery of the brain after injury. However, the processes that lead to the self-organization and plasticity of this fiber system remain poorly understood.

Our previous research has shown that the trajectories of serotonergic fibers in terminal fields can be modeled as random walks [1, 2]. We now introduce a computational model based on fractional Brownian motion (FBM), a continuous stochastic process that generalizes normal Brownian motion and allows correlations between non-overlapping increments. The model capitalizes on the recently discovered properties of reflected FBM (rFBM) in one-dimensional domains [3, 4].

FBM is parametrized by the Hurst index (H), which allows subdiffusion (H < ½) and superdiffusion (H > ½). We show that in the superdiffusive regime, rFBM walks recapitulate some key features of regional serotonergic fiber densities on the whole-brain scale. Specifically, by using supercomputing simulations of fibers as FBM paths in two-dimensional brain-like domains, we demonstrate that the resultant steady-state distributions approximate the fiber distributions in mouse brain sections immunostained for the serotonin transporter (a marker for serotonergic fibers in the adult brain). These results do not depend sensitively on the H-value (for H > ½), precise estimates of which are currently difficult to obtain experimentally.
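A small self-contained sketch of a reflected FBM path is given below: exact fractional Gaussian noise is generated by Cholesky factorisation of its covariance, and the walk is reflected stepwise at both walls of a one-dimensional interval. The step size, domain width and path length are arbitrary illustration choices, not the supercomputing setup of the study.

```python
import numpy as np

def fgn(n, hurst, rng):
    """Exact fractional Gaussian noise via Cholesky factorisation of its
    covariance (fine for the modest path lengths used here)."""
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * hurst)
                   - 2 * np.abs(k) ** (2 * hurst)
                   + np.abs(k - 1) ** (2 * hurst))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def reflected_fbm(n, hurst, width, rng, step=0.05):
    """FBM walk confined to [0, width]: correlated FGN increments with
    stepwise reflection at both walls."""
    xi = step * fgn(n, hurst, rng)
    x, path = 0.5 * width, np.empty(n)
    for i in range(n):
        x += xi[i]
        if x < 0:
            x = -x                    # reflect at the lower wall
        if x > width:
            x = 2 * width - x         # reflect at the upper wall
        path[i] = x
    return path

rng = np.random.default_rng(7)
path = reflected_fbm(2000, hurst=0.8, width=1.0, rng=rng)  # superdiffusive
# For H > 1/2 the steady-state density accumulates near the walls.
hist, edges = np.histogram(path, bins=50, range=(0, 1.0), density=True)
```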

This novel framework can support predictive descriptions and manipulations of the serotonergic matrix and it can be further extended to incorporate the detailed physical properties of the fibers and their environment. We also show that this neuroscience-motivated approach can stimulate theoretical investigations of rFBM in two- and three-dimensional domains, with potential applications in other fields of science.

Acknowledgements This research is funded by the National Science Foundation (grants #1822517 and #1921515 to SJ), the National Institute of Mental Health (grant #MH117488 to SJ), the California NanoSystems Institute (Challenge grants to SJ), the Research Corporation for Science Advancement (a Cottrell SEED Award to TV), the German Research Foundation (DFG grant #ME 1535/7-1 to RM), and the Foundation of Polish Science (an Alexander von Humboldt Polish Honorary Research Scholarship to RM).

References

[1] Janušonis S., Detering N. A stochastic approach to serotonergic fibers in mental disorders. Biochimie. 2019, 161, 15-22.

[2] Janušonis S., Mays K.C., Hingorani M.T. Serotonergic fibers as 3D-walks. ACS Chem Neurosci. 2019, 10, 3064-3067.

[3] Wada A.H.O., Vojta T. Fractional Brownian motion with a reflecting wall. Phys. Rev. E. 2018, 97, 020102.

[4] Guggenberger T., Pagnini G., Vojta T., Metzler R. Fractional Brownian motion in a finite interval: correlations effect depletion or accretion zones of particles near boundaries. New J. Phys. 2019, 21, 022002.

Speakers
avatar for Skirmantas Janusonis

Skirmantas Janusonis

Associate Professor, University of California, Santa Barbara
My laboratory investigates the self-organization of the brain serotonergic matrix. Our research is inherently interdisciplinary and spans molecular neurobiology, advanced microscopy (including live imaging with holotomography, super-resolution microscopy), midbrain neuronal cell cultures... Read More →


Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 16

7:00pm CEST

P210: Seizure pathways change on circadian and slower timescales in individual patients with focal epilepsy
Zoom meeting link

Yujiang Wang


Abstract

Personalised medicine requires that treatments adapt not only to the patient, but to changing factors within each individual. Although epilepsy is a dynamic disorder characterised by pathological fluctuations in brain state, surprisingly little is known about whether and how seizures vary in the same patient. We quantitatively compared within-patient seizure network evolutions using intracranial electroencephalographic (iEEG) recordings of over 500 seizures from 31 patients with focal epilepsy (mean 16.5 seizures/patient). In all patients, we found variability in seizure paths through the space of possible network dynamics. Seizures with similar pathways tended to occur closer together in time (Fig. 1), and a simple model suggested that seizure pathways change on circadian and/or slower timescales in the majority of patients. These temporal relationships occurred independently of whether the patient underwent antiepileptic medication reduction. Our results suggest that various modulatory processes, operating at different timescales, shape within-patient seizure evolutions, leading to variable seizure pathways that may require tailored treatment approaches.

Reference

Schroeder GM, Diehl B, Chowdhury FA, Duncan JS, de Tisi J, Trevelyan AJ, Forsyth R, Jackson A, Taylor PN, Wang Y. Seizure pathways change on circadian and slower timescales in individual patients with focal epilepsy. Proc Natl Acad Sci U S A. 2020;117(20):11048-11058. doi:10.1073/pnas.1922084117

Speakers
avatar for Yujiang Wang

Yujiang Wang

Principal Investigator, Newcastle University



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 05

7:00pm CEST

P212: Neuron conduction delay plasticity for unsupervised learning
Joshua Arnold, Peter Stratton, Janet Wiles

Zoom meeting id: 98578299540          Meeting link: https://uqz.zoom.us/j/98578299540

Spiking neurons inherently represent time through their momentary, discrete action potentials; as such, they are well poised to process spatiotemporal data. Despite their temporal nature, most computational learning rules focus on modulating synaptic efficacy (weight), which only indirectly influences a neuron's temporal dynamics. Weight-based rules are well suited to solving synchronous spatial learning tasks, as demonstrated by the surge of interest in rate-coded neurons performing frame-based image classification using backpropagation. For temporal tasks, however, weight-based learning rules often implicitly rely on the temporal dynamics of membrane equations or synaptic transfer functions to discriminate between spatially identical, but temporally distinct, inputs. Allowing spiking neurons to perform some aspect of explicit temporal learning offers significant advantages for learning asynchronous spatiotemporal patterns compared to weight-based rules alone.

With improvements in imaging techniques, there is accumulating evidence for action-potential conduction velocity plasticity over long and short timescales [1, 2]. The biological mechanisms implementing Conduction Delay Plasticity (CDP) could include myelination, changes in axon diameter, changes to node of Ranvier length, bouton movement, or, likely, some combination of these and other mechanisms. While the precise nature and interaction of the biological mechanisms underlying CDP remain elusive, computational models provide a framework in which theories can be tested. Several CDP learning rules have been suggested, with greatly varying levels of biological fidelity and computational efficiency; here we focus on one rule, Synaptic Delay Variance Learning [3].

We demonstrate the ability of a Leaky Integrate-and-Fire spiking model using only CDP (no weight learning) to learn a repeating spatiotemporal pattern in a continuous-time input stream with no training signal; that is, the delays self-organise to represent the temporal structure of the input. A neuron receives 2000 afferents firing with Poisson distributions of 10Hz, while the embedded pattern is presented with a Poisson distribution of 5Hz and consists of 500 afferents firing once within a 50ms period. The input is normalised such that the patterns cause no change in overall activity during presentations, and all afferents involved in the pattern are adjusted to maintain a 10Hz firing rate. After 250 seconds of training, the neuron is tested for 50 seconds and successfully responds to 99.7% of pattern presentations with 3.1% false positives, averaged over 100 trials. These results demonstrate CDP as a functional computational learning rule enabling spiking neurons to perform unsupervised learning of spatiotemporal data.
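A toy delay-plasticity update, much simpler than (but in the spirit of) the SDVL rule used in the study, is sketched below: after each postsynaptic spike, every afferent whose spike arrived within a coincidence window has its conduction delay nudged so that the arrival time (presynaptic spike time plus delay) moves toward the postsynaptic spike time. All constants and the pattern itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
n_aff = 50
delays = rng.uniform(1.0, 10.0, n_aff)        # conduction delays (ms)
eta = 0.1                                     # delay learning rate

def update_delays(pre_times, post_time, delays, window=10.0):
    """Illustrative delay-plasticity step: after a postsynaptic spike,
    nudge each afferent's delay so that its spike arrival
    (pre_time + delay) moves toward the postsynaptic spike time, for
    arrivals within a coincidence window."""
    arrival = pre_times + delays
    err = post_time - arrival                 # positive: arrived too early
    mask = np.abs(err) < window
    delays[mask] += eta * err[mask]
    return np.clip(delays, 0.1, 20.0)

# A fixed spatiotemporal pattern: afferent i fires at pattern_times[i],
# and the neuron is assumed to spike at t_post on each presentation.
pattern_times = rng.uniform(0.0, 50.0, n_aff)
for _ in range(200):                          # repeated presentations
    delays = update_delays(pattern_times, 55.0, delays)
# Afferents arriving within the window converge so that their pattern
# spikes arrive synchronously at t = 55 ms.
```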

[1] Fields RD. A new mechanism of nervous system plasticity: activity- dependent myelination. Nat Rev Neurosci. 2015 Dec;16(12):756-67.

[2] Arancibia-Carcamo IL, Ford MC, Cossell L, Ishida K, Tohyama K, Attwell D. Node of Ranvier length as a potential regulator of myelinated axon conduction speed. Elife. 2017 Jan 28;6:e23329.

[3] Wright PW, Wiles J. Learning transmission delays in spiking neural networks: A novel approach to sequence learning based on spike delay variance. In The 2012 International Joint Conference on Neural Networks (IJCNN) 2012 Jun 10 (pp. 1-8). IEEE.

Speakers
avatar for Joshua T Arnold

Joshua T Arnold

PhD Student, ITEE, University of Queensland
Let's talk about computational learning rules, adaptive and plastic systems, and neuronal delays. What's wrong and right with STDP? Which learning mechanisms interact well or poorly with each other? Interested in the role of conduction delays for learning or learning mechanisms generally... Read More →



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 12

7:00pm CEST

P223: Less is More: Wiring-Economical Modular Networks Support Self-Sustained Firing-Economical Neural Avalanches for Efficient Processing

Join Zoom Meeting
https://hkbu.zoom.us/j/8259801008
Meeting ID: 825 980 1008

Shengjun Wang
, Junhao Liang, Changsong Zhou

The complex neural network of the brain is remarkably cost-efficient, yet the basic mechanisms underlying its structure-dynamics economy are not clear. Here we study the intricate interplay between wiring cost and running cost using modular network topology, self-sustained activity and the critical-avalanche dynamical mode in a biologically plausible, excitation-inhibition-balanced spatial neuronal network. When the initially wiring-expensive sparse random network is gradually rewired into a wiring-economical modular network, its self-sustained dynamics changes from an asynchronous spiking state to a critical avalanche state with strongly reduced firing rate and greatly enhanced response sensitivity to transient stimuli. Thus, counterintuitively, the system can achieve much greater functional value at much lower cost in both wiring and firing. The dynamical mechanism is explained as proximity to a Hopf bifurcation of the macroscopic mean field in separated modules as the connection density increases. Our work reveals a generic mechanism underlying the cost-economical structural organization and function-efficient critical dynamics of neural systems, providing insights for brain-inspired efficient computational designs.

This work will be presented by Changsong Zhou.




Speakers
avatar for Changsong Zhou

Changsong Zhou

Professor, Physics, Hong Kong Baptist University
Dr. Changsong Zhou, Professor, Department of Physics, Director of Centre for Nonlinear Studies, Hong Kong Baptist University (HKBU). Dr. Zhou’s research interest is dynamical processes on complex systems. His current emphasis is on analysis and modeling connectivity and activity... Read More →



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 05

7:00pm CEST

P224: Hopf Bifurcation in Mean Field Explains Critical Avalanches in Excitation-Inhibition Balanced Neuronal Networks: A Mechanism for Multiscale Variability
Join Zoom Meeting
https://hkbu.zoom.us/j/8259801008
Meeting ID: 825 980 1008



Junhao Liang
, Tianshou Zhou, Changsong Zhou

Cortical neural circuits display highly irregular spiking in individual neurons but variably sized collective firing, oscillations and critical avalanches at the population level, all of which have functional importance for information processing. Theoretically, the balance of excitatory and inhibitory inputs is thought to account for spiking irregularity, while critical avalanches may originate from an underlying phase transition. However, the theoretical reconciliation of these multilevel dynamic aspects remains an open question. Herein, we show that an excitation-inhibition (E-I) balanced network with synaptic kinetics can maintain irregular spiking dynamics with different levels of synchrony, and that critical avalanches emerge near the synchronous transition point. The mechanism is unveiled by a novel mean-field theory that derives the field equations governing the network's macroscopic dynamics. It reveals that the E-I balanced state of the network, manifesting irregular individual spiking, is characterized by a macroscopic stable state, which can be either a fixed point or a periodic motion; the transition between the two is predicted by a Hopf bifurcation in the macroscopic field. Furthermore, these multiscale variable behaviours can be jointly observed in the spontaneous activity of mouse cortical slices in vitro, indicating the universality of the theoretical prediction. Our theory unveils the mechanism that permits complex neural activities on different spatiotemporal scales to coexist and elucidates a possible origin of the criticality of neural systems. It also provides a theoretical framework for analyzing the macroscopic dynamics of E-I balanced networks and their relationship to the microscopic counterparts, which can be useful for large-scale modeling and computation of cortical dynamics.


This work will be presented by Dr. Junhao Liang.




Speakers
avatar for Changsong Zhou

Changsong Zhou

Professor, Physics, Hong Kong Baptist University
Dr. Changsong Zhou, Professor, Department of Physics, Director of Centre for Nonlinear Studies, Hong Kong Baptist University (HKBU). Dr. Zhou’s research interest is dynamical processes on complex systems. His current emphasis is on analysis and modeling connectivity and activity... Read More →



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 06

7:00pm CEST

P32: Contribution of the Na/K pump to rhythmic bursting.
https://zoom.us/j/98077215494?pwd=dTlhbml4TFd2M2hvWGVFSlV2RXJHQT09
Meeting ID: 980 7721 5494
Password: 916401

Ronald Calabrese
, Ricardo Javier Erazo Toscano, Parker J. Ellingson, Gennady Cymbalyuk

The Na/K pump, often thought of as serving a background function in neuronal activity, contributes an outward current (IPump) that responds to the internal sodium concentration ([Na+]i). In bursting neurons, such as those found in central pattern generators (CPGs) that produce rhythmic movements, one can expect [Na+]i, and thus IPump, to vary throughout the burst cycle [1,2,3]. This variation with electrical activity, together with independence from membrane potential, endows IPump with dynamical properties not available to channel-based currents (e.g. voltage-gated, transmitter-gated, or leak channels). Moreover, in many neurons the pump's activity is regulated by a variety of neuromodulators, further expanding the potential role of IPump in rhythmic bursting activity [4]. Using a combination of experiments, modeling, and hybrid-systems analyses, we have sought to determine how IPump and its modulation influence rhythmic activity in a CPG.
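A common way to write such a pump current is as a saturating (e.g. Hill-type) function of [Na+]i, coupled to simple sodium bookkeeping. The sketch below uses that generic form with illustrative constants; it is not the specific formulation or the fitted values of this study.

```python
def pump_current(na_i, i_max=1.0, na_half=20.0, n_hill=3):
    """Na/K pump current as a saturating (Hill-type) function of the
    intracellular sodium concentration [Na+]_i (mM). All parameter
    values are illustrative placeholders."""
    return i_max / (1.0 + (na_half / na_i) ** n_hill)

def step_na(na_i, i_na, dt, volume_factor=0.01):
    """[Na+]_i bookkeeping: influx from inward Na+ currents (negative
    sign convention) and efflux via the pump (3 Na+ per pump cycle),
    in arbitrary units."""
    return na_i + dt * volume_factor * (-i_na - 3.0 * pump_current(na_i))

na = 15.0
for _ in range(1000):                       # [Na+]_i rises during a burst,
    na = step_na(na, i_na=-5.0, dt=0.1)     # progressively boosting IPump
```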

Teaser: https://youtu.be/vtt3qtsu8hQ


Speakers
avatar for Ronald Calabrese

Ronald Calabrese

Department of Biology, Emory University



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 16

7:00pm CEST

P35: Through synapses to spatial memory maps: a topological model

Join Zoom Meeting
https://us04web.zoom.us/j/8808879442?pwd=ZFdaTjdPNExkVkdqeEFncUx5UVJxdz09

Meeting ID: 880 887 9442
Passcode: XC9V1u

Yuri Dabaghian
Learning and memory are fundamentally collective phenomena, brought into existence by highly organized spiking activity of large ensembles of cells. Yet, linking the characteristics of the individual neurons and synapses to the properties of large-scale cognitive representations remains a challenge: we lack conceptual approaches for connecting the neuronal inputs and outputs to the integrated results at the ensemble level. For example, numerous experiments point out that weakening of the synapses correlates with weakening of memory and learning abilities—but how exactly does it happen? If, e.g., the synaptic strengths decrease on average by 5%, then will the time required to learn a particular navigation task increase by 1%, by 5% or by 50%? How would the changes in learning capacity depend on the original cognitive state? Can an increase in learning time, caused by a synaptic depletion, be compensated by increasing the population of active neurons or by elevating their spiking rates? Answering these questions requires a theoretical framework that connects the individual cell outputs and the large-scale cognitive phenomena that emerge at the ensemble level.

We propose a modeling approach that allows bridging the “semantic gap” between the electrophysiological parameters of neuronal activity and the characteristics of spatial learning, using techniques from algebraic topology. Specifically, we study the influence of synaptic transmission probability and the effects of synaptic plasticity on the hippocampal network's ability to produce a topological cognitive map of the ambient space. We simulate the deterioration of spatial learning capacity as a function of synaptic depletion in the hippocampal network to gain insight into spatial learning deficits (as observed, e.g., in Alzheimer's disease) and to understand why the development of these deficits may correlate with changes in the number of spiking neurons and/or their firing rates, variations in the “brain wave” frequency spectra, etc. The results shed light on the principles of spatial learning in plastic networks and may help our understanding of neurodegenerative conditions.

Speakers
avatar for Yuri Dabaghian

Yuri Dabaghian

Neurology, The University of Texas McGovern Medical School at Houston



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 01

7:00pm CEST

P36: Robust spatial memories encoded by transient neuronal networks
Join Zoom Meeting
https://us04web.zoom.us/j/8808879442?pwd=ZFdaTjdPNExkVkdqeEFncUx5UVJxdz09

Meeting ID: 880 887 9442
Passcode: XC9V1u

Yuri Dabaghian
The principal cells in the mammalian hippocampus encode an internalized representation of the environment—the hippocampal cognitive map—which underlies spatial memory and spatial awareness. However, the synaptic architecture of the hippocampal network is dynamic: it contains a transient population of "cell assemblies"—functional units of hippocampal computation—that emerge among groups of coactive neurons and may disband due to reduction or cessation of spiking activity, then reappear, then disband again, and so on. Electrophysiological studies in rats and mice suggest that the characteristic lifetimes of typical hippocampal cell assemblies range from tens of milliseconds to minutes. In contrast, cognitive representations sustained by the hippocampal network can last in rodents for months, which raises a principal question: how can a stable large-scale representation of space emerge from a rapidly rewiring neuronal stratum? We propose a computational approach to answering this question based on algebraic topology techniques and ideas. By simulating place cell spiking activity during the rat's exploratory movements through different environments and testing the stability of the resulting large-scale neuronal maps, we find that networks with "flickering" architectures can reliably capture the topology of the ambient spaces. Moreover, the model suggests that the information is processed at three principal timescales, which roughly correspond to short-term, intermediate-term, and long-term memories. The rapid rewiring of local network connections occurs at the fastest timescale. The timescale at which the large-scale structures defining the shape of the cognitive map may fluctuate is about an order of magnitude slower than the timescale of information processing at the synaptic level. Lastly, an emerging stable topological base provides lasting, qualitative information about the environment, which remains robust despite the ongoing transience of the local connections.

Speakers
avatar for Yuri Dabaghian

Yuri Dabaghian

Neurology, The University of Texas McGovern Medical School at Houston



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 01

7:00pm CEST

P37: A Realistic Spatial Model of the Complete Synaptic Vesicle Cycle
Andrew Gallimore, Iain Hepburn, Erik De Schutter

The release of neurotransmitters from synaptic vesicles is the fundamental mechanism of information transmission between neurons in the brain. The entire synaptic vesicle cycle involves a highly complex interplay of proteins that direct vesicle docking at the active zone, the detection of intracellular calcium levels, fusion with the presynaptic membrane, and the subsequent retrieval of the vesicle protein material for recycling [1]. Despite its central importance in many aspects of neuronal function, and even though computational models of subcellular neuronal processes are becoming increasingly important in neuroscience research, realistic models of the synaptic vesicle cycle are almost non-existent, largely because the modeling tools needed for detailed spatial modeling of vesicles have not been available.

Extending the STEPS simulator[2], we have pioneered spherical ‘vesicle’ objects that occupy a unique excluded volume and sweep a path through the tetrahedral mesh as they diffuse through the cytosol. Our vesicles incorporate endo- and exocytosis, fusion with and budding from intracellular membranes, neurotransmitter packing, as well as interactions between vesicular proteins and cytosolic and plasma membrane proteins. This allows us to model all key aspects of the synaptic vesicle cycle, including docking, priming, calcium detection and vesicle fusion, as well as dynamin-mediated vesicle retrieval and recycling.

Using quantitative measurements of protein copy numbers [3], membrane and cytosolic diffusion rates, protein-protein interactions, and an EM-derived spatial model of a hippocampal pyramidal neuron, we used this technology to construct the complete synaptic vesicle cycle at the Schaffer collateral–CA1 synapse at an unprecedented level of spatial and biochemical detail (Fig. 1). We envisage that this new modeling technology will open up pioneering research into all aspects of neural function in which synaptic transmission plays a role.
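To convey the excluded-volume idea in miniature (this is an illustrative toy, not the STEPS implementation), a vesicle can be treated as a sphere whose random-walk moves are rejected if they would overlap another vesicle:

```python
import numpy as np

rng = np.random.default_rng(0)
R = 0.02                                    # vesicle radius (box units), toy value
pos = rng.uniform(R, 1 - R, size=(50, 3))   # 50 vesicles in a unit box

def step(pos, D=0.05, dt=1e-4):
    for i in range(len(pos)):
        trial = pos[i] + rng.normal(0.0, np.sqrt(2 * D * dt), 3)
        others = np.delete(pos, i, axis=0)
        inside = np.all((trial > R) & (trial < 1 - R))
        if inside and np.linalg.norm(others - trial, axis=1).min() > 2 * R:
            pos[i] = trial                  # accept only non-overlapping moves
    return pos

for _ in range(100):
    pos = step(pos)
```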

1. Sudhof TC. The molecular machinery of neurotransmitter release (Nobel lecture). Angew. Chem. Int. Ed. 2014, 53, 12696-12717.

2. Hepburn I, Chen W, Wils S, De Schutter E. STEPS: efficient simulation of stochastic reaction-diffusion models in realistic morphologies. BMC Syst. Biol. 2012, 6, 36.

3. Wilhelm BG, Mandad S, Truckenbrodt S, et al. Composition of isolated synaptic boutons reveals the amounts of vesicle trafficking proteins. Science. 2014, 344(6187), 1023-1028.

Speakers
AG

Andrew Gallimore

Computational Neuroscience Unit, Okinawa Institute of Science and Technology


Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 03

7:00pm CEST

P38: Priors based on abstract rules modulate the encoding of pure tones in the subcortical auditory pathway
Meeting room: https://tu-dresden.zoom.us/j/97802507999?pwd=QTFUNnhXRmZwbnZhN2U3R1lhWG9EUT09 (room 97802507999, psw: cns-2020)

Alejandro Tabas, Glad Mihai, Stefan Kiebel, Robert Trampel, Katharina von Kriegstein
Sensory pathways efficiently transmit information by adapting the neural responses to the local statistics of the sensory input. The predictive coding framework suggests that sensory neurons constantly match the incoming stimuli against an internal prediction derived from a generative model of the sensory input. Although predictive coding is generally accepted to underlie cortical sensory processing, the role of predictability in subcortical sensory coding is still unclear. Several studies have shown that single neurons and neuronal ensembles of the subcortical sensory pathway nuclei exhibit stimulus-specific adaptation (SSA), a phenomenon where neurons adapt to frequently occurring stimuli (standards) yet show restored responses to a stimulus whose characteristics deviate from the standard (deviant). Although neurons showing SSA are often interpreted as encoding prediction error, computational models to date have successfully explained SSA in terms of local network effects based on synaptic fatigue.

Here, we first introduce a novel experimental paradigm where abstract rules are used to manipulate predictability. Nineteen human participants listened to sequences of pure tones consisting of seven standards and one deviant while we recorded mesoscopic responses in auditory thalamus and auditory midbrain using 7-Tesla functional MRI. In each sequence, the deviant was constrained to occur once and only once, and always in location 4, 5, or 6. Although the three locations were equiprobable at the beginning of the trial, the conditional probability of hearing a deviant in location n after hearing n-1 standards is 1/3, 1/2, and 1, for deviant locations 4, 5, and 6, respectively. This paradigm yields different outcomes for habituation and predictive coding: if adaptation is driven by local habituation only, the three deviants should elicit similar neuronal responses; if adaptation is instead driven by predictive coding, the neuronal responses to each deviant should depend on its abstract predictability. Our data showed that the responses to the deviants were strongly driven by abstract expectations, indicating that predictive coding is the main mechanism underlying mesoscopic SSA in the subcortical pathway. These results are robust even at the single-subject level.

Next, we developed a new model of pitch encoding for pure tones following the main directives of predictive coding. The model comprises two layers whose dynamics reflect two different levels of abstraction. The lower layer receives its inputs from the auditory nerve and makes use of the finite bandwidth of the peripheral filters to decode pitch quickly and robustly. The second layer holds a sparse representation that integrates the activity in the first layer only once the pitch decision has been made. Top-down afferents from the upper layer reinforce the pitch decision and allow the inclusion of priors that facilitate the decoding of predictable tones. Without the inclusion of priors, the model explains the key elements of SSA in animal recordings at the single-neuron level, as well as the main phenomenology of its mesoscopic representation. The inclusion of priors reflecting the abstract rules of our paradigm facilitates the decoding of tones according to their predictability, effectively modulating the responses at the mesoscopic level. This modulation affects the mesoscopic fields generated during pitch encoding, fully explaining our experimental data.
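For intuition, the abstract rule can be written down in a few lines; this is just the conditional-probability bookkeeping described above, not analysis code from the study:

```python
from fractions import Fraction

positions = [4, 5, 6]          # possible deviant locations, equiprobable a priori
for n in positions:
    # if no deviant occurred before position n, it must lie in {n, ..., 6}
    print(n, Fraction(1, len([p for p in positions if p >= n])))
# -> 4 1/3, 5 1/2, 6 1
```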

Speakers
avatar for Alejandro Tabas

Alejandro Tabas

Postdoc, Technische Universität Dresden



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 03

7:00pm CEST

P39: Simulating cell-to-cell interactions during cerebellar development
Join Zoom Meeting:
https://oist.zoom.us/j/98913735621?pwd=SnRDcS9QZzVCTWZFQlpodGtyMG8yZz09
Password: 856877

If you have further comments and questions about my poster, please contact me: mizuki.kato[at]oist.jp

Mizuki Kato, Erik De Schutter

The cerebellum is involved in both motor and non-motor functions in the brain. Any deficit during its development has been suggested to trigger ataxia as well as various psychiatric disorders.

During the development of both human and mouse cerebella, precursors of one of the main excitatory neuron types, granule cells, first accumulate in the external granule layer on the surface and subsequently migrate down to the bottom of the cerebellar cortex. In addition to this massive soma migration, the granule cell precursors also extend their axons along the migratory paths; these axons further branch into parallel fibers, making the environment even more crowded. Although palisade-like Bergmann glia physically guide granule cells during the migration, the mechanisms by which these two cell types interact to manage migration through such a disordered environment are still unclear.

Rodent cerebella have been widely used as subjects in experimental studies and have provided detailed pictures of granule cells and Bergmann glia. However, technical limitations still hinder observation of cerebellar development both at the population level and in a continuous manner. Building a computational model through a reverse-engineering process that integrates available biological observations will be essential for pinpointing differences in developmental dynamics between the normal and abnormal cerebellum.

Most computational models of neuronal development have focused on intracellular factors of single cell types. Although models simulating limited environmental factors exist, models of cell-cell interactions during neuronal development are rare. We therefore used a new computational framework, NeuroDevSim, to simulate populations of granule cells and Bergmann glia during cerebellar development.

NeuroDevSim evolved from NeuroMaC [1] and is so far the only active software that can simultaneously simulate the developmental dynamics of different types of neurons at population scale.

The goal structure of the simulation with NeuroDevSim comprises 3,000 granule cells and 200 Bergmann glia in a regular cube of 1x10^6 µm^3, a volume chosen to approximate a cube of mouse cerebellar cortex. Twenty-six Purkinje cell somas are also introduced as interfering spherical objects; their dendritic development will be included in the future. At the current stage of the simulation, reduced systems are used, aiming to direct the traffic of granule cell somas and to navigate their axonal growth.

The resulting model will enable visualization of the massive migration dynamics of cerebellar granule cells with growing parallel fibers, and of their phenomenological interactions with Bergmann glia. This model will provide new insight into the developmental dynamics of the cerebellar cortex.
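As a toy illustration of the "directed traffic" idea (not NeuroDevSim's API), a growth cone can be modeled as a biased random walk toward a goal that avoids spherical obstacles standing in for Purkinje somas; all names and numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def grow(tip, goal, obstacles, radius=2.0, step=1.0, n_steps=300):
    path = [tip.copy()]
    for _ in range(n_steps):
        pull = goal - tip
        heading = pull / np.linalg.norm(pull) + 0.4 * rng.normal(size=3)
        trial = tip + step * heading / np.linalg.norm(heading)
        if all(np.linalg.norm(trial - o) > radius for o in obstacles):
            tip = trial                      # move only into free space
            path.append(tip.copy())
    return np.array(path)

soma_positions = rng.uniform(0, 50, size=(26, 3))      # 26 obstacle somas
axon = grow(np.array([25.0, 25.0, 50.0]),              # start at the surface
            np.array([25.0, 25.0, 0.0]),               # goal at the bottom
            soma_positions)
```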

Reference

1. Torben-Nielsen, B. and E. De Schutter (2014). "Context-aware modeling of neuronal morphologies." Front Neuroanat 8: 92.

Speakers
MK

Mizuki Kato

PhD Student, Computational Neuroscience Unit, Okinawa Institute of Science and Technology



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 04

7:00pm CEST

P40: Modeling multi-state molecules with a pythonic STEPS interface
Jules Lallouette, Erik De Schutter

Molecules involved in biological signaling pathways can, in some cases, exist in a very high number of different functional states. Well-studied examples include the Ca2+/calmodulin dependent protein kinase II (CaMKII), or receptors from the ErbB family. Through a combination of binding sites, phosphorylation sites, and polymerization, these molecules can form complexes that reach exponentially increasing numbers of distinct states. This phenomenon of combinatorial explosion is a common obstacle when trying to establish detailed models of signaling pathways.

Classical approaches to the stochastic simulation of chemical reactions require the explicit characterization of all reacting species and all associated reactions. This approach is well suited to population-based methods in which there is a relatively small number of different molecules that can be present at relatively high concentrations. However, since each state of a multi-state complex would have to be modeled as a distinct species, the combinatorial explosion mentioned earlier makes these approaches inapplicable.

Two separate problems need to be tackled: the "specification problem", which requires a higher level of abstraction in the definition of complexes and reactions; and the "computation problem", which requires efficient methods to simulate the time evolution of reactions involving multi-state complexes. Rule-based modeling (RBM) [2] tackles the former by allowing modelers to write "template" reactions that only contain the parts of the complexes actually involved in the reaction. Network-free methods together with particle-based methods [4] usually tackle the latter by only considering the states and reactions that are accessible from the current complex states, thus avoiding the computation of the full reaction network.

STEPS is a spatial stochastic reaction-diffusion simulation software that implements population-based methods to simulate reaction-diffusion processes on realistic tetrahedral meshes [3]. In an effort to tackle the "specification problem" in STEPS, we present in this poster a novel, more pythonic interface to STEPS that allows intuitive declaration of both classical reactions and reactions involving multi-state complexes. Significant emphasis was put on simplifying model declaration, data access, and data saving during simulations.

To specifically tackle the "computation problem" in STEPS, we present a hybrid population/particle-based method to simulate reactions involving multi-state complexes. By preventing the formation of arbitrarily structured macromolecules, we lower the computational cost of the pattern matching step necessary to identify potential reactants [1]. This diminished computational cost allows us to simulate larger spatial systems. We discuss these performance improvements and present examples of stochastic spatial simulations involving 12-subunit CaMKII complexes, which would previously have been intractable in STEPS.
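To see the scale of the problem, a one-line back-of-the-envelope count (the per-subunit state count is an assumption for illustration):

```python
# If each of the 12 CaMKII subunits has, say, one binding site (bound/unbound)
# and one phosphorylation site (on/off), a subunit has 4 local states, and a
# flat reaction network needs every holoenzyme state as a separate species.
print(4 ** 12)  # 16777216 states, before exploiting ring symmetry
```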

References

1. M. L. Blinov, et al. Graph theory for rule-based modeling of biochemical networks. Lect. Notes Comput. Sc., 2006.

2. L. A. Chylek, et al. Innovations of the rule-based modeling approach. Systems Biol., 2013.

3. I. Hepburn, et al. STEPS: efficient simulation of stochastic reaction-diffusion models in realistic morphologies. BMC Syst. Biol., 2012.

4. J. J. Tapia, et al. MCell-R: A particle-resolution network-free spatial modeling framework. In Modeling Biomolecular Site Dynamics, Springer, 2019.

Speakers
JL

Jules Lallouette

Computational Neuroscience Unit, Okinawa Institute of Science and Technology


Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 07

7:00pm CEST

P41: Reaction-diffusion simulations of astrocytic Ca2+ signals in realistic geometries
Audrey Denizot, Corrado Calì, Weiliang Chen, Iain Hepburn, Hugues Berry, Erik De Schutter

Here is the link to the virtual room of the presentation: https://oist.zoom.us/j/97032096119?pwd=ZnRoeWkvNnZGaER6M3dqMzlJZmVBdz09 Password: 429623

2-minute teaser: https://youtu.be/KawxI1RiMrM

Astrocytes, glial cells of the central nervous system, display a striking diversity of Ca2+ signals in response to neuronal activity. 80% of those signals take place in cellular ramifications that are too fine to be resolved by conventional light microscopy [1], often in apposition to synapses (perisynaptic astrocytic processes, PAPs). Understanding Ca2+ signaling in PAPs, where astrocytes potentially regulate neuronal information processing [2], is crucial. At this spatial scale, Ca2+ signals are not distributed uniformly, being preferentially located in so-called Ca2+ hotspots [3], suggesting the existence of subcellular spatial domains. However, because of the spatial scale at stake, little is currently known about the mechanisms that regulate Ca2+ signaling in fine processes. Here, we investigate the geometry of the endoplasmic reticulum (ER), the predominant astrocytic Ca2+ store, using electron microscopy. Contrary to previous reports [4], we detect ER in PAPs, which can be as close as ~60 nm to the nearest postsynaptic density. We use computational modeling to investigate the impact of the observed cellular and ER geometries on Ca2+ signaling. Simulations using the stochastic voxel-based model from Denizot et al. [5], both in simplified and in realistic 3D geometries, reproduce spontaneous astrocytic microdomain Ca2+ transients measured experimentally. In our simulations, the effect of the clustering of IP3R channels observed in 2 spatial dimensions [5] still holds in a simple cylinder geometry but no longer holds in complex realistic geometries. We propose that those discrepancies might result from the geometry of the ER and that, in 3 spatial dimensions, the effects of molecular distributions (such as IP3R clustering) are particularly enhanced at ER-plasma membrane contact sites. Our results suggest that predictions from simulations in 1D, 2D, or simplified 3D geometries should be interpreted cautiously. Overall, this work provides a better understanding of IP3R-dependent Ca2+ signals in fine astrocytic processes and, more generally, in subcellular compartments, a prerequisite for understanding the dynamics of Ca2+ hotspots, which are deemed essential for local intercellular communication.
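For intuition about the class of model used, here is a minimal Gillespie (SSA) sketch of stochastic Ca2+ release through IP3R-like channels in a single voxel; the rate constants are illustrative assumptions, not the published model parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
k_open, k_close, k_flux, k_pump = 0.5, 2.0, 40.0, 5.0   # toy rate constants
n_closed, n_open, ca = 20, 0, 0
t, t_end, trace = 0.0, 10.0, []
while t < t_end:
    rates = np.array([k_open * n_closed,  # channel opening
                      k_close * n_open,   # channel closing
                      k_flux * n_open,    # Ca2+ entry through open channels
                      k_pump * ca])       # Ca2+ removal
    total = rates.sum()
    t += rng.exponential(1.0 / total)     # time to next reaction
    event = rng.choice(4, p=rates / total)
    if event == 0:   n_closed -= 1; n_open += 1
    elif event == 1: n_closed += 1; n_open -= 1
    elif event == 2: ca += 1
    else:            ca -= 1
    trace.append((t, ca))
```

Clustering channels changes how often several open simultaneously, which in a voxel-based spatial version of this scheme shapes the hotspot statistics discussed above.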

References
[1] Bindocci, E., Savtchouk, I., Liaudet, N., et al. "Three-dimensional Ca2+ imaging advances understanding of astrocyte biology," Science, May 2017, vol. 356, no. 6339, p. eaai8185.
[2] Savtchouk, I., Volterra, A. "Gliotransmission: Beyond Black-and-White," J. Neurosci., Jan. 2018, vol. 38, no. 1, pp. 14–25.
[3] Thillaiappan, N. B., Chavda, A., Tovey, S., et al. “Ca2+ signals initiate at immobile IP3 receptors adjacent to ER-plasma membrane junctions,” Nat. Commun., Dec. 2017 , vol. 8.
[4] Patrushev, I., Gavrilov, N., Turlapov, V., et al. “Subcellular location of astrocytic calcium stores favors extrasynaptic neuron-astrocyte communication,” Cell Calcium, Nov. 2013, vol. 54, no. 5, pp. 343–349.
[5] Denizot, A., Arizono, M., Nägerl, U. V., et al. “Simulation of calcium signaling in fine astrocytic processes: Effect of spatial properties on spontaneous activity,” PLOS Comput. Biol., Aug. 2019, vol. 15, no. 8, p. e1006795.


Speakers
avatar for Audrey Denizot

Audrey Denizot

Junior Researcher, INRIA



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 09

7:00pm CEST

P42: A nanometer range reconstruction of the Purkinje cell dendritic tree for computational models
Mykola Medvidov, Weiliang Chen, Christin Puthur, Erik De Schutter

Purkinje neurons are used extensively in computational neuroscience [1]. However, despite extensive knowledge of Purkinje cell morphology and ultrastructure, the complete dendritic tree of a Purkinje cell, like that of other neuron types, has never been reconstructed at nanometer resolution because of the cell's size and complexity. At the same time, using real Purkinje cell dendritic tree morphology may be very important for computational models. Given the development of new instruments and imaging techniques that now allow reconstruction of large volumes of neuronal tissue, the main goal of our project is to reconstruct the dendritic tree of a Purkinje cell with all its dendritic spines and synapses.

Serial block-face scanning electron microscopy (SBF) is widely used to examine large volumes of neuronal tissue at nanometer resolution [2]. To obtain volume data, perfused mouse brains were processed for SBF imaging using OTO staining techniques, and the best-quality cerebellum slice was imaged on an FEI Teneo VS electron microscope at a voxel resolution of 8x8x60 nm. An imaged volume of approximately 2.2 terapixels was processed and aligned with ImageJ and Adobe Photoshop. To reconstruct the Purkinje cell dendritic tree, the imaged volume was first analyzed to locate the most suitable complete cell inside the imaged volume. Second, the volume containing the cell was segmented with Ilastik (https://www.ilastik.org) and a TensorFlow deep learning network (https://github.com/tensorflow). The superpixels were fused with custom-made software to generate a dendritic tree represented by 3D voxels. Next, a 3D surface mesh was generated from the voxel array using the marching cubes algorithm (https://github.com/ilastik/marching_cubes), and the resulting mesh was processed with MeshLab to generate a final surface mesh. Finally, a tetrahedral volume mesh was generated with the TetWild software (https://github.com/Yixin-Hu/TetWild). The resulting tetrahedral mesh of the complete Purkinje cell dendritic tree, including the cell body and initial axonal segment, will be used to run large-scale stochastic models using the parallel STochastic Engine for Pathway Simulation (STEPS) [3], http://steps.sourceforge.net.
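As a sketch of the surface-extraction step, here is how marching cubes can be applied to a toy binary segmentation with the anisotropic voxel size above; scikit-image is assumed available, and the sphere stands in for the real Ilastik output:

```python
import numpy as np
from skimage import measure  # scikit-image; the volume below is a toy stand-in

# Toy binary segmentation volume (a sphere) in place of the real data.
z, y, x = np.mgrid[:64, :64, :64]
mask = ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2) < 20 ** 2

# Surface mesh via marching cubes with the anisotropic voxel size (60x8x8 nm);
# the mesh would then be cleaned in MeshLab and tetrahedralized with TetWild.
verts, faces, normals, values = measure.marching_cubes(
    mask.astype(np.float32), level=0.5, spacing=(60e-9, 8e-9, 8e-9))
```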

References

1. Zang Y, Dieudonne S, De Schutter E. Voltage- and Branch-Specific Climbing Fiber Responses in Purkinje Cells. _Cell Reports_ 2018, 24, 1536–1549.
2. Titze B, Genoud C. Volume scanning electron microscopy for imaging biological ultrastructure. _Biology of the Cell_ 2016, 108, 307-323.
3. Chen W, De Schutter E. Parallel STEPS: Large Scale Stochastic Spatial Reaction-Diffusion Simulation with High Performance Computers. _Frontiers in Neuroinformatics_ 2017, 11, 1-15.

Speakers
MM

Mykola Medvidov

Computational Neuroscience Unit, Okinawa Institute of Science and Technology


Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 17

7:00pm CEST

P47: Brain Dynamics and Structure-Function Relationships via Spectral Factorization and the Transfer Function
Video Conference link https://meet.google.com/ykq-pjkm-yeg

James Henderson, Peter Robinson, Mukesh Dhamala
The relationships between brain activity and structure are of central importance to understanding how the brain carries out its functions and to interrelating and predicting different kinds of experimental measurements. The aim of this work is to describe the transfer function and its relationships to many existing forms of brain analysis, and then to describe methods for obtaining the transfer function, with emphasis on spectral factorization using the Wilson algorithm [1,2] applied to correlations of time series measurements. The transfer function of a system contains complete information about its linear properties, responses, and dynamics. This includes relationships to impulse responses, spectra, and correlations. In the case of brain dynamics, it has been shown that the transfer function is closely related to brain connectivity, including time delays, and we note that linear coupling is widely used to model the spatial interactions of locally nonlinear dynamics. It is shown how the brain's linear transfer function provides a means of systematically analyzing brain connectivity and dynamics, providing a robust way of inferring connectivity, and activity measures such as spectra, evoked responses, coherence, and causality, all of which are widely used in brain monitoring. Additionally, the eigenfunctions of the transfer function are natural modes of the system dynamics and thus underlie spatial patterns of excitation in the cortex. The transfer function is therefore a suitable object for describing and analyzing the structure-function relationship in brains. The Wilson spectral factorization algorithm is outlined and used to efficiently obtain linear transfer functions from experimental two-point correlation functions. Criteria for time series measurements are described for the algorithm to accurately reconstruct the transfer function, including comparing the algorithm's theoretical computational complexity with empirical runtimes for systems of similar size to current experiments. The algorithm is applied to a series of examples of increasing complexity and similarity to real brain structure in order to test and verify that it is free of numerical errors and instabilities (modifying the method where required to ensure this). The results of applying the algorithm to a 1D test case with asymmetry and time delays are shown in Fig. 1. The method is tested on increasingly realistic structures using neural field theory, introducing time delays, asymmetry, dimensionality, and complex network connectivity to verify the algorithm's suitability for use on experimental data.

Acknowledgements
This work was supported by the Australian Research Council under Center of Excellence grant CE140100007 and Laureate Fellowship grant FL140100025.

References
1. Dhamala M, Rangarajan G, Ding M. Estimating Granger causality from Fourier and wavelet transforms of time series data. Phys. Rev. Lett. 2008, 100:018701.
2. Wilson GT. The Factorization of Matrical Spectral Densities. SIAM J. Appl. Math. 1972, 23:420.
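As a flavor of the input stage of the pipeline described above, here is a minimal Python sketch (surrogate data; scipy assumed available) of estimating the cross-spectral density matrix that the Wilson factorization consumes:

```python
import numpy as np
from scipy.signal import csd, welch

# Estimate the cross-spectral density matrix S(f) from two measured channels
# (surrogate noise here). Wilson's algorithm (not shown) then factorizes
# S(f) = H(f) Sigma H(f)^dagger, recovering the minimum-phase transfer function H(f).
rng = np.random.default_rng(3)
x = rng.normal(size=(2, 20_000))
x[1, 1:] += 0.5 * x[0, :-1]                      # delayed coupling 0 -> 1
fs = 1000.0
f, S00 = welch(x[0], fs=fs, nperseg=1024)
f, S11 = welch(x[1], fs=fs, nperseg=1024)
f, S01 = csd(x[0], x[1], fs=fs, nperseg=1024)
S = np.array([[S00, S01], [np.conj(S01), S11]])  # 2x2 spectral matrix per frequency
```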

Speakers
avatar for James Henderson

James Henderson

Postdoc, School of Physics, The University of Sydney
I'm interested in the intersection of neuroscience and A.I., so I enjoy discussing topics like plasticity, learning, deep learning and neural dynamics.



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 12

7:00pm CEST

P48: Large-scale spiking network models of primate cortex as research platforms
Google Meet link: meet.google.com/jip-hcfp-fhb

Sacha van Albada, Aitor Morales-Gregorio, Alexander van Meegen, Jari Pronold, Agnes Korcsak-Gorzo, Hannah Vollenbröker, Rembrandt Bakker, Stine Brekke Vennemo, Håkon Mørk, Jasper Albers, Hans Ekkehard Plesser, Markus Diesmann
Despite the wide variety of available models of the cerebral cortex, a unified understanding of cortical structure, dynamics, and function at different scales is still missing. Key to progress in this endeavor will be to bring together the different accounts into unified models. We aim to provide a stepping stone in this direction by developing large-scale spiking neuronal network models of primate cortex that reproduce a combination of microscopic and macroscopic findings on cortical structure and dynamics. A first model describes resting-state activity in all vision-related areas in one hemisphere of macaque cortex [1, 2], representing each of the 32 areas with a 1 mm² microcircuit [3] with the full density of neurons and synapses. Comprising about 4 million leaky integrate-and-fire neurons and 24 billion synapses, it is simulated on the Jülich supercomputers. The model has recently been ported to NEST 3, greatly reducing the construction time. The inter-area connectivity is based on axonal tracing [4] and predictive connectomics [5]. Findings reproduced include the spectrum and rate distribution of V1 spiking activity [6], feedback propagation of activity across the visual hierarchy [7], and a pattern of functional connectivity between areas as measured with fMRI [8]. The model is available open-source and uses the tool Snakemake [9] for formalizing the workflow from the experimental data to simulation, analysis, and visualization. It serves as a platform for further developments, including an extension with motor areas [10] for studying visuo-motor interactions, incorporating function using a learning-to-learn framework [11], and creating an analogous model of human cortex [12]. It is our hope that this work will contribute to an increasingly unified understanding of cortical structure, dynamics, and function.
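For readers unfamiliar with the simulator, here is a deliberately tiny NEST 3 sketch of the idioms such models are built on; this stand-in network is orders of magnitude smaller than the model itself, and all numbers are illustrative:

```python
import nest  # NEST 3.x

nest.ResetKernel()
exc = nest.Create("iaf_psc_exp", 800)                 # excitatory population
inh = nest.Create("iaf_psc_exp", 200)                 # inhibitory population
drive = nest.Create("poisson_generator", params={"rate": 8000.0})

nest.Connect(drive, exc + inh, syn_spec={"weight": 5.0})
nest.Connect(exc, exc + inh, {"rule": "fixed_indegree", "indegree": 80},
             {"weight": 1.0})
nest.Connect(inh, exc + inh, {"rule": "fixed_indegree", "indegree": 20},
             {"weight": -4.0})

rec = nest.Create("spike_recorder")
nest.Connect(exc, rec)
nest.Simulate(1000.0)  # ms
```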

Acknowledgments

EU Grants 269921 (BrainScaleS), 604102 (HBP SGA1), 785907 (HBP SGA2), HBP SGA ICEI 800858; VSR computation time grant JINB33; DFG SPP 2041.

References
1. Schmidt M, Bakker R et al. Brain Struct Func 2017, 223, 1409–1435
2. Schmidt M, Bakker R et al. PLOS CB 2018, 14, e1006359
3. Potjans TC, Diesmann M. Cereb Cortex 2014, 24, 785 –806
4. Bakker R, Wachtler T et al. Front Neuroinform 2012, 6, 30
5. Hilgetag CC, Beul SF et al. Netw Neurosci 2019, 3, 905–923
6. Chu CCJ, Chien PF et al. Vision Res 2014, 96, 113–132
7. Nir Y, Staba RJ et al. Neuron 2011, 70, 153–169
8. Babapoor-Farrokhran S, Hutchison RM et al. J Neurophysiol 2013, 109, 2560–2570
9. Köster J, Rahmann S. Bioinformatics 2012, 28, 2520–2522
10. Morales-Gregorio A, Dabrowska P et al. Bernstein Conf 2019
11. Korcsak-Gorzo A, van Meegen A et al. Bernstein Conf 2019
12. Pronold J, van Meegen A et al. NEST Conf 2019

Speakers
avatar for Sacha van Albada

Sacha van Albada

Institute of Neuroscience and Medicine (INM-6, INM-10) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 15

7:00pm CEST

P5: Mathematical modeling of the circadian clock in spiders
Virtual Room: https://meet.google.com/zot-ggko-ezm

Contributors: Daniel Robb, Lu Yang, Nadia Ayoub, Darrell Moore, Thomas Jones, Natalia Toporikova

We consider the unusual properties of the spider circadian clock. Using a plausible model of the cellular clock, we make experimental predictions that can help distinguish between two possible mechanisms underlying these unusual properties. The mechanisms are a 'weak oscillator', i.e., an oscillator easily affected by the typical neural input generated by light, and a 'strong stimulus', i.e., an amplified neural input that can affect an oscillator of normal strength. Though both mechanisms are possible, an initial experimental result points toward the weak oscillator scenario.
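To make the two scenarios concrete, here is a toy limit-cycle ("Poincare") clock with a single light pulse, where the weak oscillator corresponds to a small relaxation rate (a shallow limit cycle) and the strong stimulus to a large input gain; all values are illustrative assumptions, not the study's model:

```python
import numpy as np

def clock_response(relax=0.1, gain=1.0, tau=24.0, t_end=120.0, dt=0.01):
    x, y, out = 1.0, 0.0, []
    w = 2.0 * np.pi / tau                            # ~24 h intrinsic period
    for i in range(int(t_end / dt)):
        t = i * dt
        light = gain if 36.0 < t < 37.0 else 0.0     # 1 h light pulse on day 2
        r = np.hypot(x, y)
        dx = relax * x * (1.0 - r) - w * y + light   # relaxation toward r = 1
        dy = relax * y * (1.0 - r) + w * x
        x, y = x + dt * dx, y + dt * dy
        out.append(x)
    return np.array(out)

weak = clock_response(relax=0.02, gain=1.0)    # weak oscillator scenario
strong = clock_response(relax=0.5, gain=10.0)  # strong stimulus scenario
```

Both scenarios can produce large phase shifts after the pulse, which is why an experiment is needed to tell them apart.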

Speakers
avatar for Daniel Robb

Daniel Robb

Associate Professor, Department of Mathematics, Computer Science and Physics, Roanoke College



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 09

7:00pm CEST

P64: Gamma oscillations organized as localized burst patterns with anomalous propagation dynamics in primate cerebral cortex
Virtual room: meet.google.com/ykh-nfmk-gpa


Xian Long, Yuxi Liu, Paul R. Martin, Samuel G. Solomon, Pulin Gong

Gamma oscillations (30-80 Hz) occur in transient bursts with varying frequencies and durations. These non-stationary gamma bursts have been widely observed in many brain areas but have rarely been quantitatively characterized, and the mechanisms that produce them are not understood. In this study we investigate the spatiotemporal properties of gamma bursts through a combined empirical and modeling investigation. Our array recordings of local field potentials in visual cortical area MT of the marmoset monkey reveal that gamma bursts form localized patterns with complex propagation dynamics. We also show that the propagation of these patterns is characterized by anomalous dynamics that are fundamentally different from the regular or Brownian motions conventionally assumed. We show that all aspects of these anomalous dynamics can be quantitatively captured by a spatially extended, biophysically realistic circuit model. Circuit dissection of the model shows further that the anomalous dynamics rely on the intrinsic metastability near the critical transition between different circuit states (i.e., between the synchronous and regular propagating wave states). Our results thus reveal novel spatiotemporal organization properties of gamma bursts and explain them in terms of underlying circuit mechanisms, providing new computational functions for gamma oscillations.
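For concreteness, anomalous propagation can be quantified by the exponent of the mean-squared displacement (MSD) of burst centroids; a minimal sketch of that estimate (illustrative, not the analysis code used in the study):

```python
import numpy as np

def msd_exponent(traj, max_lag=50):
    """Estimate alpha in MSD(t) ~ t^alpha from a 2D burst-centroid trajectory
    (array of shape (T, 2)). alpha = 1 is Brownian motion; alpha != 1 marks
    anomalous propagation of the kind described above."""
    lags = np.arange(1, max_lag)
    msd = np.array([np.mean(np.sum((traj[l:] - traj[:-l]) ** 2, axis=1))
                    for l in lags])
    alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return alpha
```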


Speakers
avatar for Xian Long

Xian Long

PhD student, The school of physics, University of Sydney, Australia
Specialised in data analysis and interested in spiking neural network.


poster pdf

Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 18

7:00pm CEST

P65: Dynamical circuit mechanisms of attentional sampling
Guozhang Chen, Pulin Gong

Selective attention can sift out particular objects or features from the plethora of stimuli. Such preferential processing of attention is often compared to a spotlight pausing to illuminate relevant targets in visual fields in a stimulus-driven way (bottom-up attention) and/or task-driven way (top-down attention). Recent studies have revealed that bottom-up distributed attention involving multiple objects is not a sustained spotlight, but samples the visual environment in a fundamentally dynamical manner with theta-rhythmic cycles, with each sampling cycle being implemented through gamma oscillations. However, the fundamental questions regarding the dynamical nature and the circuit mechanism underlying such dynamical attentional sampling remain largely unknown. To address these questions, in this study we investigate a biophysically plausible cortical circuit model of spiking neurons and find that in the working regime of the model (i.e. the regime near the critical transition between the asynchronous and propagating wave states), the localized activity pattern emerging from the circuit exhibits rich spatiotemporal dynamics. We illustrate that the nonequilibrium nature of the localized pattern enables the circuit to dynamically shift to different salient external inputs, without introducing additional neural mechanisms such as inhibition of return as in the conventional winner-take-all models of attention. We elucidate that the dynamical shifting process of the activity pattern provides a mechanistic account of key neurophysiological and behavioral findings on attention, including theta oscillations, theta-gamma phase-amplitude coupling, and vigorous-faint spiking fluctuations. Furthermore, by using the saliency maps of natural stimuli, we demonstrate that the nonequilibrium activity pattern dynamics can better explain the psychophysical findings regarding attention maps and attention sampling paths than the conventional models, providing a profound computational advantage for efficiently sampling external environments. Our work thus establishes a novel circuit mechanism by which non-equilibrium, fluctuating pattern dynamics near the critical transition of circuit states can be exploited for implementing efficient attentional sampling.

Speakers
GC

Guozhang Chen

School of Physics, The University of Sydney


Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 19

7:00pm CEST

P76: Inferring a simple mechanism for alpha-blocking by fitting a neural population model to EEG spectra
Virtual room: https://meet.google.com/rhp-phbr-vmg
Publication: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007662

Alpha blocking, a phenomenon where the alpha rhythm is reduced by attention to a visual, auditory, tactile or cognitive stimulus, is one of the most prominent features of human electroencephalography (EEG) signals. Here we identify a simple physiological mechanism by which opening of the eyes causes attenuation of the alpha rhythm. We fit a neural population model to EEG spectra from 82 subjects, each showing different degrees of alpha blocking upon opening of their eyes. Although it is notoriously difficult to estimate parameters by fitting such models, we show that, by regularizing the differences in parameter estimates between eyes-closed and eyes-open states, we can reduce the uncertainties in these differences without significantly compromising fit quality. From this emerges a parsimonious explanation for the spectral changes between states: Just a single parameter, pei, corresponding to the strength of a tonic, excitatory input to the inhibitory population, is sufficient to explain the reduction in alpha rhythm upon opening of the eyes. When comparing parameter estimates across different subjects we find that the inferred differential change in pei for each subject increases monotonically with the degree of alpha blocking observed. In contrast, other parameters show weak or negligible differential changes that do not scale with the degree of alpha attenuation in each subject. Thus most of the variation in alpha blocking across subjects can be attributed to the strength of a tonic afferent signal to the inhibitory cortical population.
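A minimal sketch of the regularized joint-fit idea (scipy assumed; model_spectrum stands in for the neural population model, which is not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize

# Fit eyes-closed (EC) and eyes-open (EO) spectra jointly while penalizing the
# *difference* between the two parameter sets, shrinking spurious
# between-state changes without forcing the states to be identical.
def joint_loss(theta, f, spec_ec, spec_eo, model_spectrum, lam=1.0):
    k = theta.size // 2
    p_ec, p_eo = theta[:k], theta[k:]
    err = np.sum((np.log(model_spectrum(f, p_ec)) - np.log(spec_ec)) ** 2)
    err += np.sum((np.log(model_spectrum(f, p_eo)) - np.log(spec_eo)) ** 2)
    return err + lam * np.sum((p_ec - p_eo) ** 2)   # regularization term

# result = minimize(joint_loss, theta0,
#                   args=(f, spec_ec, spec_eo, model_spectrum))
```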

Contributors: Agus Hartoyo, Peter Cadusch, David Liley, Damien Hicks

Speakers
avatar for Agus Hartoyo

Agus Hartoyo

Ph.D. student, Optical Sciences Centre, Swinburne University of Technology



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 08

7:00pm CEST

P79: Frequency-dependent synaptic gain in a computational model of mouse thoracic sympathetic postganglionic neurons
INFORMATION FOR P79 POSTER PRESENTATION ZOOM MEETING:
Join Zoom Meeting at
https://us02web.zoom.us/j/81765615367?pwd=alplenZxREVZVmJUdnlHa2ROV3c5Zz09
Meeting ID: 817 6561 5367
Password: 9V1TU1

THE ZOOM MEETING WILL BE OPEN 12:30PM - 04:30PM EASTERN DAYLIGHT TIME (ATLANTA TIME),
WHICH IS 06:30PM - 10:30PM CENTRAL EUROPEAN SUMMER TIME (BERLIN TIME). IF YOU HAVE TROUBLE ENTERING THE MEETING, EMAIL astrid.prinz@emory.edu

Astrid Prinz, Michael McKinnon, Kun Tian, Shawn Hochman  
Postganglionic neurons in the thoracic sympathetic chain represent the final common output of the sympathetic nervous system. These neurons receive synaptic inputs exclusively from preganglionic neurons located in the spinal cord. Synaptic inputs come in two varieties: primary inputs, which are invariably suprathreshold, and secondary inputs, which exhibit a range of typically subthreshold amplitudes. Postganglionic neurons typically receive a single primary input and a variable number of secondary inputs in what has been described as an “n+1” connectivity pattern. Secondary inputs have often been viewed as inconsequential to cell recruitment due to the short duration of measured synaptic inputs and the relatively low tonic firing rate of preganglionic neurons in vivo. However, recent whole-cell patch clamp recordings reveal that thoracic postganglionic neurons have a greater capacity for synaptic integration than previous microelectrode recordings would suggest. This supports a greater role for secondary synapses in cell recruitment.

We previously created a conductance-based computational model of mouse thoracic postganglionic neurons. In the present study, we have expanded the single-cell model into a network model with synaptic inputs based on whole-cell recordings. We systematically varied the average firing rate of a network of stochastically firing preganglionic neurons and measured the resultant firing rate in simulated postganglionic neurons. Synaptic gain was defined as the ratio of postganglionic to preganglionic firing rate.
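A minimal sketch of the gain measurement, assuming a toy leaky integrate-to-threshold cell with the "n+1" input arrangement (this is not the conductance-based model; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def synaptic_gain(rate_pre, n_secondary=8, amp=0.4, thresh=1.0,
                  tau=0.1, t_end=500.0, dt=0.001):
    """One suprathreshold primary input plus n subthreshold secondary inputs
    from independent Poisson preganglionic neurons; gain = post rate / pre rate."""
    v, spikes = 0.0, 0
    decay = np.exp(-dt / tau)
    for _ in range(int(t_end / dt)):
        v *= decay                                   # passive leak
        if rng.random() < rate_pre * dt:             # primary: suprathreshold
            v += 2.0 * thresh
        v += amp * rng.binomial(n_secondary, rate_pre * dt)  # secondary inputs
        if v >= thresh:
            spikes += 1
            v = 0.0                                  # reset (refractoriness ignored)
    return (spikes / t_end) / rate_pre

for r in (0.1, 1.0, 3.0):                            # Hz
    print(r, round(synaptic_gain(r), 2))
```

At low rates the primary input dominates (gain near 1); at higher rates secondary inputs summate and recruit extra postganglionic spikes, qualitatively reproducing the frequency-dependent gain described below.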

We found that for a network configuration that mimics the typical arrangement in mouse, low presynaptic firing rates (<0.1 Hz) resulted in a synaptic gain close to 1, while firing rates closer to 1 Hz resulted in a synaptic gain of 2.5. Synaptic gain diminished for firing rates higher than ~3 Hz. We also determined that synaptic gain increases linearly with the number of secondary synaptic inputs (n) within the range of physiologically realistic presynaptic firing rates. The amplitude of secondary inputs also determines frequency-dependent synaptic gain, with a bifurcation where the secondary synaptic amplitude equals the recruitment threshold. We further demonstrate that the synaptic gain phenomenon depends on the preservation of passive membrane properties as determined by whole-cell recordings.

One major biological role of the sympathetic nervous system is the regulation of vascular tone in both skeletal muscle and cutaneous structures. The firing rate of muscle vasoconstrictor preganglionic neurons is modulated by the cardiac cycle, while cutaneous vasoconstrictor neurons fire independently of the cardiac cycle. We modulated preganglionic firing rate according to the typical mouse heart rate to determine if cardiac rhythmicity changes the overall firing rate of postganglionic neurons. Cardiac rhythmicity does not appear to have a significant impact on synaptic gain within the physiological range of preganglionic input.

Under normal physiological conditions, the unity gain of sympathetic neurons would lead to faithful transmission of central signals to peripheral targets. However, during episodes of high sympathetic activation, the postganglionic network can amplify central signals in a frequency-dependent manner. These results suggest that postganglionic neurons play a more active role in shaping sympathetic activity than previously thought.

 


Speakers
AP

Astrid Prinz

Associate Professor, Department of Biology, Emory University



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 20

7:00pm CEST

P80: Comp-NeuroFedora, a Free/Open Source operating system for computational neuroscience: download, install, research
Ankur Sinha, Aniket Pradhan, Qianqian Fang, Danny Lee, Danishka Navin, Alberto Rodriguez Sanchez, Luis Bazan, Luis M. Segundo, Alessio Ciregia, Zbigniew Jędrzejewski-Szmek, Sergio Pascual, Antonio Trande, Victor Manuel Tejada Yau, Morgan Hough

Google Meet link: https://meet.google.com/rkx-siwq-dww


The promotion and establishment of Open Neuroscience [9] is heavily dependent on the availability of Free/Open Source Software (FOSS) tools that support the modern scientific process. While more and more tools are now being developed using FOSS-driven methods to ensure free (as in freedom, and thus also free of cost) access for all, the complexity of these domain-specific tools tends to hamper their uptake by the target audience---scientists hailing from multiple, sometimes non-computing, disciplines. The NeuroFedora initiative aims to shrink the chasm between the development of neuroscience tools and their usage [10].
Using the resources of the FOSS Fedora community [4] to implement current best practices in software development, NeuroFedora volunteers identify, package, test, document, and disseminate neuroscience software for easy usage on the general-purpose Fedora Linux operating system (OS). The result is the reduction of the installation/deployment process for this software to a simple two-step process: install any flavour of the Fedora OS, then install the required tools using the built-in package manager.

To make common computational neuroscience tools even more accessible, NeuroFedora now provides an OS image that is ready to download and use. In addition to a plethora of computational neuroscience software---Auryn [7], NEST [1], Brian [6], NEURON [2], GENESIS [3], MOOSE [5], NeuroRD [8], and others---the image also includes various utilities that are commonly used along with modelling tools, such as the complete Python science stack. Further, since this image is derived from the popular Fedora Workstation OS, it includes the modern GNOME integrated application suite and retains access to thousands of scientific, development, utility, and other daily-use tools from the Fedora repositories.

A complete list of available software can be found in the NeuroFedora documentation at neuro.fedoraproject.org. We invite students, trainees, teachers, researchers, and hobbyists to use Comp-NeuroFedora in their work and provide feedback. As a purely volunteer-driven initiative, in the spirit of Open Science and FOSS, we welcome everyone to participate, engage, learn, and contribute in whatever capacity they wish.

References

1. Linssen, C., Lepperød, M. E., Mitchell, J., Pronold, J., Eppler, J. M., et al. NEST 2.16.0 Aug. 2018-08.
2. Hines, M. L. & Carnevale, N. T. The NEURON simulation environment. Neural Computation 9, 1179–1209 (1997).
3. Bower, J. M., Beeman, D. & Hucka, M. The GENESIS simulation system (2003).
4. RedHat. Fedora Project 2008.
5. Dudani, N., Ray, S., George, S. & Bhalla, U. S. Multiscale modeling and interoperability in MOOSE. BMC Neuroscience 10, P54 (2009).
6. Goodman, D. F. M. & Brette, R. The brian simulator. Frontiers in neuroscience 3, 192 (2009).
7. Zenke, F. & Gerstner, W. Limits to high-speed simulations of spiking neural networks using general-purpose computers. Frontiers in neuroinformatics 8 (2014).
8. Jędrzejewski-Szmek, Z. & Blackwell, K. T. Asynchronous τ-leaping. Journal of Chemical Physics 144, 125104 (2016).
9. Gleeson, P., Davison, A. P., Silver, R. A. & Ascoli, G. A. A Commitment to Open Source in Neuroscience. Neuron 96, 964–965 (2017).
10. Sinha, A., Bazan, L., Segundo, L. M., Jędrzejewski-Szmek, Z., Kellner, C. J., et al. NeuroFedora: a ready to use Free/Open Source platform for Neuroscientists. English. BMC Neuroscience 20. ISSN: 1471-2202. https://neuro.fedoraproject.org (2019).

Speakers
avatar for Ankur Sinha

Ankur Sinha

Post doctoral research fellow, Silver Lab at University College London, UK



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 10

7:00pm CEST

P81: A Computational Neural Model of Pattern Motion Selectivity of MT Neurons
https://unimelb.zoom.us/j/99736655097?pwd=VFNuOGozMkpaU2JyeGpKdlpUb0JvQT09    Password: 813336

Parvin Zarei Eskikand, David Grayden, Tatiana Kameneva, Anthony Burkitt, Michael Ibbotson

The middle temporal area (MT) within the extrastriate primate visual cortex contains a high proportion of direction-selective neurons. When the visual system is stimulated with plaid patterns, a range of cell-specific MT responses are observed. MT neurons that are selective to the direction of the pattern motion are called “pattern cells”, while those that respond optimally to the motion of the individual component gratings of the plaid pattern are called “component cells”. The current theory on the generation of pattern selectivity of MT neurons is based on a hierarchical relationship between component and pattern MT neurons, where the responses of pattern MT neurons result from the summation of the responses of component MT neurons [1]. Where the gratings cross in plaids, the crossing junctions of the gratings move in the pattern direction. However, revealing the ends of the moving gratings (terminators) in human perceptual experiments breaks the illusion of the direction of pattern motion: the true directions of motion of the gratings are perceived.

Here, we propose a biologically plausible model of MT neurons that uses as inputs the known properties of three types of cells in the primary visual cortex (V1): complex V1 neurons, end-stopped V1 neurons (which only respond to the end-points of the stimulus), and V1 neurons with suppressive extra-classical receptive fields. The receptive fields of the neurons are modelled as spatiotemporal filters. There are two types of MT neurons: integration MT neurons with facilitatory surrounds and segmentation MT neurons with antagonistic surrounds [2]. A neuron's pattern or component selectivity is controlled by the relative proportions of the inputs from the three types of V1 neurons. The model provides a simple mechanism by which component and pattern selective cells can be described; the model does not require a hierarchical relationship between component and pattern MT cells.

The results show that the responses of the model MT neurons are highly dependent on two parameters: the excitatory input that the model neurons receive from the complex V1 neurons with extra-classical RFs, and the inhibitory effect of the end-stopped neurons. The results also reproduce the experimentally observed contrast dependency of the pattern motion preference of MT neurons: the level of pattern selectivity of MT neurons drops significantly when the contrast of the bars is reduced.

The presented model solves several problems associated with MT motion detection, such as overcoming the aperture problem and extracting the correct motion directions from crossing bars. Apart from the mechanism of the computation of the pattern motion by MT neurons, the model inherently explains several important properties of pattern MT neurons, including their temporal dynamics, the contrast dependency of pattern selectivity, and the spatial and temporal limits of pattern motion detection.
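For background, the standard way to classify such cells compares a measured direction tuning curve against "component" and "pattern" predictions; a minimal sketch of those predictions (the von Mises tuning and the plus/minus 60 degree plaid geometry are illustrative assumptions):

```python
import numpy as np

def tuning(d, pref, kappa=4.0):
    """Von Mises-like direction tuning; d and pref in degrees."""
    return np.exp(kappa * (np.cos(np.deg2rad(d - pref)) - 1.0))

dirs = np.arange(0.0, 360.0, 5.0)   # plaid pattern directions tested
pref = 90.0                         # cell's preferred direction

pattern_pred = tuning(dirs, pref)                                 # one lobe
component_pred = tuning(dirs - 60.0, pref) + tuning(dirs + 60.0, pref)  # two lobes

# A recorded tuning curve is then partially correlated against both
# predictions to classify the cell as 'pattern' or 'component'.
```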

[1] Kumbhani RD, El-Shamayleh Y, Movshon JA (2015) Journal of Neurophysiology 113, 1977-1988.

[2] Zarei Eskikand P, Kameneva T, Burkitt AN, Grayden DB, Ibbotson MR (2019). Frontiers in Neural Circuits 13, 43.

Speakers
PZ

Parvin Zarei Eskikand

University of Melbourne



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 18

7:00pm CEST

P83: One-shot model to learn 7 quantal parameters simultaneously, including desensitization
Link

https://us02web.zoom.us/j/83233923571?pwd=RktnaEYvUFhMdC9CdDZLZ21QbVFvdz09

Password : 4nf9ju

 
Quantal Parameters
Synapses modulate their signal to maintain stability or to decrease or increase their strength. To improve the quantification of synaptic dynamics, with a focus on computational efficiency and the number of parameters, we introduce an ultra-fast variant of the binomial model, applied to estimate θ = {N, p, q, σ, τd, τf, τdes}.

This one-shot method retrieves many parameters at once without requiring multiple batches of training [1]. The method is suitable for spontaneous in vivo electrophysiology, which makes it more general. It is the first such model to incorporate desensitization (τdes).
We present here our latest results, applied to mice living in an enriched environment vs. mice anesthetized with trichloroethanol.
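For orientation, a sketch of the generative binomial synapse being estimated, with depression (τd), facilitation (τf), and a crude desensitization factor (τdes); the dynamics below are simplified assumptions illustrating the forward model, not the one-shot estimator itself:

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_epscs(spike_times, N=10, p=0.3, q=1.0, sigma=0.2,
                   tau_d=0.5, tau_f=0.2, tau_des=1.0):
    u, x, d, last, amps = p, 1.0, 1.0, 0.0, []
    for t in spike_times:
        dt = t - last
        x = 1.0 - (1.0 - x) * np.exp(-dt / tau_d)    # vesicle recovery
        u = p + (u - p) * np.exp(-dt / tau_f)        # facilitation decay
        d = 1.0 - (1.0 - d) * np.exp(-dt / tau_des)  # receptor resensitization
        k = rng.binomial(N, u * x)                   # quanta released
        amps.append(q * d * k + rng.normal(0.0, sigma))
        x = max(x - k / N, 0.0)                      # depletion
        u += p * (1.0 - u)                           # facilitation
        d *= 0.8                                     # postsynaptic desensitization
        last = t
    return np.array(amps)

print(simulate_epscs(np.arange(0.05, 1.0, 0.05)))    # 20 Hz train
```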

Reference 
  
 [1] Barri A, Wang Y, Hansel D, Mongillo G. Quantifying Repetitive Transmission at Chemical Synapses: A Generative-Model Approach. eNeuro 2016, 3(2), 1-21

If you would like to write a paper together, you can contact me at myfirstname / dot / mylastname / at / imls /dot / uzh /dot / ch

Speakers
avatar for Emina Ibrahimovic

Emina Ibrahimovic

University of Zurich and Elvesys
You can talk to me about the network level, the effect of a single neuron or synapse on oscillations and synchronization with Lyapunov exponents and non linear dynamics.



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 06

7:00pm CEST

P85: A neuron growth and death model and simulator
The P85 virtual meeting room is below.

Attend the Zoom meeting 
https://zoom.us/j/8437029563

Meeting ID: 843 702 9563
Password: 3psgxH

You can see our 2-minute teaser at https://www.youtube.com/watch?v=Cprz2-DRCAQ&feature=youtu.be

If you have any problems, please contact the speaker:
Yuko Ishiwaka (yuko.ishiwaka@g.softbank.co.jp)

---
Yuko Ishiwaka, Tomohiro Yoshida, Tadateru Itoh

Some characteristics of neurons depend on morphology. There are many neuron types in a brain, and the function of each cell type varies. For example, auditory cells that receive sound stimuli from the external world and pyramidal cells that relate to thinking and memory have different functions. A bushy cell, one of the sensory neurons of the auditory system, processes the timing of sounds; it therefore requires immediate action potential generation and a short refractory period. On the other hand, pyramidal cells, which mainly exist in the hippocampus and amygdala, process memory and emotion; their action potential generation is slower than that of sensory neurons and their refractory period is longer. The Hodgkin-Huxley (H-H) equations can calculate action potentials based on ion channels. H-H does not consider morphology; however, on actual cell membranes, the number and locations of ion channels depend on the shape of the neuron. How quickly the soma can produce action potentials depends on how narrow the spike-generating region is and how many ion channels it contains. We therefore assume that there are strong relationships between cell shape and the characteristics of action potentials. Our expanded H-H equations capture the speed of action potential generation by adding axon hillock parameters.

Connectivity between neurons is also important. Geometry varies according to cell type. Purkinje cells, which are inhibitory neurons, have a complexly branched dendritic arbor. On the other hand, pyramidal cells, which are excitatory multipolar neurons, have one axon and many dendrites, but their geometry is simpler than that of Purkinje cells. These differences in geometry cause differences in connectivity.

In this paper, we propose a new neuron growth and death model and simulator that considers neuron morphology and connectivity between multiple cell types. In our model, growth cones are treated as a navigation system for neuron growth, an L-system is adapted for creating the geometry of each neuron, and the Game of Life is embedded as a cell division rule.

We also incorporate glial cells into neuron growth, not only stimuli from other neurons. In our model, each neuron receives the energy for growing from the astrocytes (one type of glial cell) it contacts. The direction of growth of the growth cones is determined by distant goal areas, and while growing, growth cones try to contact nearby oligodendrocytes to obtain myelin around their axons. The cell division rule for oligodendrocytes follows Game of Life rules. The glial cells are also treated as obstacles.

In our simulation system, a user can create various types of neurons, set goals for both dendrites and axons, create connections between neurons of various functions and geometries using growth rules, and add injections such as IPSPs or EPSPs in order to calculate action potentials.

In conclusion, we show simulation results of our proposed model. In our simulator, a variety of geometries can be produced automatically with the expanded L-system, and varied, flexible connectivity can also be produced by the proposed neuron growth and death model. Furthermore, we added two types of glial cells for growth and goal rules, which are also treated as obstacles. Our proposed model and simulator are flexible enough to simulate cell geometry, action potentials, and cell connections in each brain region.
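To illustrate the L-system ingredient, a minimal expansion sketch (the axiom and rules are illustrative, not the simulator's own):

```python
# 'F' = grow a segment, '[' / ']' = push/pop a branch, '+'/'-' = turn;
# the expanded string is later interpreted as 3D turtle moves by a growth cone.
rules = {"A": "F[+A][-A]", "F": "FF"}

def expand(axiom, generations):
    for _ in range(generations):
        axiom = "".join(rules.get(symbol, symbol) for symbol in axiom)
    return axiom

print(expand("A", 3))  # a branching string describing a small dendritic tree
```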

Speakers
avatar for Yuko Ishiwaka

Yuko Ishiwaka

Senior Research Scientist, Advanced Technology Promotion Office, IT & Network Unit, Information Technology Division, Technology Unit, SoftBank Corp.
I received my Ph.D. for work on multiagent systems. I was an assistant at Hakodate National College of Technology and moved to Hokkaido University as an associate professor. I am now working at SoftBank Corp. as a researcher. My research interest is machine learning inspired by neuroscience to...



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 07

7:00pm CEST

P94: Brain rhythms enhance top-down influence on V1 responses in perceptual learning
Ryo Tani, Yoshiki Kashimori

Visual information is conveyed in a feedforward manner to progressively higher levels in the hierarchy, beginning with the analysis of simple attributes, such as orientation and contrast, and leading to more complex object features from one stage to the next. In contrast, visual systems have abundant feedback connections, whose number is even larger than that of the feedforward ones. Top-down influences, conveyed by the feedback pathways across entire brain areas, modulate the responses of neurons in early visual areas, depending on cognition and behavioral context. Li et al. [1] showed that top-down signals allowed neurons of the primary visual cortex (V1) to engage stimulus components that were relevant to a perceptual task and to discard influences from components that were irrelevant to the task. They showed that V1 neurons exhibited characteristic tuning patterns depending on the array of stimulus components. Ramalingam et al. [2] further examined dynamic aspects of V1 neurons in the tasks used by Li et al., and revealed a difference in the dynamic correlations between V1 responses evoked by the two tasks. Using a V1 model, we also proposed a neural mechanism for the tuning modulations by top-down signals [3]. Top-down and bottom-up information are processed with different brain rhythms. Fast oscillations such as gamma rhythms are involved in sensory coding and feature binding in local circuits, while slower oscillations such as alpha and beta rhythms are evoked in higher brain areas and may contribute to the coupling of distinct brain areas. In this study, we investigate how information of top-down influence is conveyed by the feedback pathway, and how information relevant to task context is coordinated by different brain oscillations. We present a model of the visual system which consists of networks of V1 and V2. We consider the two types of perceptual tasks used by Li et al., a bisection task and a vernier task. We show that visual information relevant to each task context is coordinated by a push-pull effect of the top-down signal. We also show that a top-down signal reflecting a beta oscillation in V2 neurons, coupled with a gamma oscillation in V1 neurons, enables the efficient gating of task-relevant information in V1. This study provides useful insight into how rhythmic oscillations in distinct brain areas are coupled to gate task-relevant information encoded in early sensory areas.

References
1. Li W, Piech V, and Gilbert C D. Perceptual learning and top-down influences in primary visual cortex. Nat Neurosci. 2004, 7(6), 651-657.
2. Ramalingam N, McManus J N J, Li W, and Gilbert C D. Top-Down Modulation of Lateral Interactions in Visual Cortex. J Neurosci. 2013, 33(5), 1773-1789.
3. Kamiyama A, Fujita K, and Kashimori Y. A neural mechanism of dynamic gating of task-relevant information by top-down influence in primary visual cortex. BioSystems. 2016, 150, 138-148.

Speakers
RT

Ryo Tani

The University of Electro-Communications



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 14

7:00pm CEST

P95: A functional role of short-term synapses in maintenance of gustatory working memory in orbitofrontal cortex
Layla Antaket, Yoshiki Kashimori

Taste perception is an important function for life activities, such as the ingestion of nutrients and the avoidance of toxic foods. Gustatory information is first processed by taste receptors in the taste buds of the tongue and is then transmitted to the orbitofrontal cortex (OFC), the hypothalamus, and the amygdala. Along this processing pathway, the primary gustatory cortex (GC) processes information on the quality and strength (concentration) of taste itself. Taste research currently proceeds mainly through electrophysiological and molecular-biological studies of receptors; however, the mechanisms by which taste information is encoded at each stage of the gustatory pathway are not well understood.

Furthermore, in addition to the higher-order processing of taste information, the OFC, located above the GC, integrates taste information with other sensory information, such as tactile sensation, smell, and color, to determine the flavor of food and to guide behavior. We previously proposed a binding mechanism for taste and odor information in the OFC [1]. A recent study has shown an alternative function of the OFC, namely a working-memory function for taste information [2]. That study showed that OFC neurons of rhesus monkeys encoded a gustatory working memory in a delayed match-to-sample task: OFC neurons exhibited persistent activity even after a gustatory stimulus presented in the sample period was turned off, whereas neurons of the GC did not show significantly persistent activity. It remains unclear how the gustatory working memory in the OFC is shaped by the interaction between the GC and the OFC.

To address this issue, we focus on a delayed match-to-sample task, in which monkeys have to decide whether the first juice stimulus is the same as a second stimulus separated from it by a delay period. We develop a model of the gustatory system that consists of network models of the GC and the OFC. Each model has a two-dimensional array of neurons, which encodes information about three kinds of foods: orange, guava, and tomato. The network models are based on the Izhikevich neuron model [3] and on biophysical synapses mediated by AMPA, NMDA, and GABA receptors. Each neural unit consists of a main neuron and an inhibitory interneuron, mutually connected with AMPA and GABA synapses. Main neurons are reciprocally connected with AMPA and NMDA synapses. The NMDA-synaptic connections between the networks are formed by Hebbian learning in a task-relevant way. The gustatory information of the three foods is represented by dynamical attractors in the GC and OFC networks. Simulating the model for match/nonmatch trials, we explored the neural mechanism by which the working memory of gustatory information is generated in the OFC. We show that this working memory is shaped by recurrent activation mediated by short-term synapses of OFC neurons. In addition, we examined how the working memory formed in the OFC is used for match/nonmatch decision-making by adding a decision layer to the model.
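
As a point of reference for the neuron model used above, here is a minimal, self-contained sketch of a single Izhikevich neuron [3] with the standard regular-spiking parameters from the original paper (these are generic values, not the GC/OFC network parameters):

```python
import numpy as np

# Minimal sketch: one regular-spiking Izhikevich neuron driven by a
# constant input current, Euler-integrated at 1 ms resolution.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking parameters
v, u = -65.0, b * -65.0              # membrane potential and recovery variable
I = 10.0                             # input current (model units)
spikes = []
for t in range(1000):                # 1000 ms
    v += 0.04 * v**2 + 5 * v + 140 - u + I
    u += a * (b * v - u)
    if v >= 30.0:                    # spike: reset v, bump the recovery variable
        spikes.append(t)
        v, u = c, u + d
print(f"{len(spikes)} spikes in 1 s")
```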

References

1. Shimemura T, Fujita K, Kashimori Y. A neural mechanism of taste perception modulated by odor information. Chemical Senses. 2016, 41(7), 579-589.
2. Lara AH, Kennerley SW, Wallis JD. Encoding of gustatory working memory by orbitofrontal neurons. J Neurosci. 2009, 29(3), 765-774.
3. Izhikevich EM. Simple model of spiking neurons. IEEE Trans Neural Netw. 2003, 14(6), 1569-1572.

Speakers
LA

Layla Antaket

University of Electro-Communications



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 15

7:00pm CEST

P98: Bayesian mechanics in the brain under the free-energy principle
Chang Sub Kim
To join the video meeting, click this link: https://meet.google.com/wtq-nrpp-emw

In the field of neurosciences, the free-energy principle (FEP) stipulates that all viable organisms cognize and behave using probabilistic models embodied in their brain in a manner that ensures their adaptive fitness in the environment [1]. Here, we report on our recent theoretical study that supports the use of the FEP as a more physically plausible theory, based on the principle of least action [2].

We carefully recapitulate the FEP [3] and find that some technical facets of its conventional formalism require careful reformulation [4]. Accordingly, we articulate the FEP as stating that living organisms minimize sensory uncertainty, defined as the average surprisal over a temporal horizon, and we reformulate the recognition dynamics by which the brain actively infers the external causes of its sensory inputs. We cast the Bayesian inversion problem in the organism's brain as finding the optimal neural trajectories that minimize the time integral of the informational free energy (IFE), an upper bound on the long-term average surprisal. Specifically, we abstain from i) the non-Newtonian extension of continuous states, which yields the generalized motion by recursively taking higher-order derivatives of the sensory observation and state equations, and ii) the heuristic gradient-descent minimization of the IFE in a moving frame of reference in a generalized-state space, which views the nonequilibrium dynamics of brain states as drift-diffusion flows that locally conserve the probability density.

The advantage of our formulation is that only bare variables (positions) and their first-order derivatives (velocities) are used in the Bayesian neural computation, thereby dismissing the need for extra-physical assumptions. Bare variables are an organism's representations of the causal environment, and their conjugate momenta resemble the precision-weighted prediction errors of the predictive-coding language [5]. Furthermore, we consider the sensory-data-generating dynamics to be nonstationary, on an equal footing with intra- and inter-hierarchical-level dynamics in a neuronally based biophysical model. Consequently, our theory delivers a natural account of the descending predictions and ascending prediction errors in the brain's hierarchical message-passing structure (Fig. 1). The ensuing neural circuitry may be related to the alpha-beta and gamma rhythms that characterize the feedback and feedforward influences, respectively, in the primate visual cortex [6].
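
To make the free-energy machinery concrete, here is a minimal textbook-style sketch of free-energy minimization for a single hidden state, in the spirit of standard predictive-coding tutorials; it is not the authors' least-action formulation, and the generative model g and all parameters are illustrative:

```python
import numpy as np

# Generative model: prior phi ~ N(v_p, s_p); likelihood u ~ N(g(phi), s_u),
# with g(phi) = phi**2. The free energy F is (up to constants) a sum of
# squared, precision-weighted prediction errors; we descend its gradient.
v_p, s_p = 3.0, 1.0      # prior mean and variance of the hidden state
s_u = 1.0                # observation noise variance
u = 2.0                  # observed sensory input

g = lambda phi: phi**2
dg = lambda phi: 2 * phi

phi = v_p                # initialize the estimate at the prior mean
lr = 0.01
for _ in range(500):     # gradient descent on F with respect to phi
    eps_p = (phi - v_p) / s_p        # precision-weighted prior error
    eps_u = (u - g(phi)) / s_u       # precision-weighted sensory error
    phi -= lr * (eps_p - eps_u * dg(phi))
print(f"posterior estimate of phi: {phi:.3f}")
```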

References
[1] Friston K. The free-energy principle: a unified brain theory?. Nature Reviews Neuroscience. 2010, 11, 127--138.
[2] Landau L D, Lifshitz E M. Mechanics. 3rd edition. Amsterdam: Elsevier Ltd.; 1976.
[3] Buckley C L, Kim C S, McGregor S, Seth A K. The free energy principle for action and perception: A mathematical review. Journal of Mathematical Psychology. 2017, 81, 55--79.
[4] Kim C S. Recognition dynamics in the brain under the free energy principle. Neural Computation. 2018, 30, 2616-2659.
[5] Rao R P N, Ballard D H. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience. 1999, 2(1), 79--87.
[6] Michalareas G, Vezoli J, van Pelt S, et al. Alpha-beta and gamma rhythms subserve feedback and feedforward influences among human visual cortical areas. Neuron. 2016, 89, 384-397.

Speakers
avatar for Chang-Sub Kim

Chang-Sub Kim

Professor, Department of Physics, Chonnam National University



Sunday July 19, 2020 7:00pm - 8:00pm CEST
Slot 02

8:00pm CEST

P101: Representing predictability of sequence patterns in a random network with short-term plasticity
Vincent S.C. Chien, Richard Gast, Burkhard Maess, Thomas Knösche
Poster presentation link: https://meet.google.com/ukt-zxcs-jsg

The poster will be presented by Vincent Chien. To facilitate discussion, please have a look at the 10-min video (https://youtu.be/Nxf-OQmrdpE) that guides you through our poster.
Thanks!

The brain is capable of recognizing repetitive acoustic patterns within a few repetitions, which is essential for the timely identification of sound objects and the prediction of upcoming sounds. Several studies found neural correlates of the predictability of sequence patterns, but the underlying neural mechanism is not yet clear. To investigate the mechanism supporting the fast emergence of the predictive state, we use neural mass modeling to replicate the experimental observations during sequential repetition [1]. First, we investigated the effect of short-term plasticity (STP) on the response of a Wilson-Cowan node to a prolonged stimulus, where the node consists of an excitatory (E) and an inhibitory (I) population. In total, 27 combinations of plasticity settings were examined, where the plasticity types include short-term depression (STD), short-term facilitation (STF), and no STP, and the connection types include E-to-E, E-to-I, I-to-E, and I-to-I connections. The simulated signals that best explain the observed MEG temporal profiles (i.e., an onset peak followed by a rising curve) rely on the setting where STD is applied to the E-to-E connection and STF to the E-to-I connection. Second, with this preferred plasticity setting (i.e., STD on E-to-E and STF on E-to-I), we simulated the dynamics of a random network in response to regular (REG) and random (RAND) sequences in PyRates [2]. The simulated signals reproduce several experimental observations, including the above-mentioned MEG temporal profiles, the predictability-dependent MEG amplitude (i.e., its dependence on the regularity and alphabet size of the input sequence), as well as the MEG responses in the switch conditions (i.e., from REG to RAND and from RAND to REG). Third, we used a simplified two-level network to illustrate the main mechanisms supporting such a representation of predictability during sequential repetition. The simplified network consists of nodes that are selective to sound tone (level 1) and nodes that are selective to tone direction (level 2). The simulation reveals higher firing rates of I populations in level-2 nodes during the REG than the RAND condition, which contributes to a stronger simulated MEG amplitude via I-to-E connections (Fig 1). In conclusion, we provide a possible mechanism to account for the experimental observations. First, the increased MEG amplitude is mainly due to increased inhibitory activity. Second, the effect of alphabet size is due to the two forms of STP (i.e., STD on E-to-E and STF on E-to-I). Third, the effect of regularity relies on the inclusion of the level-2 nodes that sparsely encode the repetitive patterns. In short, more predictable sequence patterns cause a stronger accumulation of inhibitory activity in direction-selective areas via STP, which in turn leads to a higher MEG amplitude. This mechanism emphasizes the need for STP at each stage of the bottom-up process, whereas the involvement of top-down processes is not necessary.
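
For illustration, the sketch below implements a single Wilson-Cowan E-I node with Tsodyks-Markram-style depression on the E-to-E weight and facilitation on E-to-I, the preferred plasticity setting described above; all parameter values are assumptions for demonstration, not the authors' PyRates configuration:

```python
import numpy as np

dt, T = 1e-3, 2.0
steps = int(T / dt)
sig = lambda x: 1.0 / (1.0 + np.exp(-x))   # population activation function

E, I = 0.1, 0.1           # excitatory and inhibitory population rates
x, uf = 1.0, 0.2          # STD resource (x) and STF utilization (uf)
tau, tau_d, tau_f = 0.01, 0.5, 0.5
w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 10.0, 2.0
stim = 2.0                # prolonged constant stimulus

for _ in range(steps):
    # STD: resource depletes with E activity, recovers with time constant tau_d
    x += dt * ((1 - x) / tau_d - 0.5 * x * E)
    # STF: utilization grows with E activity, decays back to its baseline
    uf += dt * ((0.2 - uf) / tau_f + 0.5 * (1 - uf) * E)
    E += dt / tau * (-E + sig(w_ee * x * E - w_ie * I + stim))
    I += dt / tau * (-I + sig(w_ei * uf * E - w_ii * I))
print(f"final rates: E={E:.3f}, I={I:.3f}")
```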

[1] Barascud, Nicolas, et al. "Brain responses in humans reveal ideal observer-like sensitivity to complex acoustic patterns." Proceedings of the National Academy of Sciences 113.5 (2016): E616-E625.

[2] Gast, Richard, et al. "PyRates—A Python framework for rate-based neural simulations." PloS one 14.12 (2019).

Speakers
avatar for Thomas R. Knösche

Thomas R. Knösche

Brain Networks, Max Planck Institute for Human Cognitive and Brain Sciences



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 03

8:00pm CEST

P102: Ephaptic coupling in white matter fibre bundles modulates axonal transmission delays
Helmut Schmidt, Gerald Hahn, Gustavo Deco, Thomas Knösche
Poster presentation link: https://meet.google.com/xsy-yfrh-gsn

The poster will be presented by Helmut Schmidt.

Axonal connections are widely regarded as faithful transmitters of neuronal signals with fixed delays. The reasoning behind this is that local field potentials caused by spikes travelling along axons are too small to have an effect on other axons. We demonstrate that, although the local field potentials generated by single spikes are of the order of microvolts, the collective local field potential generated by spike volleys can reach several millivolts. As a consequence, the resulting depolarisation of the axonal membranes (i.e. ephaptic coupling) increases the velocity of spikes, and therefore reduces axonal transmission delays between brain areas.

We first compute the local field potential (LFP) using the line approximation [1,2] for a spike in a single axon. We find that it generates an LFP with an amplitude of about 20 microvolts, which is too weak to have a significant effect on neighbouring axons (Figure A). Next, we extend this formalism to fibre bundles to compute the LFP generated by spike volleys with different levels of synchrony. Such spike volleys can generate LFPs with amplitudes of several millivolts (Figure B), and the amplitude of the LFP depends strongly on the level of synchrony of the spike volley. Finally, we devise a spike propagation model in which the LFPs generated by spikes modulate their propagation velocity. This model reveals that with an increasing number of spikes in a spike volley, the axonal transmission delays decrease (Figure C).
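
As a rough illustration of the forward calculation, the sketch below evaluates the extracellular potential of a current distribution along a straight axon by discretizing it into point sources, a crude numerical stand-in for the line approximation [1,2]; the current profile and all parameters are illustrative, not the paper's values:

```python
import numpy as np

sigma = 0.3                         # extracellular conductivity (S/m), assumed
zs = np.linspace(-1e-3, 1e-3, 201)  # source positions along the axon (m)
# toy current profile of a travelling spike (A), forced to zero net current
I = 1e-9 * np.gradient(np.exp(-(zs / 2e-4) ** 2))
I -= I.mean()

def lfp(x, z):
    """Potential (V) at lateral distance x, axial position z."""
    d = np.sqrt(x**2 + (z - zs) ** 2)       # distances to each point source
    return np.sum(I / (4 * np.pi * sigma * d))

print(f"LFP 50 um from the axon: {1e6 * lfp(50e-6, 0.0):.3f} uV")
```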

To the best of our knowledge, this study is the first to investigate the effect of LFPs on axonal signal transmission in macroscopic fibre bundles. The main result is that axonal transmission delays decrease if spike volleys are sufficiently large and synchronous. This is in contrast to studies investigating ephaptic coupling between spikes at the microscopic level (e.g. [3]), which used a different model setup that resulted in increasing axonal transmission delays. Our results are a possible explanation for the decreasing stimulus latency with increasing stimulus intensity observed in many psychological experiments (e.g. [4]). We speculate that the modulation of axonal transmission delays contributes to the flexible synchronisation of high-frequency oscillations (e.g. gamma oscillations).

Acknowledgements
This work has been supported by the German Research Foundation (DFG), SPP2041.

References
1. Holt GR, Koch C. Electrical interactions via the extracellular potential near cell bodies. J Comput Neurosci. 1999, 6:169-184.
2. McColgan T, Liu J, Kuokkanen PT, Carr CE, Wagner H, Kempter R. Dipolar extracellular potentials generated by axonal projections. eLife 2017, 6:e25106.
3. Binczak S, Eilbeck JC, Scott AC. Ephaptic coupling of myelinated nerve fibres. Physica D 2001, 148:159-174.
4. Ulrich R, Rinkenauer G, Miller J. Effects of stimulus duration and intensity on simple reaction time and response force. J Exp Psychol 1998, 24:915-928.

Speakers
avatar for Thomas R. Knösche

Thomas R. Knösche

Brain Networks, Max Planck Institute for Human Cognitive and Brain Sciences



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 04

8:00pm CEST

P106: Efficient communication in distributed simulations of spiking neuronal networks with gap junctions
Jakob Jordan, Moritz Helias, Markus Diesmann, Susanne Kunkel

Investigating the dynamics and function of large-scale spiking neuronal networks with realistic numbers of synapses is made possible today by state-of-the-art simulation code that scales to the largest contemporary supercomputers. These implementations exploit the delayed, point-event-like nature of the spike interaction between neurons. In a network with only chemical synapses, the dynamics of all neurons is decoupled for the duration of the minimal synaptic transmission delay, so that each neuron's dynamics can be propagated independently over that period without requiring information from other neurons. Hence, in distributed simulations of such networks, compute nodes need to communicate spike data only after this period [1].

Electrical interactions, mediated by so-called gap junctions, at first seem to be incompatible with such a communication scheme, as they couple the membrane potentials of pairs of neurons instantaneously. Hahne et al. [2], however, demonstrated that communication of spikes and gap-junction data can be unified using waveform-relaxation methods [3]. Despite these advances, simulations involving gap junctions scale only poorly, due to a communication scheme that collects global data on each compute node. In comparison to chemical synapses, gap junctions are far less abundant. To improve scalability, we exploit this sparsity by integrating the existing framework for continuous interactions with a recently proposed directed communication scheme for spikes [4]. Using a reference implementation in the NEST simulator (www.nest-simulator.org, [5]), we demonstrate excellent scalability of the integrated framework, accelerating large-scale simulations with gap junctions by more than an order of magnitude. This allows, for the first time, the efficient exploration of the interaction of chemical and electrical coupling in large-scale neuronal network models with natural synapse density, distributed across thousands of compute nodes.
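
For orientation, a minimal sketch of setting up gap-junction coupling in NEST is shown below, following the interface documented for recent NEST releases; kernel flags such as use_wfr enable the waveform-relaxation scheme, and exact names and defaults may differ between versions:

```python
import nest  # NEST simulator, www.nest-simulator.org

nest.ResetKernel()
nest.SetKernelStatus({'use_wfr': True,            # waveform relaxation [2,3]
                      'wfr_comm_interval': 1.0})  # ms between data exchanges

# Two Hodgkin-Huxley neurons that support gap-junction coupling
a = nest.Create('hh_psc_alpha_gap')
b = nest.Create('hh_psc_alpha_gap')
a.I_e = 200.0  # pA; drive one neuron so the coupling becomes visible

# Gap junctions are symmetric, so both directions are created at once
nest.Connect(a, b,
             {'rule': 'one_to_one', 'make_symmetric': True},
             {'synapse_model': 'gap_junction', 'weight': 0.5})  # nS

nest.Simulate(100.0)
```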

Acknowledgements
Partly supported by Helmholtz young investigator group VH-NG-1028, the European Union's Horizon 2020 funding framework under grant agreements no. 785907 (Human Brain Project HBP SGA2) and no. 754304 (DEEP-EST), and Helmholtz IVF no. SO-092 (Advanced Computing Architectures, ACA). Use of the JURECA supercomputer through VSR grant JINB33.

References

[1] Morrison A, Mehring C, Geisel T, Aertsen A, Diesmann M (2005) Neural Comput 17:1776–1801
[2] Hahne J, Helias M, Kunkel S, Igarashi J, Bolten M, Frommer A, Diesmann M (2015) Front Neuroinform 9:22
[3] Lelarasmee E, Ruehli AE, Sangiovanni-Vincentelli A (1982) IEEE Trans CAD Integ Circ Syst 1:131–145
[4] Jordan J, Ippen T, Helias M, Kitayama I, Sato M, Igarashi J, Diesmann M, Kunkel S (2018) Front Neuroinform 12:2
[5] Gewaltig MO, Diesmann M (2007) Scholarpedia 2:1430

Speakers
avatar for Markus Diesmann

Markus Diesmann

Professor, Institute of Neuroscience and Medicine (INM-6, INM-10) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre
My main scientific interests include correlation structure of neuronal networks, models of cortical networks, simulation technology, supercomputing and neuromorphic computing. I am co-founder of the NEST Initiative and a member of their steering committee.


Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 05

8:00pm CEST

P108: Ensemble empirical mode decomposition of noisy local field potentials from optogenetic data
Sorinel Oprisan, Xandre Clementsmith, Tamas Tompa, Antonieta Lavin

Zoom meeting link:  https://cofc.zoom.us/j/4096287484
P108 from 2:00-2:30 PM
P16 from 2:30-3:00PM

Gamma rhythms with frequencies over 30 Hz are thought to reflect cortical information processing. However, signals associated with gamma rhythms are notoriously difficult to record due to their low energy and relatively short duration of the order of a few seconds. In our experiments, of particular interest was the 40 Hz synchronization of neurons, which is believed to be indicative of temporal binding. Temporal binding glues together spatially distributed representations of different features of sensory input, e.g., during the analysis of a visual stimulus, to produce a coherent description of constituent elements.

Our goal was to investigate the effect of systemic cocaine injection on the local field potentials (LFPs) recorded from the medial prefrontal cortex (mPFC) of mice. We used male PV-Cre mice infected with a viral vector [1] that drives the expression of light-sensitive proteins of the opsin family, whose members include the retinal pigments of visual systems. Through this genetic engineering, channelrhodopsins were expressed in neurons, increasing their excitability when exposed to blue light [1].

We previously used nonlinear dynamics tools, such as delay embedding and false nearest neighbors [2,3], to estimate the embedding dimension and delay time for attractor reconstruction from LFPs [4]. While nonlinear dynamics is a powerful tool for data analysis, recent developments suggest that ensemble empirical mode decomposition (EEMD) may be better suited for short and noisy time series. The traditional empirical mode decomposition (EMD) is a data-driven decomposition of the original data into orthogonal Intrinsic Mode Functions (IMFs) [5]. In the presence of noise, the time-scale separation is not perfect, and IMF mixing produces significant energy leaks between modes. The advantage of EEMD is that adding a controlled amount of noise to the data leads to a better demixing of the IMFs. We also applied the Hilbert-Huang transform [5] to the demixed IMFs and computed the instantaneous frequency spectrum. Our results indicate that cocaine significantly shifts the energy distribution towards earlier times in the trial compared to control. Our findings allow us to quantitatively estimate the contribution of different spectral components and to develop a dynamical model of the data.
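
A minimal sketch of this pipeline, using the PyEMD package on a synthetic 40 Hz burst rather than the optogenetic LFP data, might look as follows (all parameters are illustrative):

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EEMD  # pip install EMD-signal

# Synthetic test signal: a 40 Hz burst embedded in noise
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 40 * t) * ((t > 0.5) & (t < 1.0)) \
         + 0.5 * np.random.randn(t.size)

eemd = EEMD(trials=100, noise_width=0.2)  # ensemble size, added-noise scale
imfs = eemd.eemd(signal, t)               # demixed intrinsic mode functions

# Hilbert-Huang step: instantaneous frequency of the first IMF
phase = np.unwrap(np.angle(hilbert(imfs[0])))
inst_freq = np.diff(phase) * fs / (2 * np.pi)
print(f"{imfs.shape[0]} IMFs; IMF0 median inst. frequency: "
      f"{np.median(inst_freq):.1f} Hz")
```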

References

[1] Dilgen JE, Tompa T, Saggu S, Naselaris T, and Lavin A, Optogenetically evoked gamma oscillations are disturbed by cocaine administration, Frontiers in Cellular Neuroscience. 2013, 7:213.

[2] Oprisan SA, Lynn PE, Tompa T, and Lavin A, Low-dimensional attractor for neural activity from local field potentials in optogenetic mice. Frontiers in Computational Neuroscience. 2015, 9:125.

[3] Oprisan SA, Imperatore J, Helms J, Tompa T, and Lavin A, Cocaine-induced changes in low-dimensional attractors of local field potentials in optogenetic mice, Frontiers in Computational Neuroscience. 2018, 12:1.

[4] Oprisan SA, Clementsmith X, Tompa T, Lavin A. Dopamine receptor antagonists effects on low-dimensional attractors of local field potentials in optogenetic mice. PLoS ONE. 2019, 14(10): e0223469.

[5] Huang NE, Shen Z, Long SR, Wu MC, Shih HH, Zheng Q, Yen NC, Tung CC, Liu H H, The Empirical Mode Decomposition and the Hilbert Spectrum for Nonlinear and Nonstationary Time Series Analysis, Proceedings of the Royal Society of London A. 1998, 454: 903-995.

Speakers
avatar for Sorinel Oprisan

Sorinel Oprisan

Professor, Department of Physics and Astronomy, College of Charleston



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 17

8:00pm CEST

P10: Using Adaptive Exponential Integrate-and-Fire neurons to study general principles of patho-topology of cerebellar networks
Maurizio De Pitta, Jose Angel Ornella Rodrigues, Juan Sustacha, Giulio Bonifazi, Alicia Nieto-Reyes, Sivan Kanner, Miri Goldin, Ari Barzilai, Paolo Bonifazi

Ataxia-telangiectasia (A-T) is an example of a systemic genetic disease impacting the cerebellar circuit's structure and function. Kanner et al. [1] have shown how the A-T phenotype in mice correlates with severe glial atrophy and increased synaptic markers, resulting in altered cerebellar network dynamics. In particular, experiments in cerebellar cultures showed a disruption of network synchronization, which was recovered by replacing A-T glial cells with healthy ones. Notably, the mere presence of healthy astrocytes was sufficient to restore physiological levels of synaptic puncta between mutated neurons. In intact cerebellar circuits, glial morphological alterations and an increase in inhibitory synaptic connectivity markers were first reported and correlated (preliminary unpublished results) with an increase in the complex spiking of Purkinje cells (PCs). To understand and model these structural-functional circuit alterations, we developed a simplified model of the cerebellar circuit. To this aim, we adopt the adaptive Exponential Integrate-and-Fire (aEIF) neuron model in different parameter configurations to capture essential functional features of four cell types: granule cells and excitatory neurons of the inferior olive (IONs), plus Purkinje cells and inhibitory neurons of the Deep Cerebellar Nuclei (DPNNs). Next, we explore, across different degrees of connectivity and synaptic weights, the dynamics of the simplified cerebellar circuitry. Our simulations suggest that the concomitant increase in the number of inhibitory connections from PCs to DPNNs and from DPNNs to IONs ultimately results in disinhibited ION dynamics. As a consequence, the IONs provide a higher rate of excitation to the PCs within the cerebellar loop, which finally leads to a higher complex-spiking frequency in PCs. These results provide new insights into dysfunctional A-T cerebellar dynamics and open a new perspective for targeted pharmacological treatments.
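
As a point of reference for the neuron model adopted here, below is a minimal sketch of a single aEIF neuron with generic regular-spiking parameters (standard Brette-Gerstner values, not the cell-type-specific configurations used in the study):

```python
import numpy as np

C, gL, EL = 281.0, 30.0, -70.6          # pF, nS, mV
VT, DT = -50.4, 2.0                     # threshold and slope factor (mV)
a, b, tau_w = 4.0, 80.5, 144.0          # adaptation: nS, pA, ms
Vreset, Vspike = -70.6, 20.0            # reset and spike-cut voltages (mV)
dt, I = 0.1, 800.0                      # time step (ms), input current (pA)

V, w, spikes = EL, 0.0, 0
for _ in range(int(500 / dt)):          # 500 ms of simulated time
    dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
    dw = (a * (V - EL) - w) / tau_w
    V += dt * dV
    w += dt * dw
    if V >= Vspike:                     # spike: reset V and increment adaptation
        V, w = Vreset, w + b
        spikes += 1
print(f"{spikes} spikes in 500 ms")
```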

Acknowledgements

We thank the 'Junior Leader' Fellowship Program by 'la Caixa' Banking Foundation (Grant LCF/BQ/LI18/11630006).

References

1. Kanner S, Goldin M, Galron R, et al. Astrocytes restore connectivity and synchronization in dysfunctional cerebellar networks. Proc. Natl. Acad. Sci. USA. 2018.

Speakers
avatar for Maurizio De Pitta

Maurizio De Pitta

Research Fellow, Basque Center for Applied Mathematics
I am part of the Group in Mathematical, Computational, and Experimental Neuroscience at the Basque Center for Applied Mathematics in Bilbao (Spain). My expertise is in the study of neuron-glial interactions in the healthy and diseased brain. I use multi-disciplinary approaches at the cross-roads of Physics and Computer Science, and collaborate with biologists, engineers, and medical doctors, to harness the...


Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 19

8:00pm CEST

P113: Dynamically Damped Stochastic Alpha-band Relaxation Activity in 1/f Noise and Alpha Blocking in Resting M/EEG
Rick Evertz, Damien Hicks, David Liley

The dynamical and physiological basis of alpha-band activity and 1/f noise remains the subject of continued speculation. Here we conjecture, on the basis of empirical data analysis, that both of these features can be dynamically unified if resting EEG is conceived of as the sum of multiple stochastically perturbed alpha-band oscillatory relaxation processes. The modulation of alpha-band and 1/f noise activity by dynamic damping is explored in eyes-closed (EC) and eyes-open (EO) resting-state magneto/electroencephalography (M/EEG). We assume that the recorded resting M/EEG is composed of a superposition of stochastically perturbed alpha-band relaxation processes with a distribution of dampings whose functional form is unknown. We solve the corresponding inverse problem: taking measured M/EEG power spectra, we compute the distribution of dampings using Tikhonov regularization methods. The characteristics of the damping distribution are examined across subjects, sensors, and recording condition (EC/EO).

We find robust changes in the estimated damping distribution between EC and EO recording conditions across participants. Our findings suggest that alpha blocking and the 1/f noise structure are both explicable through a single process of dynamically damped alpha-band activity. The estimated damping distributions are typically found to be bimodal or trimodal (Fig. 1). The number and position of the modes are related to the sharpness of the alpha resonance (amplitude, FWHM) and the slope of the power spectrum. The results suggest an intimate relationship between resting-state alpha activity and 1/f noise, with changes in both governed by changes in the damping of the underlying alpha relaxation processes. In particular, alpha blocking is observed to be the result of the most weakly damped distribution mode (peak at 0.4-0.6 s^-1) becoming more heavily damped (peak at 1.0-1.5 s^-1). Reductions in the slope of the 1/f noise result from the alpha relaxation processes becoming more broadly distributed in their respective dampings, with more weighting towards heavily damped alpha activity. The results suggest a novel way of characterizing resting M/EEG power spectra and provide new insight into the central role that damped alpha-band activity may play in the spatio-temporal features of resting-state M/EEG.

Future work will explore the more complex case in which we expect a distribution over both frequency and damping for the stochastic relaxation processes, elucidating any frequency-dependent damping effects between conditions. The inverse problem can then be solved via gradient-descent methods in which we estimate the two-dimensional probability density function over frequency and damping from a given power spectrum.
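
To illustrate the inversion step, here is a minimal sketch on synthetic data, assuming a Lorentzian damped-oscillator kernel and a plain ridge (Tikhonov) penalty; neither the kernel nor the regularizer is necessarily the one used by the authors:

```python
import numpy as np

# Forward model: S(f) = sum_gamma p(gamma) * L(f; f0, gamma)
f = np.linspace(1, 40, 400)            # frequency axis (Hz)
f0 = 10.0                              # assumed alpha resonance frequency
gammas = np.linspace(0.2, 20, 60)      # damping grid (s^-1)

def lorentz(f, g):                     # damped-oscillator power spectrum
    return g / (g**2 + (2 * np.pi * (f - f0))**2)

A = np.column_stack([lorentz(f, g) for g in gammas])   # forward operator

# Synthetic "measured" spectrum from a bimodal damping distribution + noise
p_true = np.exp(-(gammas - 0.5)**2 / 0.1) + 0.5 * np.exp(-(gammas - 8)**2 / 4)
S = A @ p_true + 1e-4 * np.random.randn(f.size)

lam = 1e-3                             # Tikhonov regularization strength
p_est = np.linalg.solve(A.T @ A + lam * np.eye(gammas.size), A.T @ S)
print("largest-weight dampings (s^-1):",
      np.round(gammas[np.argsort(p_est)[-2:]], 2))
```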

Speakers
RE

Rick Evertz

PhD Candidate, Swinburne University of Technology
Hi Folks, I am currently in heavy lockdown in Melbourne, Australia, and will likely be away from the laptop during my poster (4am local time). If you have any questions about my research, please email them to revertz@swin.edu.au, as I will be happy to discuss it in depth. Hope you are all keeping...



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 15

8:00pm CEST

P119: Relating transfer entropy to network structure and motifs, and implications for brain network inference
https://uni-sydney.zoom.us/j/92409687585

Zoom meeting ID: 92409687585

Leonardo Novelli, Joseph Lizier

 Transfer entropy is an established method for the analysis of directed relationships in neuroimaging data. In its original formulation, transfer entropy is a bivariate measure, i.e., a measure between a pair of elements or nodes [1]. However, when two nodes are embedded in a network, the strength of their direct coupling is not sufficient to fully characterize the transfer entropy between them. This is because transfer entropy results from network effects due to interactions between all the nodes.

In this theoretical work, we study the bivariate transfer entropy as a function of network structure, when the link weights are known. In particular, we use a discrete-time linear Gaussian model to investigate the contribution of small motifs, i.e., small subnetwork configurations comprising two to four nodes. Although the linear model is simplistic, it is widely used and has the advantage of being analytically tractable. Moreover, using this model means that our results extend to Granger causality, which is equivalent to transfer entropy for Gaussian variables.

We show analytically that the dependence of transfer entropy on the direct link weight is only a first approximation, valid for weak coupling. More generally, the transfer entropy increases with the in-degree of the source and decreases with the in-degree of the target, which suggests an asymmetry of information transfer between hubs and peripheral nodes.

Importantly, these results also have implications for directed functional network inference from time series, which is one of the main applications of transfer entropy in neuroscience. The asymmetry of information transfer suggests that links from hubs to peripheral nodes would generally be easier to infer than links between hubs, as well as links from peripheral nodes to hubs. This could bias the estimation of network properties such as the degree distribution and the rich-club coefficient.

In addition to the dependence on the in-degree, the transfer entropy is directly proportional to the weighted motifs involving common parents or multiple walks from the source to the target (Fig. 1). These motifs are more abundant in clustered or modular networks than in random networks, suggesting a higher transfer in the former case. Further, if the network has only positive edge weights, the transfer entropy correlates positively with the number of such motifs. This applies in the mammalian cortex (on average, since the majority of connections are thought to be excitatory), implying that directed functional network inference with transfer entropy is better able to infer links within brain modules (where such motifs enhance transfer entropy values) than links across modules.
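
For Gaussian variables, the bivariate transfer entropy can be computed from the residual variances of two linear regressions; the sketch below does this for a toy two-node system with a known coupling (illustrative weights, not the study's networks):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):                  # Y driven by X with weight 0.4
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()

def resid_var(target, predictors):
    beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    return np.var(target - predictors @ beta)

yt, yp, xp = y[1:], y[:-1, None], x[:-1, None]
var_reduced = resid_var(yt, yp)                  # condition on Y's past only
var_full = resid_var(yt, np.hstack([yp, xp]))    # condition on Y's and X's past
te = 0.5 * np.log(var_reduced / var_full)        # TE in nats (= Granger/2)
print(f"TE(X->Y) ~ {te:.3f} nats")
```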

References

1. Schreiber T. Measuring information transfer. Physical Review Letters. 2000, vol. 85, no. 2, pp. 461-464.
2. Novelli L, Wollstadt P, Mediano P, Wibral M, Lizier JT. Large-scale directed network inference with multivariate transfer entropy and hierarchical statistical testing. Network Neuroscience. 2019, vol. 3, no. 3, pp. 827-847.

Speakers
avatar for Leonardo Novelli

Leonardo Novelli

PhD Student, Centre for Complex Systems, The University of Sydney



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 10

8:00pm CEST

P125: A predictive coding model of transitive inference
https://meet.google.com/ssy-gevx-whi

Moritz Moeller, Rafal Bogacz, Sanjay Manohar

Transitive inference, deducing that "A is better than C" from the premises "A is better than B" and "B is better than C", is a basic form of deductive reasoning; both humans and animals are capable of it. However, the mechanism that enables transitive inference is not understood. This is partly due to the absence of a concrete, falsifiable formulation of the so-called cognitive explanation of transitive inference, which holds that subjects combine the facts they observe into a mental model that they then use for reasoning. In this work, we use the predictive coding method to derive a precise, mathematical implementation of the cognitive explanation of transitive inference (Fig. 1A shows a schematic representation of the model). We test our model by simulating a set of typical transitive inference experiments and show that it reproduces several phenomena observed in animal experiments. For example, our model reproduces the gradual acquisition of premise pairs (A > B, B > C) and the simultaneously emerging capability for transitive inference (A > C) (Fig. 1B). We expect this work to lead to novel testable predictions that will inspire future experiments and help uncover the mechanism behind transitive inference. Further, our work adds support to predictive coding as a universal organising principle of brain function.
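
For contrast with the authors' predictive-coding model (which is not reproduced here), even a much simpler value-learning baseline illustrates how transitive choices can emerge from premise training alone; everything in this sketch is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
v = {item: 0.0 for item in ['A', 'B', 'C']}   # scalar value per item
premises = [('A', 'B'), ('B', 'C')]           # "first is better than second"
lr = 0.1

for _ in range(200):                          # train on premise pairs only
    hi, lo = premises[rng.integers(len(premises))]
    # predicted probability that hi wins, from the current value difference
    p_hi = 1.0 / (1.0 + np.exp(-(v[hi] - v[lo])))
    v[hi] += lr * (1 - p_hi)                  # prediction-error updates
    v[lo] -= lr * (1 - p_hi)

print({k: round(val, 2) for k, val in v.items()})
print("transitive choice A over C:", v['A'] > v['C'])   # never trained directly
```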

 If problems arise, email: moritz.moeller@stx.ox.ac.uk




Speakers
avatar for Moritz Moeller

Moritz Moeller

PhD student, NDCN, University of Oxford



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 07

8:00pm CEST

P132: Preventing Retinal Ganglion Cell Axon Bundle Activation with Oriented Rectangular Electrodes
Wei Tong, Michael R Ibbotson, Hamish Meffin

Retinal prostheses can restore visual sensations in people who have lost their photoreceptors by electrically stimulating the surviving retinal ganglion cells (RGCs). Currently, three main types of retinal prostheses are under development, classified by their implantation location: epi-retinal, sub-retinal, and suprachoroidal [1]. Clinical studies of all three types of devices indicate that, although a sense of vision can be restored, the visual acuity obtained is limited, and functional vision, such as navigation and facial recognition, remains challenging. One major difficulty is the low spatial resolution obtained with electrical stimulation: the large spread of activation amongst RGCs leads to blurred or distorted visual percepts. In particular, with epi-retinal implants, experiments have revealed that the leading cause of widespread activation is the unintended activation of passing RGC axons, which leads to elongated phosphenes in patients [2].

This work proposes the use of rectangular electrodes oriented parallel to the axon bundles to prevent the activation of passing axon bundles. We first used simulation to investigate how the interaction of neural tissue orientation and stimulation electrode configuration shapes RGC activation patterns. A four-layer computational model of epiretinal extracellular stimulation that captures the effect of neurite orientation in anisotropic tissue was applied, as previously described [3], using a volume conductor model known as the cellular composite model. As shown in Figure 1a, our model shows that stimulating with a rectangular electrode aligned with the nerve fiber layer (i.e., the passing axon bundles) can achieve selective activation of axon initial segments rather than passing fibers.

The simulation results were then confirmed experimentally. Data were acquired from adult Long-Evans rats by recording the responses of RGCs in whole-mount retina preparations using calcium imaging. Electrical stimulation was delivered through a diamond-coated carbon fiber electrode with a length of 200 µm and a diameter of 10 µm. The electrode was placed either parallel or perpendicular to the RGC axon bundles. Biphasic stimuli with pulse durations of 33-500 µs were tested. Our experimental observations (Figure 1b) are consistent with the expectations from the simulations: rectangular electrodes placed parallel to axon bundles significantly reduce the activation of RGC axon bundles. When using biphasic stimulation as short as 33 µs, the activated RGCs were mostly confined to the region below or very close to the electrode, as observed using confocal microscopy.

To conclude, this work provides a stimulation strategy for reducing the spread of RGC activation for epi-retinal prostheses. Using ultrashort pulses together with rectangular electrodes parallel to the RGC axon bundles, the performance of epi-retinal prostheses will be improved significantly, thus promising to restore a higher quality of vision to the blind.

References

[1] J. D. Weiland et al. Annual Review of Vision Science, vol. 2, pp. 273-294, 2016.

[2] D. Nanduri et al. Investigative Ophthalmology & Visual Science, vol. 53, no. 1, pp. 205-214, 2012.

[3] T. B. Esler et al. PLoS ONE, vol. 13, no. 3, e0193598, 2018.

Speakers
avatar for Wei Tong

Wei Tong

Research Fellow, National Vision Research Institute, Melbourne
I received my BS in applied physics from University of Science and Technology of China in 2012, and my PhD in physics from the University of Melbourne in 2017. Since 2017, I have been working as a research fellow at the National Vision Research Institute of Australia. My research...



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 14

8:00pm CEST

P135: A cortical model examining mismatch negativity deficits in schizophrenia

Join Zoom Meeting here

Meeting ID: 845 3747 7335

Password: CNS2020

___________________________________________________

Gili Karni, Christoph Metzner

Recent advances in computational modeling, genome-wide association studies, neuroimaging, and theoretical neuroscience provide better opportunities to study neuropsychiatric disorders such as schizophrenia (SZC) [4]. However, despite repeated examination of its well-characterized phenotypes, our understanding of SZC's neurophysiological biomarkers and cortical dynamics remains elusive.

This study presents a biophysical spiking-neuron model of perceptual inference, based on the predictive coding framework [1]. The model, implemented in NetPyNE [6], incorporates single-cell models of both excitatory and inhibitory neurons [2,8], mimicking the circuits of the primary auditory cortex. The model allows exploration of the effects that biogenetic variants (expressed via alterations of ion channels or synaptic mechanisms; see [5]) have on auditory mismatch negativity (MMN) deficits, a common biomarker for SZC [3]. In particular, the model distinguishes between repetition suppression and prediction error and examines their respective contributions to the MMN. The first part of this report establishes the model's explanatory power using two well-known paradigms, the oddball paradigm and the cascade paradigm; both reproduce the electrophysiological measures of the MMN in healthy subjects. Then, by tuning the parameters of the single-neuron equations or the network's synaptic weights, the model exhibits the LFP changes associated with SZC [7].

This model therefore enables exploring how biogenetic alterations affect the underlying components of the observed MMN deficits. Novel, yet preliminary, predictions are presented, and future steps for their validation are suggested. The model could support studies exploring genetic effects on the MMN (or on other aspects of predictive coding) in the auditory cortex.
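
As a loose illustration of the distinction drawn above, the toy sketch below decomposes a simulated oddball response into an adaptation (repetition suppression) term and a surprisal (prediction error) term; it is a caricature for intuition only, not the NetPyNE model:

```python
import numpy as np

tones = np.array([0] * 9 + [1])              # 9 standards, 1 deviant
seq = np.tile(tones, 20)                      # oddball sequence

adapt = np.zeros(2)                           # per-tone adaptation state
p = np.array([0.5, 0.5])                      # learned tone probabilities
resp = []
for tone in seq:
    suppression = adapt[tone]                 # repetition-suppression term
    pe = -np.log(p[tone])                     # prediction error (surprisal)
    resp.append((1 - suppression) + pe)       # combined evoked response
    adapt *= 0.8                              # adaptation decays over trials
    adapt[tone] = min(1.0, adapt[tone] + 0.3) # and accumulates for this tone
    p = 0.95 * p + 0.05 * np.eye(2)[tone]     # slowly update predictions

resp = np.array(resp)
mmn = resp[seq == 1].mean() - resp[seq == 0].mean()
print(f"deviant - standard response (toy MMN): {mmn:.2f}")
```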


References 
[1]Bastos, A. M., Usrey, W. M., Adams, R. A., Mangun, G. R., Fries, P., & Friston, K. J. (2012). Canonical microcircuits for predictive coding. Neuron, 76(4), 695-711.
[2]Beeman, D. (2018). Comparison with human layer 2/3 pyramidal cell dendritic morphologies. Poster session presented at the meeting of Society for Neuroscience, San Diego.
[3]Garrido, M. I., Kilner, J. M., Stephan, K. E., & Friston, K. J. (2009). The mismatch negativity: a review of underlying mechanisms. Clinical neurophysiology, 120(3), 453-463.
[4]Krystal, John H., et al. "Computational psychiatry and the challenge of schizophrenia." (2017): 473-475.
[5]Mäki-Marttunen, T., Halnes, G., Devor, A., et al. (2016). Functional effects of schizophrenia-linked genetic variants on intrinsic single-neuron excitability: a modeling study. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 1(1), 49-59.
[6]Dura-Bernal S, Suter B, Gleeson P, Cantarelli M, Quintana A, Rodriguez F, ... & McDougal R. NetPyNE: a tool for data-driven multiscale modeling of brain circuits. bioRxiv 2018, 461137.
[7]Michie, P. T., Malmierca, M. S., Harms, L., & Todd, J. (2016). The neurobiology of MMN and implications for schizophrenia. Biological psychology, 116, 90-97.
[8]Vierling-Claassen, D., Cardin, J., Moore, C. I., & Jones, S. R. (2010). Computational modeling of distinct neocortical oscillations driven by cell- type selective optogenetic drive: separable resonant circuits controlled by low-threshold spiking and fast-spiking interneurons. Frontiers in human neuroscience, 4, 198.

Speakers
avatar for Gili Karni

Gili Karni

Minerva schools at KGI
I am curious about the emergence of human intelligence. Specifically, I am fascinated by the flexibility, speed, and resilience of human learning and how it informs decision making. I hope to explore the algorithms underlying these processes using computational methods. Besides research...



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 06

8:00pm CEST

P142: Using reinforcement learning to train biophysically detailed models of visual-motor cortex to play Atari games

Join Zoom Meeting
https://us04web.zoom.us/j/2664851890?pwd=M0lhekQvc1BZMFpGYWVIajZNQTdnZz09

Meeting ID: 266 485 1890
Passcode: 6H7hTB


  Haroon Anwar, Salvador Dura-Bernal, Cliff C. Kerr, George L. Chadderdon, William W Lytton, Peter Lakatos, Samuel A. Neymotin
Computational neuroscientists build biophysically detailed models of neurons and neural circuits primarily to understand the origin of dynamics observed in experimental data. Many of these efforts are dedicated to matching the ensemble activity of neurons in the modeled brain region, while often ignoring multimodal information flow across brain regions and the associated behaviors. Although such efforts have improved our mechanistic understanding of the electrophysiological behavior of diverse types of neurons and neural networks, they fall short of linking detailed models with associated behaviors in a closed-loop setting. In this study, we bridge that gap by developing biophysically detailed multimodal models of the brain regions involved in processing visual information, generating motor behaviors, and making associations between visual and motor neural representations through reward-based learning mechanisms.

We built a simple model of visual cortex receiving topological inputs from the interfaced Atari game 'Pong' (provided by OpenAI Gym). This modeled region processed, integrated, and relayed visual information about the game environment across a hierarchy of higher-order visual areas (V1/V2 -> V4 -> IT). Moving from V1 to IT, the number of neurons in each area decreased, whereas the number of synaptic connections increased. This feature was included to reflect the anatomical convergence suggested in the literature and to produce broader tuning for input features in progression up the visual cortical hierarchy. We used compartmental models of both excitatory and inhibitory neurons interconnected via AMPA (excitatory) or GABA (inhibitory) synapses. The strengths of synaptic connections were adjusted so that information was reliably transmitted across visual areas. In our motor cortex model, neurons associated with a particular motor action were grouped together and received inputs from all visual areas. For the game Pong, we used two populations of motor neurons, generating “up” and “down” move commands. All synapses between visual and motor cortex were plastic, so that connection strengths could be increased or decreased via reinforcement learning. When an action generated in the model of motor cortex, driven by the visual representation of the environment, produced a move in the game, the environment was updated and returned a response to the action: reward (+1), punishment (-1), or no response (0). These signals drove reinforcement learning at the synapses between visual and motor cortex, strengthening or weakening them so that the model could learn which actions were rewarding in a given environment.

Here we present an exploratory analysis as a proof of concept for using biophysically detailed models of neural circuits to solve problems that have so far only been tackled using artificial neural networks. We aim to use this framework both to simplify the model and make it more deep-learning-like, and to extend the architecture to make it more biologically realistic. Comparing the performance of trained models using different architectures will allow us to dissect the mechanisms underlying the production of behavior and to bridge the gap between the electrophysiological dynamics of neural circuits and associated behaviors.
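
The closed-loop interface described above can be pictured with the classic (2020-era) Gym API; in the sketch below a random policy stands in for the biophysical visual-motor model, and the reward returned by the environment is the signal that would gate synaptic plasticity:

```python
import gym  # requires the Atari extras: pip install gym[atari]

env = gym.make('Pong-v0')
obs = env.reset()                      # initial frame (210x160x3 RGB array)
total_reward = 0.0
for _ in range(1000):
    action = env.action_space.sample()  # placeholder for motor-cortex output
    # classic (pre-0.26) Gym API: step returns a 4-tuple; reward is -1/0/+1
    obs, reward, done, info = env.step(action)
    total_reward += reward              # this signal would drive RL plasticity
    if done:
        obs = env.reset()
env.close()
print(f"total reward: {total_reward}")
```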

Speakers
HA

Haroon Anwar

Nathan Kline Institute for Psychiatric Research


CNS202 pdf

Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 19

8:00pm CEST

P144: Closed loop parameter estimation using GPU acceleration with GeNN
Google Meets: https://meet.google.com/eku-dfxf-ozz 
 

Felix B. Kern, Michael Crossley, Rafael Levi, György Kemenes, Thomas Nowotny


  A common approach to understanding neuronal function is to build accurate and predictive models of the excitable membrane. Models are typically based on voltage clamp data where ion channels of different types are pharmacologically isolated and the stationary state and timescale of (in)activation are estimated based on the transmembrane currents observed in response to a set of constant voltage steps. The basic method can be extended with different stepping protocols or input waveforms and by performing parameter fits on the full time series. Further improvements are achieved with parameter estimation on additional current clamp data, an active field of research. Some examples of employed estimation approaches include adaptive coupling to synchronise the model to data, driving neurons with chaotic input signals, and using distributions of parameter values in a path integral method.


Enabled by our GPU-enhanced neural networks (GeNN) framework [1], we here present work that makes a different conceptual advance: performing parameter estimation in an online, closed-loop fashion while the neuron is being recorded. In doing so, we can select stimulations that are highly informative for the parameter estimation process at any given time. We can also track time-dependent parameters by observing how parameter estimates develop over time.

To demonstrate our new method we use the model system of the B1 motor cell in the buccal ganglion of the pond snail _Lymnaea stagnalis_. Neurons are recorded with two sharp electrodes in current clamp mode. We have built a conductance based initial model from a published set of Hodgkin-Huxley conductances [2], using standard parameter estimation methods and data we obtained with a simple set of current steps. To perform closed loop parameter estimation, we use a genetic algorithm (GA) in which a population of 8192 model neurons with candidate parameter values is simulated on a GPU (NVIDIA Tesla K40c) in parallel and in real time. Models are then compared to the response of the recorded neuron and selected for goodness of fit, as is standard for a GA approach. The novel element of our method is the next step, where we evaluate a pool of candidate stimuli against the model population, selecting the stimulus with the most diverse responses for the next epoch. The selected stimulus is then applied to both the recorded neuron and the population of models and the normal GA procedure continues.
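
The logic of the stimulus-selection step can be sketched in a few lines of NumPy; the toy "neuron" and all numbers below are illustrative stand-ins for the GPU-simulated conductance-based population:

```python
import numpy as np

rng = np.random.default_rng(3)
true_p = np.array([1.2, -0.7])                   # hidden "cell" parameters

def respond(params, stim):                       # toy response model
    return params[..., 0] * stim + params[..., 1] * stim**2

pop = rng.normal(0, 1, size=(8192, 2))           # candidate parameter sets
stim_pool = np.linspace(-2, 2, 21)               # pool of candidate stimuli

for epoch in range(30):
    # pick the stimulus over which the current models disagree the most
    diversity = respond(pop[:, None, :], stim_pool).std(axis=0)
    stim = stim_pool[np.argmax(diversity)]
    # apply it to the "cell" and to all models; keep the best-fitting half
    target = respond(true_p, stim) + 0.05 * rng.standard_normal()
    err = np.abs(respond(pop, stim) - target)
    survivors = pop[np.argsort(err)[: pop.shape[0] // 2]]
    # refill the population by mutating survivors (the GA step)
    pop = np.concatenate([survivors,
                          survivors + 0.1 * rng.standard_normal(survivors.shape)])
print("estimate:", pop.mean(axis=0).round(2), "true:", true_p)
```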


Fig. 1 shows a representative example of online fitting to a neuron. We first fit a set of 52 parameters to fine-tune model kinetics to the cell under stationary conditions. Then, we restricted fitting to non-kinetic parameters (maximum conductances, equilibrium potentials, and capacitance) and continued to run the algorithm described above, while at the same time manipulating sodium levels in the extracellular bath. The online fitting procedure can detect and track the change in sodium concentration as putative changes in sodium conductance and reversal potential.


Acknowledgments 
This work was partially supported by the EU under Grant Agreement 785907 (HBP SGA2).

References

[1] Yavuz E, Turner JP, Nowotny T. GeNN: a code generation framework for accelerated brain simulations. Scientific Reports 2016, 6, 18854.
[2] Vehovszky A, Szabo H, Elliott CJH. Octopamine increases the excitability of neurons in the snail feeding system by modulation of inward sodium current but not outward potassium currents. BMC Neurosci. 2005, 6, 70.

Speakers
avatar for Felix B. Kern

Felix B. Kern

Research Fellow, IRCN, UTokyo, Japan



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 03

8:00pm CEST

P147: Seizure Forecasting from long-term EEG and ECG Data using the Critical Slowing Principle
Wendy Xiong, Tatiana Kameneva, Elisabeth Lambert, Ewan Nurse


Epilepsy is a neurological disorder characterized by recurrent seizures, which are transient symptoms of abnormally synchronous neuronal activity in the brain. Epilepsy affects more than 50 million people worldwide [1]; in Australia, over 225,000 people live with epilepsy [2]. Seizure prediction allows patients and caregivers to deliver early interventions and prevent serious injuries. Electroencephalography (EEG) has been used to predict seizure onset, with varying success between participants [3,4]. There is increasing interest in using the electrocardiogram (ECG) to help with seizure detection and prediction. The aim of this study is to use long-term continuous recordings of EEG and ECG data to forecast seizures.

EEG and ECG data from 7 patients were used for analysis. Data were recorded by Seer using 21 EEG electrodes and 3 ECG electrodes with an ambulatory video-EEG-ECG system. The average recording period was 95 hours (range 51-160 hours). Data were annotated by a clinician to indicate seizure onset and offset. On average, 4 clinical seizures occurred per participant (range 2-10). EEG and ECG data were bandpass filtered using a Butterworth filter (1-30 Hz for EEG, 3-45 Hz for ECG).

A characteristic of a system that is nearing a critical transition is critical slowing, which refers to the tendency of the system to take longer to return to equilibrium after perturbations, measured by an increase in signal variance and autocorrelation [5]. The variance and autocorrelation of the EEG and ECG signals were calculated for each electrode in a 1 s window at each time point, with the autocorrelation summarized as the width at half maximum of the autocorrelation function. The instantaneous phases of the variance and autocorrelation signals were calculated at each time point using the Hilbert transform. To extract long (1 day) and short (20 s in EEG, 10 min in ECG) cycles in the variance and autocorrelation signals, a moving-average filter was applied. The relationship between seizure onset times and the phase of the variance and autocorrelation signals was investigated in long and short cycles. The probability distribution of seizure occurrence was determined for each time point, and seizure likelihood was classified at three levels (low, medium, and high) based on two thresholds defined as functions of the maximum seizure probability. Data analysis was performed in Python 3.
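
A minimal sketch of these critical-slowing features on a toy signal (assumed sampling rate and window sizes, not the clinical data pipeline) might look as follows:

```python
import numpy as np
from scipy.signal import hilbert

fs = 256                                   # Hz (assumed sampling rate)
rng = np.random.default_rng(4)
x = rng.standard_normal(fs * 600)          # 10 min of toy "EEG"

win = fs                                   # 1 s windows, as in the abstract
n = x.size // win
segs = x[: n * win].reshape(n, win)
variance = segs.var(axis=1)
# lag-1 autocorrelation as a simple stand-in for the width-at-half-maximum
autocorr = np.array([np.corrcoef(s[:-1], s[1:])[0, 1] for s in segs])

# moving-average smoothing, then instantaneous phase via Hilbert transform
k = 30
var_smooth = np.convolve(variance - variance.mean(), np.ones(k) / k, mode='same')
phase = np.angle(hilbert(var_smooth))
print(f"variance phase at last window: {phase[-1]:.2f} rad")
```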

Results show that the variance and autocorrelation of the EEG data increased at the time of seizure onset in 66.7% and 68.3% of cases, respectively. The variance and autocorrelation of the ECG data increased at the time of seizure onset in 60% and 50% of cases, respectively. Long and short cycles of variance and autocorrelation gave consistent results. These results indicate that critical slowing may be present in a neural system during seizures and that this feature could be used to forecast seizures.

References

[1] Thijs et al. Epilepsy in adults. The Lancet, 2019.

[2] Facts and Statistics, Epilepsy Action Australia, www.epilepsy.org.au

[3] Cook et al. Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study. The Lancet Neurology, 2013.

[4] Karoly et al. The circadian profile of epilepsy improves seizure forecasting. Brain, 2017.

[5] Scheffer et al. Early-warning signals for critical transitions. Nature, 2009.

Speakers
WX

Wendy Xiong

Swinburne University of Technology


poster pdf

Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 16

8:00pm CEST

P16: A computational model for time cell-based learning of interval timing
Sorinel Oprisan, Tristan Aft, Michael Cox, Mona Buhusi, Catalin Buhusi

Zoom meeting link:  https://cofc.zoom.us/j/4096287484
P108 from 2:00-2:30 PM
P16 from 2:30-3:00 PM

Lesion and pharmacological studies have found that interval timing is an emergent property of an extensive neural network that includes the prefrontal cortex (PFC), the basal ganglia (BG), and the hippocampus (HIP). We used our Striatal Beat Frequency (SBF) model with a large number of PFC oscillators to produce beats via the coincidence detection performed by the BG [1,2]. The response of the PFC-BG neural network provides an output that (1) accurately identifies the criterion time, i.e., the time at which reinforcement was presented during reinforced trials, and (2) is scalar, i.e., the prediction error is proportional to the criterion time. We found that, although the PFC-BG network can create beats, the accuracy of the timing depends on the number of PFC oscillators and the frequency range they cover [4].

The ability to discriminate between multiple durations requires a metric space in which durations can be compared. We hypothesized that time cells, which were recently discovered in the hippocampus and which ramp up their firing when the subject is at a specific temporal marker in a behavioral test, can offer a time base for interval timing. We expanded the SBF model by incorporating HIP time cells that (1) provide a natural time base and (2) could be the cellular root of the scalar property of interval timing observed in all behavioral experiments (but see [5]). Our model of interval-timing learning assumes two stages to this process. First, during the reinforced trials, the subject learns the boundaries of the temporal duration. This process is similar to the HIP place-cell activity that first forms an accurate spatial map of the edges of the environment. Subsequently, the time cells are recruited to cover the entire to-be-timed duration uniformly. Without any learning rule, i.e., without any feedback from the PFC-BG network, the population of time cells simply produces a uniform average time field. In our computational model, the learning rule requires the HIP time cells to adjust their activity to mirror the output of the PFC-BG network. A plausible mechanism for the modulation of HIP time-cell activity could involve dopamine released during the reinforced trials. We tested different learning rules numerically and found that one of the most efficient, in terms of the number of trials required until convergence, is a diffusion-like, or nearest-neighbor, algorithm.
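
The core SBF idea, coincidence detection against a pattern of oscillator phases stored at the criterion time, can be sketched as follows (illustrative frequencies and readout, not the full model):

```python
import numpy as np

t = np.arange(0, 60, 0.01)                 # time within a trial (s)
rng = np.random.default_rng(5)
freqs = rng.uniform(5, 12, 40)             # a bank of "PFC" oscillator rates (Hz)
phases = np.cos(2 * np.pi * freqs[:, None] * t)   # oscillators reset at onset

criterion = 20.0                           # s, time of reinforcement
template = np.cos(2 * np.pi * freqs * criterion)  # pattern stored at reward

# coincidence detection: match the ongoing pattern against the stored template
output = (phases * template[:, None]).mean(axis=0)
print(f"peak response at t = {t[np.argmax(output)]:.2f} s "
      f"(criterion {criterion} s)")
```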

References

[1] Oprisan SA, Aft T, Buhusi M, and Buhusi CV, Scalar timing in memory: A temporal map in the hippocampus, J. Theor. Biol. 2018, 438:133 – 142.

[2] Oprisan SA, Buhusi M, and Buhusi CV, A Population-Based Model of the Temporal Memory in the Hippocampus, Front. Neurosci. 2018, 12:521.

[3] Buhusi CV, Oprisan SA, Buhusi M. Clocks within Clocks: Timing by Coincidence Detection. Curr Opin Behav Sci. 2016, 8: 207-213.

[4] Buhusi CV, Reyes MB, Gathers CA, Oprisan SA, Buhusi M. Inactivation of the Medial-Prefrontal Cortex Impairs Interval Timing Precision, but Not Timing Accuracy or Scalar Timing in a Peak-Interval Procedure in Rats. Front Integr Neurosci. 2018, 12:20.

[5] Oprisan SA, Buhusi CV. What is all the noise about in interval timing? Philos Trans R Soc Lond B Biol Sci. 2014, 369(1637): 20120459.

Speakers
avatar for Sorinel Oprisan

Sorinel Oprisan

Professor, Department of Physics and Astronomy, College of Charleston


timing pdf

Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 17

8:00pm CEST

P183: Universal fingerprints of slow-wave activity in in vivo, in vitro and in silico cortical networks
Looking forward to presenting my work here: https://meet.google.com/epq-nghf-xue

Abstract:


Alessandra Camassa, Andrea Galluzzi, Miguel Dasilva, Beatriz Rebollo, Maurizio Mattia, Maria V. Sanchez-Vives

The cerebral cortex as a structured network is able to spontaneously express different types of dynamics that are continuously changing over time according to the ongoing brain state. Transitions across brain states correlate with changes in network excitability and functional connectivity, giving rise to a wide repertoire of spatiotemporal patterns of neuronal activity [1]. The quasi-periodic occurrence of travelling waves, namely slow-wave activity (SWA), characterizes cortical networks in unconscious brain states. The spatiotemporal patterns generated under SWA are shaped by the structure and excitability of the underlying network [2,3]. Thus, the emergent wavefronts portray the characteristics of the dynamical regime under which they were spawned.

Here we aimed to develop novel analytical methods to capture wave propagation features and to identify the universal fingerprints of cortical network activity generated by different preparations, all spontaneously expressing SWA, in order to gain a deeper understanding of the functional mechanisms underlying cortical network organization. To do so, we studied the spatiotemporal dynamics of the cortex under SWA in three different frameworks: _in vivo_, performing extracellular recordings of cortical activity in deeply anesthetized mice with a superficial multielectrode array; _in vitro_, recording electrophysiological signals from cortical slices cut from ferret visual cortex; and _in silico_, in a simulated multimodular network of spiking neurons [2,4].

We studied network dynamics by characterizing the spatiotemporal patterns of propagation of the activation wavefronts, developing a phase-based method that allows an accurate reconstruction of the waves travelling across the cortex in both experimental and simulated data [5]. We complemented the study of network dynamics with the computation of network synchronization over time, evaluating the variability of ongoing synchrony fluctuations that entail dynamically changing states, in our case Up and Down states of SWA. Finally, we evaluated the dynamical richness of the cortical activity by estimating the dimensionality of the system dynamics over time, adopting an approach drawn from experimental fluid dynamics [6]. Applying an empirical eigenfunction approach by means of the Singular Value Decomposition (SVD) algorithm, it is possible to quantify the instantaneous energy of the system and its effective dimension, and to study the evolution of the system dimension over time as well as its dependence on the structure and on the dynamical state of the system. In this way, we were able to compare the mechanistic underpinnings of SWA when the intact cortex is functionally disconnected (_in vivo_ under deep anesthesia) and when it is anatomically disconnected from the rest of the brain (_in vitro_ in cortical slices) and, finally, exploiting the model, to emphasize the universal nature of this slow rhythm, highlighting both the differences and similarities between experimental conditions.
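A minimal sketch of the SVD-based dimensionality estimate described above, assuming a sliding-window participation-ratio reading of the empirical eigenfunction approach [6] (the window length and synthetic data are placeholders, not the study's settings):

```python
import numpy as np

def effective_dimension(X):
    """Participation ratio of modal energies for a (channels x time) window."""
    X = X - X.mean(axis=1, keepdims=True)
    s = np.linalg.svd(X, compute_uv=False)
    energy = s ** 2                    # instantaneous energy per SVD mode
    p = energy / energy.sum()
    return 1.0 / np.sum(p ** 2)        # 1 if one mode dominates, up to rank(X)

rng = np.random.default_rng(1)
data = rng.standard_normal((32, 5000))  # fake 32-channel recording
win, hop = 500, 250                     # sliding-window length and step
dims = [effective_dimension(data[:, i:i + win]) for i in range(0, 4500, hop)]
print(np.round(dims, 1))                # dimension of the dynamics over time
```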

Acknowledgements

Funded by the European Union's Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3) and by MINECO grant BFU2017-85048-R.

References

[1] Stitt et al., Sci. Rep. 7(1), 1 (2017).

[2] Capone et al., Cereb. Cortex 29(1), 319 (2019).

[3] Barbero et al., Brain Stimul. 12(2), e97 (2019).

[4] Mattia et al., J. Neurosci. 33(27), 11155 (2013).

[5] Muller et al., Nat. Commun. 5(2), 3675 (2014).

[6] Schiff et al., PRL 98(17), 178702 (2007).

Speakers
avatar for Alessandra Camassa

Alessandra Camassa

Systems Neuroscience, Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS)



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 01

8:00pm CEST

P186: Inhibitory neurons are located at the center of effective cortical networks and have a high ability to control other neurons.

Information for the meeting on Zoom

https://kyoto-u-edu.zoom.us/j/94228403499?pwd=U0N2cnRRa3RUT1lqMWE4VW05WFJ4QT09

meeting ID : 942 2840 3499
password   : 985511

_________________________________________________________________________________________________

Motoki Kajiwara, Masanori Shimono

The brain is a network system in which excitatory and inhibitory neurons keep the activity balanced within the highly non-uniform connectivity pattern of the microconnectome. It is well known that the relative percentage of inhibitory neurons is much smaller than that of excitatory neurons. How inhibitory neurons can keep the balance with the surrounding excitatory neurons is therefore an important general question.
This study simultaneously recorded electric signals from ~1000 neurons in seven acute brain slices of mice with an MEA (multi-electrode array) to analyze the network architectures of cortical neurons. Subsequently, we analyzed the spike data to reconstruct the causal interaction networks between the neurons from their spiking activities. The analysis consists of four main steps. First, transfer entropy was adopted from previous research to reconstruct the neural network. Briefly, transfer entropy quantifies the amount of information transferred between neurons and is suitable for the effective connectivity analysis of neural networks. This allowed us to elucidate the microconnectome and the comprehensive, quantitative characteristics of interaction networks among neurons. Second, our study distinguishes between excitatory and inhibitory synapses using a newly developed method called sorted local transfer entropy. Third, we applied methods from graph theory to evaluate the network architecture; in particular, we observed the precedence of inhibitory neurons in centrality and controlling ability. Centrality was quantified with k-core centrality, and controlling ability was quantified as the ratio of nodes included in FVSs (feedback vertex sets). Fourth, we stained the acute brain slices and assigned layer labels to individual neurons. Further detail will be given in [1].
As a result, by comparison with the distribution of neurons labelled by NeuN immunostaining, we found that inhibitory neurons, which are highly central and have a strong ability to control other neurons, are mainly located in deep cortical layers. We also found that inhibitory neurons show higher firing rates than excitatory neurons, and that their firing rates closely obey a log-normal distribution, as previously reported for excitatory neurons. Additionally, their connectivity strengths also obeyed a log-normal distribution.
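To make the first step concrete, here is a toy pairwise transfer entropy on binarized spike trains with one-bin histories (our simplification; the sorted local transfer entropy used in the study is a more elaborate variant):

```python
import numpy as np
from collections import Counter

def transfer_entropy_bits(x, y):
    """TE(X->Y) in bits for 0/1 spike trains, one-bin histories (k = l = 1)."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_t+1, y_t, x_t) counts
    n = sum(triples.values())
    te = 0.0
    for (y1, y0, x0), cnt in triples.items():
        p_joint = cnt / n
        p_y0x0 = sum(v for (_, b, a), v in triples.items() if b == y0 and a == x0) / n
        p_y0 = sum(v for (_, b, _a), v in triples.items() if b == y0) / n
        p_y1y0 = sum(v for (a, b, _c), v in triples.items() if a == y1 and b == y0) / n
        te += p_joint * np.log2((p_joint / p_y0x0) / (p_y1y0 / p_y0))
    return te

rng = np.random.default_rng(0)
x = (rng.random(10000) < 0.1).astype(int)    # presynaptic spike train
y = np.roll(x, 1)                             # y copies x with a one-bin delay
y[0] = 0

print(round(transfer_entropy_bits(x, y), 3))  # clearly positive: x drives y
print(round(transfer_entropy_bits(y, x), 3))  # ~0 in the reverse direction
```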

Acknowledgements: This study was supported by several MEXT fundings (19H05215, 17K19456) and Leading Initiative for Excellent Young Researchers (LEADER) program, and grants from the Uehara Memorial Foundation.

References
1. Kajiwara M, Nomura R, Goetze F, Akutsu T, Shimono M. Inhibitory neurons are a Central Controlling regulator in the effective cortical microconnectome. bioRxiv. 2020.



Speakers
MK

Motoki Kajiwara

Master course, Kyoto University



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 01

8:00pm CEST

P187: Modulation of the hierarchical gradient of cognitive information processing dynamics during rest and task
Zoom link: 
https://uni-sydney.zoom.us/j/8185360304

Oliver Cliff, Mike Li, Dennis Hernaus, Lianne Scholtens, Eli Müller, Brandon Munn, Gabriel Wainstein, Ben Fulcher, Joseph Lizier, James Shine

Cognition involves the dynamic adaptation of information processing resources as a function of task demands. To date, the neural mechanisms responsible for mediating this process remain poorly understood. In this study, we integrated cognitive neuroscience with information theory, network topology and neuropharmacology to advance our understanding of the fundamental computational processes that give rise to cognitive function in the human brain. In our first experiment, we consider the contrast between dynamic whole-brain blood oxygen level dependent (BOLD) data from both the resting state and a cognitively-challenging N-back task from the Human Connectome Project (N = 457) [1,2]. We translated the raw BOLD activity levels into time series that represent the dynamics of neural information processing by measuring information flows (pairwise between regions, using transfer entropy) and information storage (self-prediction in individual regions, using active information storage) as a function of time throughout the experiment [3]. Our results show that cognitive task performance alters the whole-brain information-processing landscape in a low-dimensional manner: during rest, information flowed from granular to agranular cortices, whereas this pattern was reversed during performance of the N-back task. These contrasting gradients of information flow reflect the difference between a stronger "bottom-up" mode during rest (with inputs from sensory cortices sent up for interpretation as the dominant flow) versus a stronger "top-down" mode during task (where task performance is facilitated by higher-level control and the increase of associated flows). To test a hypothesized mechanism for this switch [4], we modulated central noradrenaline levels in a double-blind, cross-over atomoxetine pharmacological fMRI study (N = 19) [5]. We found that potentiating the noradrenergic system altered the information processing dynamics by augmenting information transfer to and from the frontoparietal cortices. Together, our results provide a conceptual bridge between cognitive function, network topology, information theory and the ascending neuromodulatory arousal system.

References

1. Barch, D.M., Burgess, G.C., Harms, M.P., et al. Function in the human connectome: task-fMRI and individual differences in behavior. NeuroImage. 2013, 80, 169–189.

2. Glasser, M.F., Sotiropoulos, S.N., Wilson, J.A., et al. The minimal preprocessing pipelines for the Human Connectome Project. NeuroImage. 2013, 80, 105–124.

3. Lizier, J.T. JIDT: An information-theoretic toolkit for studying the dynamics of complex systems. Frontiers in Robotics and AI. 2014, 1, 11.

4. Shine, J.M., Aburn, M.J., Breakspear, M., Poldrack, R.A. The modulation of neural gain facilitates a transition between functional segregation and integration in the brain. eLife. 2018, 7, e31130.

5. Hernaus, D., Casales Santa, M.M., Offermann, J.S., Van Amelsvoort, T. Noradrenaline transporter blockade increases fronto-parietal functional connectivity relevant for working memory. Eur Neuropsychopharmacol. 2017, 27, 399–410.
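A hedged sketch of the two information-theoretic quantities contrasted in this abstract, using linear-Gaussian estimators with history length 1 (the study used the JIDT toolkit [3]; everything below is a toy stand-in, not its API):

```python
import numpy as np

def mi_gaussian(a, b):
    """Mutual information (nats) of two signals under a Gaussian assumption."""
    r = np.corrcoef(a, b)[0, 1]
    return -0.5 * np.log(1.0 - r ** 2)

def active_info_storage(y):
    return mi_gaussian(y[1:], y[:-1])          # I(y_t ; y_{t-1})

def transfer_entropy(x, y):
    """TE(X->Y) = I(y_t ; x_{t-1} | y_{t-1}) for jointly Gaussian signals."""
    Z = np.column_stack([y[1:], x[:-1], y[:-1]])
    c = np.cov(Z, rowvar=False)
    det = np.linalg.det
    s_ac = det(c[np.ix_([0, 2], [0, 2])])      # |cov(y_t, y_{t-1})|
    s_bc = det(c[np.ix_([1, 2], [1, 2])])      # |cov(x_{t-1}, y_{t-1})|
    return 0.5 * np.log(s_ac * s_bc / (det(c) * c[2, 2]))

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)                  # "source" region
y = np.zeros_like(x)
for i in range(1, len(x)):                     # y stores itself and receives x
    y[i] = 0.6 * y[i - 1] + 0.4 * x[i - 1] + 0.1 * rng.standard_normal()

print("AIS(y)   =", round(active_info_storage(y), 3))
print("TE(x->y) =", round(transfer_entropy(x, y), 3))
print("TE(y->x) =", round(transfer_entropy(y, x), 3))   # near zero
```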

Speakers
avatar for Oliver Cliff

Oliver Cliff

School of Physics, The University of Sydney
avatar for Joseph Lizier

Joseph Lizier

Associate Professor, Centre for Complex Systems, The University of Sydney
My research focusses on studying the dynamics of information processing in biological and bio-inspired complex systems and networks, using tools from information theory such as transfer entropy to reveal when and where in a complex system information is being stored, transferred and... Read More →



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 12

8:00pm CEST

P190: Neural Field Theory: Modelling development of topography via activity-based mechanisms.
DISCUSSION LINK: https://meet.google.com/rti-qypu-zdi


Nicholas Gale, Michael Small

Topographic maps are brain structures which connect two regions [1]. These maps are essential features of primary sensory signal processing. A prototypical animal model of such a system is the mouse retinotopic map [2]. Topography is developed using three distinct mechanisms: chemotaxis, competition, and activity-based refinement [3]. Chemotaxis establishes a coarse topography with broad dendritic arbors, which is followed by three stages of spontaneously generated waves of electrical activity in the retina: first at E16-P0, then from P0-P11, and finally from P11-P14 [4]. These three periods have distinct spatio-temporal characteristics and likely perform different functions in the development of the retinotopic system. They are concurrent with electrical activity in the superior colliculus (SC), and the correlations between these signals guide Hebbian plasticity to make the refinement. Unified models of activity and genetics have found success in predicting the effects of chemical perturbations, but not activity-based perturbations [5]. The activity mechanism in these models condenses the activity into a purely spatial and radially symmetric isotropic form.

A good model of electrical activity in brain regions with lateral connectivity and dense homogeneous cell types, such as those in the SC, is neural field theory (NFT) [6]. A theoretical framework of Hebbian-based plasticity that can incorporate time signatures of activity has been developed for NFT [7]. This framework allows the incorporation of a more accurate and complete description of spatio-temporally varying waves. In this paper we demonstrate that NFT can support the refinement and establishment of precise topography via waves of propagating activity and biologically reasonable Hebbian learning rules, and we therefore establish it as a useful model for studying the development of topographic systems.

We develop an analytical solution to the field equation by first linearizing the sigmoid activation function. We then proceed with computational analysis of three key parameters: the width of the wave stimulus, wave-speed, and the width of the lateral connections. Finally, we discuss the limitations of the model, implications of these results in the context of the β2 knock-out (an activity perturbation), and future directions.
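For concreteness, the field equation in question is presumably of the standard Amari type [6] (notation ours; the authors' exact form may differ):

```latex
\tau \frac{\partial u(\mathbf{x},t)}{\partial t}
  = -u(\mathbf{x},t)
  + \int_{\Omega} w(\mathbf{x}-\mathbf{x}')\, f\big(u(\mathbf{x}',t)\big)\, d\mathbf{x}'
  + I(\mathbf{x},t),
\qquad
f(u) = \frac{1}{1+e^{-\beta(u-\theta)}}
```

Linearizing the sigmoid f about its threshold turns the convolution equation into a linear system amenable to Fourier methods, which is presumably the analytical route described above.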

[1] S B Udin and J W Fawcett. Formation of topographic maps. Annu. Rev. Neurosci., 11:289-327, 1988.

[2] James B Ackman, Timothy J Burbridge, and Michael C Crair. Retinal waves coordinate patterned activity throughout the developing visual system. Nature, 490(7419):219-225, 2012.

[3] Jianhua Cang and David A Feldheim. Developmental mechanisms of topographic map formation and alignment. Annu. Rev. Neurosci., 36:51-77, 2013.

[4] A Bansal, J H Singer, B J Hwang, W Xu, A Beaudet, and M B Feller. Mice lacking specific nicotinic acetylcholine receptor subunits exhibit dramatically altered spontaneous activity patterns and reveal a limited role for retinal waves in forming ON and OFF circuits in the inner retina. J. Neurosci., 20(20):7672-7681, 2000.

[5] J J Johannes Hjorth, David C Sterratt, Catherine S Cutts, David J Willshaw, and Stephen Eglen. Quantitative assessment of computational models for retinotopic map formation. Dev. Neurobiol., 75(6):641-666, 2015.

[6] S Amari. Dynamics of pattern formation in lateral-inhibition type neural fields. Biol. Cybern., 27(2):77-87, 1977.

[7] P A Robinson. Neural field theory of synaptic plasticity. J. Theor. Biol., 285(1):156-163, 2011.

Speakers
avatar for Nicholas Gale

Nicholas Gale

Applied Mathematics and Theoretical Physics, University of Cambridge



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 18

8:00pm CEST

P194: Pattern separation based on rate coding in a biologically detailed cerebellar network model
Google Meet session: Session closed. Thank you for your participation!
Feel free to discuss further by email to the address on the poster PDF.


Ohki Katakura, Reinoud Maex, Shabnam Kadir, Volker Steuber

The cerebellum is involved in motor learning, temporal information processing and cognition. Inspired by the well-characterised anatomy of the cerebellum, several network models and theories of cerebellar function have been developed, such as the Marr-Albus-Ito theory of cerebellar learning. However, although morphologically realistic cerebellar neuronal models with realistic ion channel dynamics exist in isolation, a complete cerebellar cortical model comprising such biologically detailed neurons is still missing. Sudhakar et al. have implemented a cerebellar granular layer (GL) model composed of biologically detailed granule and Golgi cells (GrCs and GoCs) [1]. Here, we modified this model and integrated it with a multi-compartmental Purkinje cell (PC) model, which included detailed Hodgkin-Huxley type representations of ion channels [2]. The original GL model had a length of 1.5 mm along the transversal axis. As parallel fibres (PFs), the axons of GrCs, extend for 2.0 mm along this axis, we rescaled the GL network model to 4.0 mm in the transversal direction and placed the dendritic tree of the PC model at the centre of the network. Additionally, to reduce the computational requirements, we employed a sparser density of 1.92 million GrCs per mm³ in our GL model. Each spine of the PC model was connected to the nearest PF within the sagittal-vertical plane, which resulted in 143,725 PF inputs to the PC model. Inhibitory input from molecular layer interneurons (MLIs) to the PC was modelled implicitly by providing inhibitory Poisson input from 1,695 spike generators. Most of our simulations were run with 5 Hz mossy fibre (MF) background excitation and 8 Hz background MLI inhibition, which resulted in PC baseline spike rates between 50 and 60 Hz.

In a first set of simulations, our network was tested in a simple pattern separation task: a patch of excitatory MF input to the GL was stimulated; the network learnt the input pattern based on long-term depression (LTD) at PF-PC synapses; and the PC behaviour in response to learnt and novel patterns was compared. The stimulated MF patch had a radius of 100 μm. The stimulation resulted in the activation of a cylindrical region of the GL above the patch. Activated GoCs spread out of the patch along the transversal axis. The initial GrC excitation lasted for about 5 ms, after which feedback inhibition from GoCs reduced the GrC spike rate to about 50% of the peak value. The resulting burst of GrC activity activated the PC model with a delay of up to 5 ms. In the presence of a sufficient amount of MLI inhibition, the PC firing rate initially increased sharply in response to stimulation of the MF patch. After the MF input had been learnt based on LTD at the PF-PC synapses, the PC spike rate increases in response to learnt MF input disappeared, while equivalent novel MF stimuli still resulted in spike rate increases. These simulation results predict that a biophysically detailed PC model embedded in a realistic cerebellar network model can, under certain circumstances, employ a rate code to distinguish between learnt and novel MF input patterns.
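The pattern-separation logic of this protocol can be caricatured with a rate-based toy model (the dimensions, depression factor and gains below are our assumptions, not values from the detailed biophysical model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pf = 1000
w = np.ones(n_pf)                       # PF->PC synaptic weights
learnt = rng.random(n_pf) < 0.1         # PFs driven by the learnt MF patch
novel = rng.random(n_pf) < 0.1          # a different, novel input pattern
baseline, gain = 55.0, 0.5              # Hz baseline, within the 50-60 Hz range

def pc_rate(weights, active):
    return baseline + gain * weights[active].sum()

print("learnt, before LTD:", pc_rate(w, learnt))   # sharp rate increase
w[learnt] *= 0.2                        # LTD at PF-PC synapses active in the pattern
print("learnt, after LTD :", pc_rate(w, learnt))   # increase largely gone
print("novel, after LTD  :", pc_rate(w, novel))    # increase preserved
```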

References

1. Sudhakar S.K., Hong S., Raikov I., et al. Spatiotemporal network coding of physiological mossy fiber inputs by the cerebellar granular layer. PLoS Comput. Biol. 2017, 13(9), e1005754.
2. De Schutter E., Bower J.M. An active membrane model of the cerebellar Purkinje cell. I. Simulation of current clamps in slice. J. Neurophysiol. 1994, 71(1), 375-400.

Speakers
OK

Ohki Katakura

PhD student, Centre for Computer Science and Informatics Research, University of Hertfordshire
Numerical modelling of biological neural networks; Cerebellum; Basal ganglia;



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 06

8:00pm CEST

P195: Associative memory performance in peripherally-lesioned networks repaired by homeostatic structural plasticity
Ankur Sinha, Christoph Metzner, Rod Adams, Neil Davey, Michael Schmuker, Volker Steuber

Google Meet link: https://meet.google.com/zek-rkja-voy

In spite of a plethora of peripheral lesion experiments documenting that structural plasticity causes large-scale changes in brain networks [1, 3, 4], our understanding of the mechanisms of structural plasticity remains limited. Structural plasticity acts over extended periods of time, albeit at a slow rate, to modify network connectivity through the formation and removal of synapses. Alterations in network connectivity are expected to affect network function, but the resulting functional consequences of structural plasticity have not been studied in detail.

To study the activity-dependent growth characteristics of neurites, which underlie network reconfiguration, we previously developed a novel model of peripheral lesioning and subsequent repair in a balanced cortical Asynchronous Irregular (AI) spiking network [6]. The network used in our model, which represents a physiological brain network, was selected since it has been demonstrated to function as an attractor-less associative memory store [5]. Using this new model, we investigated the functional effects of repair mediated by homeostatic structural plasticity on the network. We stored associative memories in the network and recalled them at different stages of the simulation by stimulating a random subset of their neurons: before deafferentation, after deafferentation but before repair, and after deafferentation during repair. At each recall, performance was quantified using a signal-to-noise ratio (SNR) metric [2].

Associative memories that include neurons deafferented by the peripheral lesion experience a reduction in their recall performance proportionate to the number of deprived neurons. Our results indicate that while structural plasticity restores activity of deafferented neurons to pre-injury levels, it does not restore the performance of the stored associative memories. This suggests that associative memories stored before a peripheral lesion are not necessarily protected in the repair process. Further research is needed to explore whether the repair process can be modulated to retain the performance of the stored associative memories.
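A sketch of a recall metric in the spirit of the SNR of Dayan and Willshaw [2] (the paper's exact definition may differ; the synthetic numbers below are placeholders chosen only to show how deafferentation degrades the score):

```python
import numpy as np

def recall_snr(drive, pattern_mask):
    """SNR of recall: drive to pattern neurons vs. drive to the rest."""
    signal = drive[pattern_mask].mean() - drive[~pattern_mask].mean()
    return signal / drive[~pattern_mask].std()

rng = np.random.default_rng(0)
drive = rng.normal(5.0, 1.0, 1000)      # synthetic dendritic sums at recall
pattern = np.zeros(1000, dtype=bool)
pattern[:100] = True                     # neurons belonging to the stored memory
drive[pattern] += 3.0                    # intact memory: extra drive to its neurons
print(round(recall_snr(drive, pattern), 2))

drive[pattern] -= 2.0                    # weakened drive after deafferentation
print(round(recall_snr(drive, pattern), 2))  # reduced recall performance
```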

References
1. Rasmusson, D. D. Reorganization of raccoon somatosensory cortex following removal of the fifth digit. Journal of Comparative Neurology 205, 313–326 (1982).
2. Dayan, P. & Willshaw, D. J. Optimising synaptic learning rules in linear associative memories. Biological Cybernetics 65, 253–265 (1991).
3. Keck, T., Mrsic-Flogel, T. D., Afonso, M. V., Eysel, U. T., Bonhoeffer, T. & Hübener, M. Massive restructuring of neuronal circuits during functional reorganization of adult visual cortex. Nature neuroscience 11, 1162–1167 (2008).
4. Keck, T., Scheuss, V., Jacobsen, R. I., Wierenga, C. J., Eysel, U. T., Bonhoeffer, T., et al. Loss of sensory input causes rapid structural changes of inhibitory neurons in adult mouse visual cortex. Neuron 71, 869–882. ISSN : 0896-6273. (2011).
5. Vogels, T. P., Sprekeler, H., Zenke, F., Clopath, C. & Gerstner, W. Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science 334, 1569–1573. (2011).
6. Sinha, A., Metzner, C., Davey, N., Adams, R., Schmuker, M. & Steuber, V. Growth Rules for the Repair of Asynchronous Irregular Neuronal Networks after Peripheral Lesions. bioRxiv. eprint: https://www.biorxiv.org/content/early/2019/10/21/810846.full.pdf. (2019).

Speakers
avatar for Ankur Sinha

Ankur Sinha

Post doctoral research fellow, Silver Lab at University College London, UK



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 12

8:00pm CEST

P203: A neural mechanism of working-memory manipulation in a categorization-task performance
Yoshiki Kashimori, Hikaru Tokuhara

Working memory is the function by which temporal information is maintained and recognized in the brain, and it is ubiquitous across brain regions. A growing body of working memory research has investigated the neural mechanism underlying the maintenance of working memory. Several studies have demonstrated that working memory is maintained by the persistent activity of neural assemblies; others have proposed that it is maintained by short-term synaptic plasticity. The mechanism of working memory maintenance is still a matter of debate. Furthermore, it is unclear how working memory is linked to behavior and decision-making.

In this study, to clarify the neural mechanisms underlying the maintenance and manipulation of working memory, we focus on the function of the prefrontal cortex (PFC) in a delayed match-to-categorization task studied by Freedman et al. [1]. In this task, monkeys were presented with a sample and a test stimulus, separated by a delay period, and were trained to judge whether the stimuli were from the same category. Freedman et al. showed that working memory of category information was formed in the PFC. Our previous model demonstrated the neural mechanism of the working memory shaped in the PFC [2]. In this study, we aim to understand a unified mechanism of working memory maintenance and its manipulation for behavior. We develop a network model that performs the maintenance and recognition of the temporal information of a sample and a test stimulus. The model consists of networks of IT and PFC. The PFC model is further constructed from a positive-feedback-loop layer, a recurrent network, and a decision layer. The positive-feedback-loop layer produces persistent activity of a previously presented stimulus, allowing the layer to maintain information about a sample stimulus as working memory. The recurrent network encodes the temporal information of a sample and a test stimulus. Temporal information was learned with the backpropagation-through-time method. The decision layer has neurons responding to match and non-match trials. We also investigate the discrimination ability of our model in more complex tasks that have longer temporal sequences and larger numbers of categories.
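The role of the positive-feedback-loop layer can be illustrated with a one-unit toy (the gain values are our assumptions, chosen only to show the two regimes, not the model's parameters):

```python
import numpy as np

def run_delay_trial(gain, steps=300):
    """Single-unit positive feedback loop driven by a transient sample stimulus."""
    r, trace = 0.0, []
    for t in range(steps):
        stim = 1.0 if 20 <= t < 40 else 0.0    # sample presented briefly
        r = np.tanh(gain * r + stim)           # recurrent positive feedback
        trace.append(r)
    return trace

print(round(run_delay_trial(gain=1.5)[-1], 2))  # ~0.86: activity persists (memory)
print(round(run_delay_trial(gain=0.5)[-1], 2))  # ~0.00: activity decays after stimulus
```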

We demonstrate that the maintenance of working memory and the encoding of the temporal sequence are sequentially manipulated in different areas of the PFC. We also show that the temporal sequence is encoded by the activity pattern of the recurrent circuit, independently of the task decision. The sparseness of the activity pattern increases with the number of categories. Principal component analysis of the activity patterns reveals that the activity patterns of non-match trials move away from those of match trials as learning proceeds. Furthermore, we show that the decision on task trials is adjusted by learning the connections between the recurrent neurons representing the activity patterns and the decision neurons, according to task context.

[1] Freedman, DJ et al. J Neurosci 23, 5235-5246, 2003.

[2] Abe, Y et al. Cog Comput 10, 687–702, 2018.

Speakers
YK

Yoshiki Kashimori

Dept. of Engineering Science, The University of Electro-Communications



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 15

8:00pm CEST

P20: Effects of dopamine on networks of barrel cortex
Google Meet session: meet.google.com/bko-hsja-qee
Fleur Zeldenrust, Chao Huang, Prescilla Uijtewaal, Bernhard Englitz, Tansu Celikel

The responses of excitatory pyramidal cells and inhibitory interneurons in cortical networks are shaped by each neuron's place in the network (the connectivity of the network) and its biophysical properties (ion channel expression [1]), which are modulated by top-down neuromodulatory input, including dopamine. Using a recently developed ex-vivo method [2], we showed [3] that activation of the D1 receptor (D1R) increases the information transfer of fast-spiking, but not regular-spiking, cells by decreasing their threshold. Moreover, we showed that these differences in neural responses are accompanied by faster decision-making at the behavioural level. However, how the single-cell changes in spike responses result in these behavioural changes is still unclear. Here, we aim to bridge the gap between behavioural and single-cell effects by considering the effects of D1R activation at the network level.

We took a 3-step approach and simulated the effects of dopamine by lowering the thresholds of inhibitory but not excitatory neurons:

1. Network construction. We created a balanced network of L2/3 and L4 of the barrel cortex, consisting of locally connected integrate-and-fire neurons. We reconstructed the somatosensory cortex in soma resolution ([4], Fig. 1A), and adapted the number and ratio of excitatory and inhibitory neurons and the number of thalamic inputs accordingly.

2. Activity of the balanced state. The adaptations in the neural populations and connectivity resulted in a heterogeneous asynchronous regime [5] in L2/3, with highly variable single-neuron firing rates, suggesting a functional role in stimulus separation, and a 'classical' asynchronous regime in L4, with more constant firing rates, suggestive of an information transmission role (Fig. 1B).

3. Functional effects. We used a spike-based FORCE learning [6,7] application (a toy sketch of FORCE learning follows below), trained on either a gap-crossing task (data from [8]) or a pole detection task (publicly available data from [9], Fig. 1C). We compared the results against a benchmark consisting of a 3-layer deep neural net with a recurrent layer.
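For step 3, here is a minimal rate-based FORCE sketch in the spirit of Sussillo and Abbott [6] (the study trained a spike-based variant [7]; the network size, target signal and update schedule below are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, tau, g = 300, 1e-3, 0.01, 1.5
J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # chaotic recurrent weights
wf = rng.uniform(-1, 1, N)                          # feedback weights
w = np.zeros(N)                                     # readout, trained online
P = np.eye(N)                                       # RLS inverse-correlation estimate
x = 0.5 * rng.standard_normal(N)

for step in range(5000):                            # 5 s of simulated time
    t = step * dt
    r = np.tanh(x)
    z = w @ r                                       # network output
    x += dt / tau * (-x + J @ r + wf * z)
    if step % 2 == 0:                               # recursive least-squares update
        target = np.sin(2 * np.pi * 2.0 * t)        # 2 Hz target signal (placeholder)
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        w += (target - z) * k

print("final output error:", abs(np.sin(2 * np.pi * 2.0 * t) - z))
```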

References

[1] Azarfar A, Calcini N, Huang C, et al. Neural coding: A single neuron’s perspective. Neurosci Biobehav Rev 2018;94:238–47.

[2] Zeldenrust F, de Knecht S, Wadman WJ, et al. Estimating the Information Extracted by a Single Spiking Neuron from a Continuous Input Time Series. Front Comput Neurosci 2017;11:49.

[3] Calcini N, Bijlsma A, Zhang Y, et al. Cell-type specific modulation of information transfer by dopamine. Cosyne Abstr. 2019 Lisbon PT, 2019.

[4] Huang C, Zeldenrust F, Celikel T. DepartmentofNeurophysiology/Cortical-representation-of-touch-in-silico. GitHub 2019. https://github.com/DepartmentofNeurophysiology/Cortical-representation-of-touch-in-silico (accessed March 2, 2020).

[5] Ostojic S. Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons. Nat Neurosci 2014;17:594–600.

[6] Sussillo D, Abbott LF. Generating Coherent Patterns of Activity from Chaotic Neural Networks. Neuron 2009;63:544–57.

[7] Nicola W, Clopath C. Supervised learning in spiking neural networks with FORCE training. Nat Commun 2017;8:1–15.

[8] Azarfar A, Zhang Y, Alishbayli A, et al. An open-source high-speed infrared videography database to study the principles of active sensing in freely navigating rodents. GigaScience 2018;7.

[9] Peron SP, Freeman J, Iyer V, et al. A Cellular Resolution Map of Barrel Cortex Activity during Tactile Behavior. Neuron 2015;86:783–99.

Speakers
avatar for Fleur Zeldenrust

Fleur Zeldenrust

Associate professor, Donders Institute for Brain, Cognition and Behaviour, Radboud University
I am an Associate Professor at the Neurophysics section, Donders Center for Neuroscience, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands. The brain continuously processes information. The physical structure of the brain (its ‘hardware... Read More →



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 02

8:00pm CEST

P213: Local synaptic connections alter spike responses and signal propagation in models of globus pallidus pars externa
Google meet link
Erick Olivares, Matthew Higgs, Charles Wilson

The globus pallidus pars externa (GPe) has been seen as a relay nucleus in the indirect pathway of the basal ganglia, which simply inverts the inhibitory signal arriving from striatal projection neurons. In this view, the information flowing through the GPe runs in parallel paths that do not interact with one another. However, GPe neurons are fast autonomous oscillators that project axon collaterals spanning a wide area, creating an active local inhibitory network. How does the local connectivity affect GPe neurons' steady-state firing and responses to external synaptic input? To answer this question, we constructed network models of the GPe, using experimental data to model neurons and synapses, and different network architectures to explore the functions of local and longer-range connectivity patterns inside the GPe. Our results show that the connectivity affects the spike responses to external input, as well as the dynamics of signal propagation within the GPe.


Speakers
EO

Erick Olivares

Postdoc, Department of Biology, Universidad de Texas at San Antonio



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 18

8:00pm CEST

P221: Visualization of pathways of potential neurostructures in neurorehabilitation period based on MRI data processing
Margarita Zaleshina, Alexander Zaleshin

Poster Link: https://drive.google.com/file/d/11B-s0cmb8kIyTHi9r3LK4IGdsLSl_9bs/view?usp=sharing
Session Link: https://meet.google.com/iki-sgvg-how



Growth, formation and movement of biological structures are determined by the characteristics of the environment and the requirements for obtaining external resources. Likewise, the topological organization of the brain, consisting of a set of neurostructures, has a direct effect on the brain's ability to perceive and process data. Additionally, localized damage to a small part of the brain results in specific disturbances of isolated mental faculties, such as perception or movement [1]. Many researchers are currently studying regeneration and the formation of new spatial filling of tissue at sites of damage. The individual variability of the anatomy and connectivity of the brain affects the formation of its structure [2]. Studies of both tissue features and the distribution and orientation of individual components are widely used to visualize the microstructures of individual brain regions or to determine the locations of biomarkers [3]. At the same time, it can be shown that neurorehabilitation depends not only on the characteristics of the whole brain, but also on the particular features of the distinct area where growth and recovery occur directly.

In this work, we study cases of regeneration of cortical neurostructures in which the damaged area is filled with new elements over a long period of time. The analysis compares the calculated growth directions of neurostructures and their calculated growth trajectories, taking into account the existing environment, with the real growth paths identified on the basis of MRI data.

Our study takes into account that the paths along which neural structures form during neurorehabilitation have two main characteristics that differ in scale and in detail. The first characteristic is the average direction of the formation of new neurostructures; such a direction, as a whole, is driven by an increase in the "favorableness" of the environment in which growth occurs. The second characteristic is a detailed following of external elements in the existing biological environment, that is, on the one hand, going around obstacles and, on the other hand, using convenient "corridors" for growth and advancement (Fig. 1).

Data packages (fMRI) are collected from the Human Connectome Project (https://www.humanconnectome.org/data/). These fMRI data could be converted to diffusion-weighted images (dMRI), which are used for tractography analysis and for investigating the heterogeneity of microstructural features.

The study uses spatial data analysis, which calculates the main corridors and growth directions, taking into account the available cortical volume filling. Data at the boundaries of tissue are excluded from analysis to minimize the impact of partial volume averaging with surrounding tissues.

References

1. Eickhoff SB, Constable RT, Yeo BTT: Topographic organization of the cerebral cortex and brain cartography. Neuroimage 2018, 170: 332–347.

2. Maier-Hein L, Eisenmann M, Reinke A, Onogur S, Stankovic M, Scholz P, et al: Why rankings of biomedical image analysis competitions should be interpreted with care. Nat. Commun. 2018, 9(1): 5217.

3. Fick RHJ, Wassermann D, Deriche R: The Dmipy Toolbox: Diffusion MRI Multi-Compartment Modeling and Microstructure Recovery Made Easy. Front. Neuroinform. 2019, 13: 64.

Speakers
avatar for Margarita Zaleshina

Margarita Zaleshina

Moscow Institute of Physics and Technology



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 09

8:00pm CEST

P27: Lessons from Artificial Neural Network for studying coding principles of Biological Neural Network
Google Meet link: https://meet.google.com/mnb-ixfu-sff

If you miss the presentation or have further questions, please feel free to contact me! Thanks:)
qogywls1573@gmail.com 


Hyojin Bae, Chang-eop Kim, Gehoon Chung

An individual neuron or neuronal population is conventionally said to be "selective" for a stimulus feature if it responds differentially to that feature. Likewise, it is considered to encode certain information if decoding algorithms successfully predict a given stimulus or behavior from the neuronal activity. However, an erroneous assumption about the feature space can mislead the researcher about the neural coding principle. In this study, by simulating several likely scenarios with artificial neural networks (ANNs) and showing corresponding cases in biological neural networks (BNNs), we point out potential biases evoked by unrecognized features, i.e., confounding variables.

We modeled an ANN classifier with the open-source neural network library Keras, running TensorFlow as the backend. The model is composed of five hidden layers with dense connections and rectified linear activation. We added a dropout layer and an l2-regularizer at each layer to apply penalties on layer activity during optimization. The model was trained on the CIFAR-10 dataset and showed a saturated test-set accuracy of about 53% (chance-level accuracy = 10%). For stochastic sampling of an individual neuron's activity from each deterministic unit, we generated Gaussian distributions by modeling within-population variability according to each assumption.
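A hedged reconstruction of the classifier just described (the abstract fixes five dense ReLU hidden layers, dropout, l2 regularization, CIFAR-10 and ~53% accuracy; the layer width, dropout rate, l2 strength and epoch count below are our guesses):

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# Five dense ReLU hidden layers, each followed by dropout, with l2 penalties.
model = tf.keras.Sequential([layers.Flatten(input_shape=(32, 32, 3))])
for _ in range(5):
    model.add(layers.Dense(256, activation="relu",
                           kernel_regularizer=regularizers.l2(1e-4)))
    model.add(layers.Dropout(0.3))
model.add(layers.Dense(10, activation="softmax"))

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.cifar10.load_data()
model.fit(x_tr / 255.0, y_tr, epochs=10, validation_split=0.1)
print(model.evaluate(x_te / 255.0, y_te))  # dense nets saturate near ~0.5 on CIFAR-10
```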

Using this model, we showed four possible misinterpretation cases induced by a missing feature: (1) the researcher may choose the second-best feature, which is similar to the ground-truth feature; (2) an irrelevant feature that correlates with the ground-truth feature may be chosen; (3) evaluating a decoder in an incomplete feature space can lead to overestimating the decoder's performance; (4) a misconception about the receptive field of a unit can cause signal to be treated as noise.

In conclusion, we suggest that the comparative study of ANNs and BNNs from the perspective of machine learning can be a powerful strategy for deciphering neural coding principles.

Speakers
avatar for Hyojin Bae

Hyojin Bae

PhD Candidate, Gachon university



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 11

8:00pm CEST

P31: Exploring fast and slow neural correlates of auditory perceptual bistability with diffusion-mapped delay coordinates
Link for poster: https://meet.google.com/fra-scrk-ymb

Pake Melland, Rodica Curtu

Perceptual bistability is a phenomenon in which an observer is capable of perceiving identical stimuli with two or more interpretations. The auditory streaming task has been shown to produce spontaneous switching between two perceptual states [1]. In this task a listener is presented with a stream of tones, called triplets, with the pattern ABA--, where A and B are tones with different frequencies and '--' is a brief period of silence. The listener can alternate between two perceptual states: 1-stream, in which the stimulus is integrated into a single stream, and 2-stream, in which the stimulus is perceived as two segregated streams. In order to study the localization and dynamic properties of neural correlates of auditory streaming, we collected electrocorticography (ECoG) data from neurosurgical patients while they listened to sequences of repeated triplets and self-reported switching between the two perceptual states.

It is necessary to find meaningful ways to analyze ECoG recordings, which are noisy and inherently high dimensional. Diffusion Maps is a non-linear dimensionality reduction technique which embeds high-dimensional data into a low-dimensional Euclidean space [2]. The Diffusion Map method leverages the creation of a Markov matrix from a similarity measure on the original data. Under reasonable assumptions, the eigenvalues of the Markov matrix are positive and bounded above by 1. The d largest eigenvalues, along with their respective eigenvectors, provide coordinates for an embedding of the data into d-dimensional Euclidean space. In [3], Diffusion Maps were used for a group-level analysis of neural signatures during auditory streaming based on subject-reported perception. We extend this approach by taking into account the time-ordered property of the ECoG signals. For data that has a natural time ordering, it is beneficial to structure the data to emphasize its temporal dynamics; in [4] the authors develop the Diffusion-Mapped Delay Coordinates (DMDC) algorithm. In this algorithm, time-delayed data is first created from general time-series data; this initial step projects the data onto its most stable sub-system. The stable sub-system may remain in a high-dimensional space, so Diffusion Maps are then applied to the time-delayed data, projecting the (potentially high-dimensional) stable sub-system onto a low-dimensional representation adapted to the dynamics of the system.

We apply the DMDC algorithm to ECoG recordings from Heschl's gyrus in order to explore and reconstruct the underlying dynamics present during the auditory streaming task. We find that the eigenvalues obtained through the DMDC algorithm provide a way to uncover the multiple time scales present in the underlying system. The corresponding eigenvectors form a Fourier-like basis that is adapted both to the fast properties of the ECoG signal encoding the physical properties of the stimulus and to a slow mechanism that corresponds to the perceptual switching reported by subjects.
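A toy version of the DMDC pipeline [2,4]: delay-embed a scalar time series, then build the row-normalized Markov matrix of a Gaussian kernel and keep its leading non-trivial eigenpairs (the bandwidth, delay count and synthetic signal are our assumptions, not the analysis settings used for the ECoG data):

```python
import numpy as np

def delay_embed(x, n_delays):
    """Stack time-delayed copies of a scalar series into rows of R^n_delays."""
    return np.column_stack([x[i:len(x) - n_delays + i] for i in range(n_delays)])

def diffusion_map(X, eps, d=2):
    """Leading non-trivial eigenpairs of the row-normalized kernel matrix."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    P = np.exp(-D2 / eps)
    P /= P.sum(axis=1, keepdims=True)      # Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    keep = order[1:d + 1]                  # skip the trivial eigenvalue 1
    return vals.real[keep], vecs.real[:, keep]

rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 2000)
signal = np.sin(t) + 0.05 * rng.standard_normal(t.size)   # fast "stimulus" rhythm
X = delay_embed(signal, n_delays=20)[::5]                  # subsample for speed
vals, coords = diffusion_map(X, eps=5.0)
print(vals)   # eigenvalues near 1 flag the slowest time scales in the data
```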

Acknowledgments: National Science Foundation, CRCNS grant 1515678, and The Human Brain Research Lab, University of Iowa, Iowa (Matthew A. Howard & Kirill Nourski).

References:

[1] van Noorden et al. Temporal coherence in the perception of tone sequences, volume 3. Institute for Perceptual Research Eindhoven, the Netherlands, 1975.

[2] Coifman & Lafon. Applied and Comp Harmonic Anal, 21(1):5–30, 2006.

[3] Curtu et al J Neurosci, 2019.

[4] Berry et al SIADS 12(2):618–649, 2013.

Speakers
PM

Pake Melland

PhD Student, University of Iowa
I am a PhD student in the Applied Mathematical & Computational Sciences at The University of Iowa.  I am interested in data driven methods for the analysis and modeling of dynamical processes.



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 07

8:00pm CEST

P43: Responses of a Purkinje cell model to inhibition and excitation
Please join my poster presentation via Zoom on:
https://oist.zoom.us/j/97620440229?pwd=SUpuQ2VyZHNMM1huVkNGRk1ic2pPUT09



Gabriela Capo Rangel
, Erik De Schutter

Although the effects of inhibition on Purkinje cells were first observed over five decades ago and have since been intensively studied, the manner in which the cerebellar output is regulated by both inhibitory and excitatory cells has yet to be fully understood. Purkinje cells represent the sole output of the cerebellar cortex and are known to fire simple spikes as a result of the integrated excitatory and inhibitory synaptic input originating from parallel fibers and the interneurons in the molecular layer. When studied in vivo, both Purkinje cells and interneurons exhibit a highly irregular pattern in the firing of action potentials. The mechanisms underlying the complex interaction between the intrinsic properties of the membrane and the pattern of synaptic inputs that generates the cerebellar output have not yet been completely understood. Recent literature has underlined the importance of the inhibitory interneurons (stellate and basket cells) in shaping the simple spikes of Purkinje cells. Moreover, when inhibitory interneurons are eliminated and only asynchronous excitation is taken into account, numerous computational [1] and experimental studies have reported unrealistic behavior, such as very little variability between the spiking intervals as well as very small minimum firing frequencies. The modeling approach we propose here focuses on analyzing the effects that combined inhibition and excitation have on the shape of the action potential, on the firing frequency, and on the time intervals between the simple spikes. The starting point of our work was the very detailed Purkinje cell model proposed by Zang et al. in [2]. Instead of varying somatic holding currents as in previous work, here the dendritic voltage states are determined by the balance between the frequency of inhibitory cells and the frequency of parallel fibers. Our preliminary results indicate that inhibition presents both subtractive and divisive behavior, depending on stellate cell frequency. We discuss in detail the different shapes of firing we obtained. In particular, our results capture not only simple spikes but also a trimodal firing pattern, previously observed experimentally in [3]. This trimodal firing pattern is a characteristic of mature Purkinje cells and is given by a mixture of three different phases: tonic firing, bursting and silent mode. We mapped the regions in which simple spiking occurs and the regions in which the trimodal pattern appears, and we further investigated the role of the SK2 channels in eliminating or prolonging the trimodal pattern.

Bibliography:

[1]. De Schutter E, Bower JM, ” An Active Membrane Model of the Cerebellar Purkinje Cell II. Simulation of Synaptic Responses”, Journal of Neurophysiology (1994), 71(1), 401-419.

[2]. Zang Y, Dieudonne S, De Schutter E, ” Voltage- and Branch-Specific Climbing Fiber Responses in Purkinje Cells”, Cell Reports (2018) 24, 1536-1549.

[3]. Womack MD, Khodakhah K, ”Somatic and Dendritic Small-Conductance Calcium- Activated Potassium Channels Regulate the Output of Cerebellar Purkinje Neurons”, The Journal of Neuroscience (2003), 23(7), 2600- 2607.

Speakers
avatar for Gabriela Capo Rangel

Gabriela Capo Rangel

Postdoc, Computational Neuroscience, Okinawa Institute of Science and Technology
I am currently working as a postdoc in Erik De Schutter's lab at Okinawa Institute of Science and Technology. My work consists in developing a Purkinje Cell model that can unveil the mechanisms underlying the dendritic spikes that are triggered by parallel fibers.My background is... Read More →



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 09

8:00pm CEST

P4: Self-organization of connectivity in spiking neural networks with balanced excitation and inhibition
Jihoon Park, Yuji Kawai, Minoru Asada

Google meeting: https://meet.google.com/qbn-tamz-evz

Atypical neural activity and structural network changes have been detected in the brains of individuals with autism spectrum disorder (ASD) [1]. It has been hypothesized that an imbalance in the activity of excitatory and inhibitory neurons causes the pathological changes in autistic brains, known as the E/I balance hypothesis [2]. In this study, we investigate the effect of E/I balance on the self-organization of network connectivity and neural activity using a modeling approach. Our model follows the Izhikevich spiking neuron model [3] and consists of three neuron groups, each composed of 800 excitatory neurons and N_I inhibitory neurons (Fig. 1A). Each excitatory neuron had 100 intraconnections to randomly selected neurons in the same neuron group and 42 interconnections to randomly selected neurons in its neighboring neuron group. These synaptic weights were modified using the spike-timing-dependent plasticity rule [3]. Each inhibitory neuron had 100 intraconnections to randomly selected excitatory neurons in the same neuron group, but inhibitory neurons had no interconnections and no plasticity. We simulated the model with different N_I and inhibitory synaptic weights (W_I) in one neuron group (neuron group 1 in Fig. 1A) to change the degree of inhibition in that group. N_I and W_I in the other groups (2 and 3 in Fig. 1A) were set to 200 and -5, respectively. The simulation results show greater intraconnections in all neuron groups when N_I and W_I took lower values, i.e., when the E/I ratio was increased relative to the typical E/I ratio (Fig. 1B). Moreover, asymmetric interconnections between neuron groups emerged: the synaptic weights from neuron group 2 to group 1 were higher than those in the opposite direction (Fig. 1C) when the E/I ratio increased. Furthermore, the phase coherence between the average potentials of the neuron groups was found to be weak with an increased E/I ratio (Fig. 1D). These results indicate that disruption of the E/I balance, especially weak inhibition, induces excessive local connections and asymmetric intergroup connections. The synchronization between neuron groups therefore decreases, i.e., there is weak long-range functional connectivity. These results suggest that E/I imbalance might cause the strong local anatomical connectivity and weak long-range functional connectivity observed in the brains of individuals with ASD [1].
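The building block of this network is the Izhikevich model [3]; the sketch below steps one 800/200 excitatory/inhibitory group with the standard regular-spiking and fast-spiking parameter values (the noise inputs and time step are our assumptions, and connectivity/STDP are omitted):

```python
import numpy as np

def izhikevich_step(v, u, I, a, b, c, d, dt=0.5):
    """One Euler step of the Izhikevich model; spikes reset v and bump u."""
    v = v + dt * (0.04 * v ** 2 + 5 * v + 140 - u + I)
    u = u + dt * a * (b * v - u)
    fired = v >= 30.0
    v = np.where(fired, c, v)
    u = np.where(fired, u + d, u)
    return v, u, fired

rng = np.random.default_rng(0)
n_exc, n_inh = 800, 200                  # one neuron group, as described above
a = np.r_[np.full(n_exc, 0.02), np.full(n_inh, 0.1)]   # RS / FS parameters
b = np.full(n_exc + n_inh, 0.2)
c = np.full(n_exc + n_inh, -65.0)
d = np.r_[np.full(n_exc, 8.0), np.full(n_inh, 2.0)]
v = np.full(n_exc + n_inh, -65.0)
u = b * v

n_spikes = 0
for _ in range(1000):                    # 500 ms at dt = 0.5 ms
    I = np.r_[5 * rng.standard_normal(n_exc), 2 * rng.standard_normal(n_inh)]
    v, u, fired = izhikevich_step(v, u, I, a, b, c, d)
    n_spikes += fired.sum()

print("population spikes in 500 ms:", n_spikes)
```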

Acknowledgements

This work was supported by JST CREST Grant Number JPMJCR17A4 including the AIP challenge (conceptualization and resources), and a project commissioned by the New Energy and Industrial Technology Development Organization (NEDO; implementation and data curation).

References

1. Belmonte, M.K. Allen, G. Beckel-Mitchener, A. Boulanger, L.M. Carper, R.A. Webb, S.J. Autism and abnormal development of brain connectivity. _J. Neurosci.,_ 2004, 24, 9228–9231.

2. Nelson, S.B. Valakh, V. Excitatory/Inhibitory Balance and Circuit Homeostasis in Autism Spectrum Disorders. _Neuron,_ 2015, 87, 684–698.

3. Izhikevich, E.M. Polychronization: Computation with spikes. _Neural Comput._ , 2006, 18, 245–282.

Speakers
avatar for Jihoon Park

Jihoon Park

Specially Appointed Assistant Professor, Osaka University



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 10

8:00pm CEST

P53: A simple, non-stationary normalization model to explain and successfully predict change detection in monkey area MT
Detlef Wegener, Xiao Chen, Lisa Bohnenkamp, Fingal Orlando Galashan, Udo Ernst

Successful visually-guided behavior in natural environments critically depends on rapid detection of changes in visual input. A wildcat chasing a gazelle needs to quickly adapt its motions to sudden direction changes of the prey, and a human driving a fast car on a highway must instantaneously react to the onset of the red brake light of the car in front. Visually responsive neurons represent such rapid feature changes in comparably rapid, transient changes of their firing rate. In the motion domain, for example, neurons in monkey area MT were shown to represent the sign and magnitude of a rapid speed change in the sign and amplitude of the evoked firing rate modulation following that change [1]. For positive speed changes, it was also shown that the transient’s latency closely correlates with reaction time, and is modulated by both spatial and non-spatial visual attention [2,3].

We here introduce a computational model based on a simple, canonical circuit in a cortical hypercolumn. We use the model to investigate the computational mechanisms underlying transient neuronal firing rate changes and their modulation by attention under a wide range of stimulus conditions. It is built of an excitatory and an inhibitory unit, both of which respond to an external input I(t). The excitatory unit receives additional divisive input from the inhibitory unit. The model's dynamics are described by two differential equations quantifying how the mean activity A_e of the excitatory unit and the divisive input current change with time t. By fitting the model parameters to experimental data, we show that it is capable of reproducing the time courses of transient responses under passive viewing conditions. Mathematical analysis of the circuit explains hallmark effects of transient activations and identifies the relevant parameters determining response latency, peak response, and sustained activation. Visual attention is implemented as a simple multiplicative gain on the input to both units.
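One plausible reading of the two-equation circuit just described (our notation and our placement of the divisive term; the authors' exact equations may well differ) is:

```latex
\tau_e \frac{dA_e}{dt} = -A_e + \frac{g_e\, I(t)}{\sigma + D}, \qquad
\tau_d \frac{dD}{dt} = -D + g_i\, I(t)
```

where A_e is the mean activity of the excitatory unit, D is the divisive input current supplied by the inhibitory unit, and attention multiplies I(t) in both equations by a gain factor.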

A key result of the analysis of the model's dynamics is that steeper rise or decay times of the transient provide a consistent mechanism of attentional modulation, independent of both the overall activation of the neuron prior to the speed change and the sign of the change. This prediction is tested by new experiments requiring attention to both positive and negative speed changes. The results of the experiments are in full accordance with the prediction of the model, providing evidence that even decreases in firing rate in response to a reduction of the speed of an attended stimulus occur with shorter latency. Thus, the model provides a unique framework for a mechanistic understanding of MT response dynamics under very different sensory and behavioral conditions.

References

1. Traschütz A, Kreiter AK, Wegener D. Transient activity in monkey area MT represents speed changes and is correlated with human behavioral performance. J Neurophysiol 2015, 113, 890-903.

2. Galashan FO, Saßen HC, Kreiter AK, Wegener D. Monkey area MT latencies to speed changes depend on attention and correlate with behavioral reaction times. Neuron 2013, 78, 740-750.

3. Schledde B, Galashan FO, Przybyla M, Kreiter AK, Wegener D. Task-specific, dimension-based attentional shaping of motion processing in monkey area MT. J Neurophysiol 2017, 118, 1542-1555.

Acknowledgments

Supported by BMBF grant 01GQ1106 and DFG grant WE 5469/3-1.

Speakers
DW

Detlef Wegener

Brain Research Institute, University of Bremen


Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 16

8:00pm CEST

P60: Cholinergic modulation can produce rapid task-related plasticity in the auditory cortex
Jordan Chambers, Shihab Shamma, Anthony Burkitt, David Grayden, Diego Elgueda, Jonathan Fritz

Neurons in the primary auditory cortex (A1) display rapid task-related plasticity, which is believed to enhance the ability to selectively attend to one stream of sound in complex acoustic scenes. Previous studies have suggested that cholinergic projections from the nucleus basalis to A1 modulate auditory cortical responses and may be a key component of rapid task-related plasticity. However, the underlying molecular, cellular and network mechanisms of cholinergic modulation of cortical processing remain unclear.

A previously published model of A1 receptive fields [1] that can reproduce task-related plasticity was used to investigate mechanisms of cholinergic modulation in A1. The previous model comprised a cochlea model and integrate-and-fire model neurons representing networks in A1. Action potentials from individual model neurons were used to calculate the receptive field using reverse correlation, which allowed direct comparison with experimental data. To allow an investigation of different mechanisms of cholinergic modulation in A1, this previous model was extended by: (1) adding integrate-and-fire neurons to represent neurons projecting from the nucleus basalis to A1; (2) adding inhibitory interneurons in A1; (3) including internal calcium dynamics in the integrate-and-fire models; and (4) including a calcium-dependent potassium conductance in the integrate-and-fire models (a toy sketch of the latter two extensions follows below). Since cholinergic modulation has several potential sites of action in A1, the current model was used to investigate acetylcholine acting through both muscarinic and nicotinic acetylcholine receptors (mAChR and nAChR, respectively), located presynaptically or postsynaptically.
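As a minimal illustration of extensions (3) and (4), here is a toy leaky integrate-and-fire unit with internal calcium dynamics and a calcium-dependent potassium (adaptation) current; all constants are our assumptions, not the published model's values:

```python
import numpy as np

dt = 0.1                                # ms
v, ca = -65.0, 0.0
tau_m, tau_ca = 20.0, 80.0              # membrane and calcium time constants
g_kca, e_k = 0.02, -90.0                # K(Ca) conductance and reversal
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0
spikes = []

for step in range(10000):               # 1 s of simulated time
    i_ext = 2.0                         # constant drive (arbitrary units)
    i_kca = g_kca * ca * (v - e_k)      # adaptation current grows with [Ca]
    v += dt * (-(v - v_rest) / tau_m + i_ext - i_kca)
    ca += dt * (-ca / tau_ca)           # calcium decays between spikes
    if v >= v_thresh:
        spikes.append(step * dt)
        v = v_reset
        ca += 0.2                       # calcium influx per spike

isis = np.diff(spikes)
print(f"{len(spikes)} spikes; first ISI {isis[0]:.1f} ms, last {isis[-1]:.1f} ms")
```

The lengthening inter-spike intervals show the spike-frequency adaptation that the calcium-dependent potassium conductance introduces.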

Four possible mechanisms of cholinergic modulation on A1 receptive fields were investigated. Previous research indicates cholinergic modulation should be able to suppress an inhibitory region and enhance an excitatory region in the receptive fields [2]. Our model indicates it is unlikely that any one of these four mechanisms could produce these opposite changes to both excitatory and inhibitory regions. However, multiple mechanisms occurring simultaneously could produce the expected changes to the receptive fields in this model. We demonstrate that combining either presynaptic nAChR with presynaptic mAChR or presynaptic nAChR with postsynaptic nAChR is capable of producing changes to A1 receptive fields observed during rapid task-related plasticity.

This model tested four mechanisms by which cholinergic modulation may induce rapid task-related plasticity in A1. Cholinergic modulation could reproduce experimentally observed changes to A1 receptive fields when it was implemented using a combination of mechanisms. Two different combinations of cholinergic modulation were found to produce the expected changes in A1 receptive fields. Since the model predicts that these two combinations would have differential effects on the rate of neuronal firing, it will be possible to run experimental tests to distinguish between the two theoretical possibilities.

References

[1] Chambers JD, Elgueda D, Fritz JB, Shamma SA, Burkitt AN, Grayden DB: Computational neural modeling of auditory cortical receptive fields. Front Comput Neurosci 2019, 13:28.

[2] Fritz J, Shamma S, Elhilali M, Klein D: Rapid task-related plasticity of spectrotemporal receptive fields in primary auditory cortex. Nat Neurosci 2003, 6(11):1216-1223.

Speakers
JD

Jordan David Chambers

Department of Biomedical Engineering, University of Melbourne


Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 05

8:00pm CEST

P70: Influence of anatomical connectivity and intrinsic dynamics in a connectome based neural mass model of TMS-evoked potentials
Join us Sunday 2-3pm EST on Google Meet:  meet.google.com/pui-jbeg-ezr


Neda Kaboodvand
, John Griffiths

Perturbation via electromagnetic stimulation is a powerful way of probing neural systems to better understand their functional organization. One of the most widely used neurostimulation techniques in human neuroscience is transcranial magnetic stimulation (TMS) with concurrently recorded electroencephalography (EEG). The immediate EEG responses to single-pulse TMS stimulation, termed TMS-evoked potentials (TEPs), are spatiotemporal waveforms in EEG sensor- or source-space [1]. TEPs display several characteristic features, including (i) rapid wave-like propagation away from the primary stimulation site, and (ii) multiple volleys of recurrent activity that continue for several hundred milliseconds following the stimulation pulse. These TEP patterns reflect reverberant activity in large-scale cortico-cortical and cortico-subcortical brain networks, and have been used to study neural excitability in a wide variety of research contexts, including sleep, anaesthesia, and coma [2]. There has been relatively little work done, however, on computational modelling of TEP waveform morphologies and on how these spatiotemporal patterns emerge from a combination of global brain network structure and local physiological characteristics.

Here we present a novel connectome-based neural mass model of TEPs that accurately reproduces recordings across multiple subjects and stimulation sites. We employ a biophysical electric field model (using the simnibs [3] library) to identify the electric field ('E-field') distribution over the cortical surface resulting from stimulation at a given TMS coil location and orientation, based on T1-weighted MRI-derived cortical geometry and personalized to individual subjects. These TMS-induced E-field maps are then summed to yield a current injection pattern over regions in a canonical freesurfer-based brain parcellation. Whole-brain neural activity is modelled with a network of oscillatory (Fitzhugh-Nagumo) units [4,5], coupled by anatomical connectivity weights derived from diffusion-weighted MRI tractography [6], and perturbed by a brief square-wave current injection weighted regionally by the cortical E-field map magnitudes. Using this model we are able to accurately reproduce the typical radially propagating TEP patterns under a wide range of parameter values. For the later (150 ms+) TEP components, however, we find that it is necessary to modify the weight of cortico-thalamic and thalamo-cortical projections in the tractography-defined anatomical connectivity (see also [7]), which has the effect of promoting recurrent activity patterns. These results contribute important insights to our long-term objective of developing an accurate model of TEPs that can be used to guide the design and administration of TMS-EEG for excitability mapping in clinical contexts.
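A minimal sketch of the simulation idea, under our own assumptions: a network of FitzHugh-Nagumo units coupled through a connectivity matrix and perturbed by a brief square pulse weighted by per-region E-field magnitudes. The connectivity and E-field values below are random placeholders, not the subjects' tractography or simnibs output.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 68                                # number of parcellated regions (placeholder)
W = rng.random((N, N)) * (rng.random((N, N)) < 0.2)  # toy structural connectivity
np.fill_diagonal(W, 0.0)
efield = rng.random(N)                # stand-in for simnibs-derived E-field magnitudes

a, b, tau, k = 0.7, 0.8, 12.5, 0.05   # FitzHugh-Nagumo and coupling parameters
dt, T = 0.05, 400.0
v = rng.standard_normal(N) * 0.1      # fast (voltage-like) variable
w = np.zeros(N)                       # slow recovery variable

v_trace = []
for step in range(int(T / dt)):
    t = step * dt
    # brief square-wave "TMS" pulse, weighted regionally by E-field magnitude
    stim = efield if 100.0 <= t < 101.0 else 0.0
    coupling = k * (W @ v)            # coupling through the connectome
    dv = v - v**3 / 3.0 - w + coupling + stim
    dw = (v + a - b * w) / tau
    v, w = v + dt * dv, w + dt * dw
    v_trace.append(v.copy())

v_trace = np.array(v_trace)           # (time, region): simulated regional activity
print("peak post-stimulus deflection, first regions:", v_trace[int(100/dt):].max(0)[:5])
```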

References

1. Ilmoniemi, R. J. & Kicić, D. Brain Topogr. 22, 233–248 (2010).

2. Massimini, M. et al. Science 309, 2228–2232 (2005).

3. Saturnino, G. B. et al. bioRxiv 500314 (2018) doi:10.1101/500314.

4. Izhikevich, E. M. & FitzHugh, R. Scholarpedia J. 1, 1349 (2006).

5. Spiegler, A. et al. eNeuro 3 (2016).

6. Schirner, M. et al. Neuroimage 117, 343–357 (2015).

7. Bensaid, S. et al. Frontiers in Systems Neuroscience 13:59 (2019).

Speakers
JG

John Griffiths

Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 13

8:00pm CEST

P75: The covariance perceptron: theory and application
Matthieu Gilson, David Dahmen, Ruben Moreno-Bote, Andrea Insabato, Moritz Helias

  LINK GOOGLE MEET: meet.google.com/ebs-fpcg-chu

Figures on the PDF "poster" are taken from the latest version of the paper (just accepted): https://www.biorxiv.org/content/10.1101/562546v4

 1) Introduction

Many efforts in the study of the brain have focused on representations of stimuli by neurons and learning thereof. Our work [1] demonstrates the potential of a novel learning paradigm for neuronal activity with high variability, where distributed information is embedded in the correlation patterns.

2) Learning theory

We derive a learning rule to train a network to perform an arbitrary operation on spatio-temporal covariances of time series. To illustrate our scheme we use the example of classification, where the network is trained to perform an input-output mapping from given sets of input patterns to representative output patterns, one output per input group. This setup is the same as learning activity patterns for the classical perceptron [2], a central concept that has inspired many fruitful theories in the fields of neural coding and learning in networks. For that reason, we refer to our classifier as the "covariance perceptron". Compared to the classical perceptron, a conceptual difference is that we base information on the co-fluctuations of the input time series that result in second-order statistics. In this way, robust information can be conveyed despite a high apparent variability in the activity. This approach is a radical change of perspective compared to classical approaches that typically transform time series into a succession of static patterns where fluctuations are noise. On the technical side, our theory relies on multivariate autoregressive (MAR) dynamics, for which we derive the weight update (a gradient descent) such that input covariance patterns are mapped to given objective output covariance patterns.
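As a non-authoritative sketch of this idea, the snippet below trains a covariance perceptron for the simplified case of a memoryless linear readout y = Wx (rather than the full MAR dynamics of the paper): the output covariance is then Q = W P W^T, and gradient descent on the Frobenius loss ||Q - Qhat||^2 gives the update dW = -4 eta (Q - Qhat) W P. All sizes and target patterns are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 10, 2                      # input / output dimensions

# Two input "classes", each defined by its spatial covariance pattern
P1 = np.cov(rng.standard_normal((m, 500)))
P2 = np.cov(rng.standard_normal((m, 500)) * np.linspace(0.5, 2.0, m)[:, None])
Q1 = np.diag([1.0, 0.2])          # objective output covariances (one per class)
Q2 = np.diag([0.2, 1.0])

W = rng.standard_normal((n, m)) * 0.1
eta = 0.005
for epoch in range(2000):
    for P, Qhat in ((P1, Q1), (P2, Q2)):
        Q = W @ P @ W.T           # output covariance of y = W x
        E = Q - Qhat              # symmetric error matrix
        W -= eta * 4.0 * E @ W @ P  # gradient of ||Q - Qhat||_F^2 w.r.t. W

# After training, each class's output covariance approximately reaches its target
for P, Qhat in ((P1, Q1), (P2, Q2)):
    print(np.round(W @ P @ W.T, 2), "target diag:", np.diag(Qhat))
```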

3) Application to MNIST database

To further explore its robustness, we apply the covariance perceptron to the recognition of objects that move in the visual field by a network of sensory (input) and downstream (output) neurons. We use the MNIST database of handwritten digits 0 to 4. As illustrated in Fig. 1, the traces “viewed” by an input neuron exhibit large variability across presentations. Because we want to identify both the digit identity and its moving direction, covariances of the input time series are necessary. We show that the proposed learning rule can successfully train the network to perform the classification task and robustly generalize to unseen data.

4) Towards distributed spike-based information processing

We envisage future steps that transpose this work to information conveyed by higher-order statistics of spike trains, to obtain the supervised equivalent of spike-timing-dependent plasticity (STDP).

References:

[1] M Gilson, D Dahmen, R Moreno-Bote, A Insabato, M Helias (accepted in PLoS Comput Biol) The covariance perceptron: A new framework for classification and processing of time series in recurrent neural networks. bioRxiv https://www.biorxiv.org/content/10.1101/562546v4

[2] CM Bishop (2006) Pattern Recognition and Machine Learning. Springer.

Speakers
avatar for Matthieu Gilson

Matthieu Gilson

Postdoc, Forschungszentrum Jülich



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 20

8:00pm CEST

P82: Contrast invariant tuning in primary visual cortex
Zoom:    https://unimelb.zoom.us/j/97678012117?pwd=T2dBNE5BOVBxM3hSYXRUSnF0OXBQdz09 
  Password: 677923

Hamish Meffin
, Ali Almasi, Michael R Ibbotson

Previous studies show that neurons in primary visual cortex (V1) exhibit contrast invariant tuning to the orientation of spatial grating stimuli [1]. Mathematically this is equivalent to saying that their response is a multiplicatively separable function of contrast and orientation.

Here we investigated the contrast dependence of V1 tuning to visual features in a more general framework. We used a data-driven modelling approach [2] to identify the spectrum of spatial features to which individual V1 neurons were sensitive, from our recordings of single unit responses in V1 to white (Gaussian) noise and natural scenes. For each cell we identified between 1 and 5 spatial feature dimensions to which the cell was sensitive (e.g. Fig. 1A, with 2 feature dimensions; features 1 & 2 as labelled, with red showing bright and blue showing dark regions of the feature). The response of a neuron to its set of features was estimated from the data as a spike rate given by a function of the individual feature-contrasts:

r = F(c1,…,cK) (Eq. 1)

where c1,…,cK are the contrast levels of a cell's spatial features, 1,…,K, embedded in any stimulus (e.g. Fig. 1B). These features spanned a subspace, giving a spectrum of interpolated features to which the cell was sensitive (Fig. 1A, examples labelled). The identity of these features varied along the angular polar coordinate in this subspace, which we term the feature-phase, φ (Fig. 1A, labelled). In this angular dimension, characteristics of the features, such as their spatial phase, orientation or spatial frequency, were found to vary continuously. In the radial coordinate, the contrast of these features varied, c = ||(c1,…,cK)|| (Fig. 1A, labelled).

We found that the neural response above the spontaneous rate, r0, was well approximated by a multiplicatively separable function of the feature-contrast and feature-phase (Fig. 1C):

r = fc(c) fφ(φ) + r0  (Eq. 2)

To quantify the accuracy of this approximation, we calculated a relative error between the original and separable forms of the feature-contrast response function (i.e., Eq. (1) and Eq. (2)). This relative error varied between 2% and 18% across the cell population, with a mean of 6%. This indicates that for most cells, the separable form of the feature-contrast response function was a good approximation.
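One standard way to compute such a relative error (a sketch under our own assumptions, not necessarily the authors' exact procedure) is to tabulate the response on a contrast-by-phase grid and compare it with its best rank-1, i.e. multiplicatively separable, approximation obtained from the singular value decomposition:

```python
import numpy as np

# Toy response on a (feature-contrast x feature-phase) grid, with a small
# non-separable perturbation so the relative error is non-zero.
c = np.linspace(0.0, 1.0, 40)[:, None]            # feature-contrast
phi = np.linspace(0.0, 2 * np.pi, 60)[None, :]    # feature-phase
r0 = 2.0                                          # spontaneous rate
R = r0 + c**1.5 * (1.0 + 0.8 * np.cos(phi)) + 0.05 * np.cos(phi + 3 * c)

U, S, Vt = np.linalg.svd(R - r0, full_matrices=False)
R_sep = r0 + S[0] * np.outer(U[:, 0], Vt[0])      # r ~ fc(c) fphi(phi) + r0

rel_err = np.linalg.norm(R - R_sep) / np.linalg.norm(R - r0)
print(f"relative error of the separable form: {100 * rel_err:.1f}%")
```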

This result may be interpreted as demonstrating a form of contrast invariant tuning to feature-phase in V1. This tuning to feature-phase is given by the function fφ(φ) (Fig. 1E), and the contrast response function is given by fc(c) (Fig. 1D). As several feature characteristics such as spatial phase, orientation or spatial frequency covary with feature-phase, this also leads to contrast invariant tuning under covariation in these characteristics as feature-phase varies.

Acknowledgements: The authors acknowledge the support of the Australian Research Council Centre of Excellence for Integrative Brain Function (CE140100007), the National Health and Medical Research Council (GNT1106390), and the Lions Club of Victoria.

References

1. Alitto HJ, Usrey WM. Journal of Neurophysiology 2004, 91(6), 2797-2808.
2. Almasi A, Meffin H, Cloherty SL, Wong Y, Yunzab M, Ibbotson MR. Mechanisms of feature selectivity and invariance in primary visual cortex. bioRxiv 2020.

Speakers
avatar for Hamish Meffin

Hamish Meffin

Senior Researcher, Department of Biomedical Engineering, The University of Melbourne
Primary interests: Visual neuroscience, data driven modelling, modelling electrical stimulation of neurons



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 14

8:00pm CEST

P86: Studying neural mechanisms in recurrent neural networks trained for multitasking depending on a context signal
Cecilia Jarne

meet link

Most biological brains, as well as artificial neural networks, are capable of performing multiple tasks [1]. The mechanisms through which simultaneous tasks are performed by the same set of units are not yet entirely clear. Such systems can be modular or exhibit mixed selectivity through some variable such as the sensory stimulus [2,3]. Based on simple tasks studied in our previous work [4], where tasks consist of the processing of temporal stimuli, we build and analyze a simple model that can perform multiple tasks using a contextual signal. We study various properties of our trained recurrent networks, as well as the response of the networks to damage to their connectivity. In this way, we aim to illuminate mechanisms similar to those that could underlie multitasking in biological brains.
We use a simple RNN model with three layers: an input layer, a recurrent hidden layer, and an output layer. We focus on the study of networks trained to process stimuli presented as temporal inputs. This work shows preliminary results from training networks to perform: (1) one-input tasks with one context input: time reproduction and finite-duration oscillation (Fig. 1); and (2) two-input tasks with one context input: basic logic gate operations (AND, OR, XOR) (Fig. 2).
Preliminary results show that Keras and TensorFlow can be successfully used to train RNNs for context-dependent multitasking (open-source code). As expected, more units are needed to perform well than when training on a single task. Regarding the dynamics, a small set of eigenvalues remains outside the unit circle and dominates the dynamics, as was obtained for individual tasks [5]. Fixed-point and oscillatory states coexist depending on context and input, and the oscillatory state remains on a manifold [6]. Regarding damage, it is possible to remove between 10% and 12% of the weakest connections before performance on the learned task deteriorates.
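To make the setup concrete, here is a minimal sketch (not the authors' code) of a Keras/TensorFlow RNN with a contextual input channel that switches the task between AND and XOR on two pulsed inputs; the architecture, sizes and task encoding are illustrative placeholders.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(2)
T, trials = 20, 2000
# Two binary input channels as brief pulses, plus a constant context channel:
# context 0 -> respond with AND of the pulses, context 1 -> respond with XOR.
a, b = rng.integers(0, 2, trials), rng.integers(0, 2, trials)
ctx = rng.integers(0, 2, trials)
x = np.zeros((trials, T, 3), dtype="float32")
x[:, 2:5, 0] = a[:, None]      # first input pulse
x[:, 8:11, 1] = b[:, None]     # second input pulse
x[:, :, 2] = ctx[:, None]      # tonic context signal
y = np.where(ctx == 0, a & b, a ^ b).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(64, input_shape=(T, 3)),  # recurrent hidden layer
    tf.keras.layers.Dense(1, activation="sigmoid"),     # readout
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=20, batch_size=64, verbose=0)
print("accuracy:", model.evaluate(x, y, verbose=0)[1])
```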


[1] Guangyu Robert Yang, Madhura R. Joglekar, Francis Song, William T. Newsome, Xiao-Jing Wang. Task representations in neural networks trained to perform many cognitive tasks. Nature Neuroscience 2019, 22(2). DOI: 10.1038/s41593-018-0310-2

[2] Guangyu Robert Yang, Michael W Cole and Kanaka Rajan. How to study the neural mechanisms of multiple tasks. Current Opinion in Behavioral Sciences 2019, 29:134–143.
https://doi.org/10.1016/j.cobeha.2019.07.001

[3] Rigotti Mattia, Barak Omri, Warden Melissa R, Wang Xiao-Jing, Daw Nathaniel D, Miller Earl K, and Fusi Stefano. The importance of mixed selectivity in complex cognitive tasks. Nature 2013, 497:585.

[4] C. Jarne, R. Laje. A detailed study of recurrent neural networks used to model tasks in the cerebral cortex. https://arxiv.org/abs/1906.01094v3

[5] C. Jarne. The dynamics of Recurrent Neural Networks trained for temporal tasks and the eigenvalue spectrum. https://arxiv.org/abs/2005.13074

[6] Saurabh Vyas, Matthew D. Golub, David Sussillo, Krishna V. Shenoy. Computation through neural population dynamics. Annual Review of Neuroscience 2020, 43:249-275.

Speakers
avatar for Cecilia Jarne

Cecilia Jarne

Researcher and Professor, Departement of Science and Technology, National University of Quilmes and CONICET
My main research area is the study of the dynamical aspects of Recurrent Neuronal Networks trained to perform different bio-inspired tasks and decision making. I study training methods, implementations and how different degrees of damages affect trained networks.



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 13

8:00pm CEST

P8: Average beta burst duration profiles provide a signature of dynamical changes between the ON and OFF medication states in Parkinson's disease
Benoit Duchet, Filippo Ghezzi, Gihan Weerasinghe, Gerd Tinkhauser, Andrea A. Kuhn, Peter Brown, Christian Bick, Rafal Bogacz

Parkinson's disease motor symptoms are associated with an increase in subthalamic nucleus beta band oscillatory power. However, these oscillations are phasic, and a growing body of evidence suggests that beta burst duration may be of critical importance to motor symptoms, making insights into the dynamics of beta burst generation valuable. In this study, we ask the question "Can average burst duration reveal how dynamics change between the ON and OFF medication states?". Our analysis of local field potentials from the subthalamic nucleus demonstrates, using linear surrogates, that the system generating beta oscillations acts in a more non-linear regime OFF medication, and that the change in the degree of non-linearity is correlated with motor impairment. We further narrow down the dynamical changes responsible for changes in the temporal patterning of beta oscillations between medication states by fitting biologically inspired models, as well as simpler models of the beta envelope, to the data. Finally, we show that the non-linearity can be directly extracted from average burst duration profiles under the assumption of constant noise in envelope models, revealing that average burst duration profiles provide a window into burst dynamics, which may underlie the success of burst duration as a biomarker. In summary, we have demonstrated a relationship between average burst duration profiles, the dynamics of the system generating beta oscillations, and motor impairment, which puts us in a better position to understand the pathology and improve therapies.
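For readers unfamiliar with burst duration profiles, here is a hedged sketch (our own toy construction, not the authors' pipeline) of how a mean beta burst duration can be computed from the band-passed envelope of a synthetic LFP as a function of the detection threshold:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(3)
fs, T = 1000, 60.0                                   # Hz, seconds
t = np.arange(0, T, 1 / fs)
# Synthetic LFP: beta oscillation with slowly fluctuating amplitude plus noise
lfp = (1 + 0.8 * np.sin(2 * np.pi * 0.2 * t)) * np.sin(2 * np.pi * 20 * t) \
      + rng.standard_normal(t.size)

b, a = butter(4, [13 / (fs / 2), 30 / (fs / 2)], btype="band")
env = np.abs(hilbert(filtfilt(b, a, lfp)))           # beta amplitude envelope

def mean_burst_duration(env, thresh, fs):
    above = np.r_[False, env > thresh, False]        # pad so each burst has edges
    d = np.diff(above.astype(int))
    starts, ends = np.flatnonzero(d == 1), np.flatnonzero(d == -1)
    return (ends - starts).mean() / fs if starts.size else np.nan

# Average burst duration profile: mean burst duration vs. detection threshold
for pct in (50, 65, 80, 95):
    th = np.percentile(env, pct)
    print(f"{pct}th-percentile threshold: mean burst "
          f"{mean_burst_duration(env, th, fs):.3f} s")
```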

To leave more room for questions and discussions, please have a look at my recorded poster walk-through before joining the session: https://youtu.be/4N04CMKCsaQ
Link to google meet session: https://meet.google.com/jxk-hgiw-okf

Speakers
avatar for Benoit Duchet

Benoit Duchet

Postdoctoral researcher, University of Oxford
My work is focused on studying the neural circuits affected by Parkinson's disease and essential tremor through mathematical and computational methods, with the aim of improving therapeutic interventions such as deep brain stimulation.



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 08

8:00pm CEST

P96: Bayesian network change point detection for dynamic functional connectivity
Lingbin Bian, Tiangang Cui, Adeel Razi, Jonathan Keith

We present a novel Bayesian method for identifying changes of dynamic network structure in working-memory task fMRI data via model fitness assessment. Specifically, we detect dynamic community structure change-point(s) based on an overlapped sliding window applied to multivariate time series. We use the weighted stochastic block model to quantify the likelihood of a network configuration, and develop a novel scoring criterion, which we call the posterior predictive discrepancy, by evaluating the goodness of fit between model and observations within the sliding window. The parameters for this model include a latent label vector assigning network nodes to interacting communities, and the block model parameters determining the weighted connectivity within and between communities.

The working-memory task fMRI data from the HCP were pre-processed, and GLM analyses were conducted at both the subject level and the group level; the contrasts between 2-back, 0-back and baseline were used to localise the regions of interest. With the extracted time series of regions of interest, we propose to use the Gaussian latent block model [1], also known as the weighted stochastic block model (WSBM), to quantify the likelihood of a network, and Gibbs sampling to sample a posterior distribution derived from this model. The Gibbs sampling approach we adopt is based on the work of [1, 2] for finite mixture models. The proposed model fitness procedure draws parameters from the posterior distribution and uses them to generate a replicated adjacency matrix; it then calculates a disagreement matrix to quantify the difference between the replicated adjacency matrix and the realised adjacency matrix. For the evaluation of model fitness, we define a parameter-dependent statistic called the posterior predictive discrepancy (PPD) by averaging the disagreement matrix. We then compute the cumulative discrepancy energy (CDE) from the PPD by applying another sliding window for smoothing, and use the CDE as a scoring criterion for change point detection. The CDE increases when change points are contained within the window, and can thus be used to assess whether a statistically significant change point exists within a period of time.

We first applied the algorithm to synthetic data simulated from a multivariate Gaussian distribution for validation. We visualise the Gibbs iterations of sampled latent labels and the histogram of the block parameters reflecting the connectivity within and between communities, and demonstrate the performance of the change point detection for different window sizes. In the real working-memory task fMRI analyses, fixed effects analyses are conducted at the subject level to estimate the average effect size across runs within subjects; at the group level, mixed effects analyses are conducted, where the subject effect size is considered to be random. In this work, we mainly focus on the memory-load contrasts (2-back vs 0-back, 2-back vs baseline, and 0-back vs baseline).

References

1. Wyse J, Friel N. Block clustering with collapsed latent block models. Statistics and Computing 2012, 22, 415-428.
2. Nobile A, Fearnside AT. Bayesian finite mixtures with an unknown number of components: the allocation sampler. Statistics and Computing 2007, 17, 147-162.
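As a rough sketch of the posterior predictive discrepancy idea (simplified here to a binary, unweighted stochastic block model with pre-drawn posterior samples; the samplers, windowing and the weighted Gaussian model are omitted):

```python
import numpy as np

rng = np.random.default_rng(4)

def ppd(A_obs, label_samples, block_prob_samples):
    """Posterior predictive discrepancy for a binary SBM (simplified sketch).

    A_obs: realised adjacency matrix (n x n)
    label_samples: posterior draws of community labels, shape (S, n)
    block_prob_samples: posterior draws of block connection probs, shape (S, K, K)
    """
    n = A_obs.shape[0]
    disc = []
    for z, theta in zip(label_samples, block_prob_samples):
        p = theta[np.ix_(z, z)]             # edge probability for each node pair
        A_rep = (rng.random((n, n)) < p).astype(int)   # replicated adjacency
        np.fill_diagonal(A_rep, 0)
        disagreement = np.abs(A_rep - A_obs)
        disc.append(disagreement.mean())    # average of the disagreement matrix
    return np.mean(disc)                    # posterior predictive discrepancy

# Toy check: a 2-community network scored against matched vs. mismatched labels
n, z_true = 40, np.r_[np.zeros(20, int), np.ones(20, int)]
theta_true = np.array([[0.6, 0.05], [0.05, 0.6]])
A = (rng.random((n, n)) < theta_true[np.ix_(z_true, z_true)]).astype(int)
np.fill_diagonal(A, 0)
S = 50
good = ppd(A, np.tile(z_true, (S, 1)), np.tile(theta_true, (S, 1, 1)))
bad = ppd(A, rng.integers(0, 2, (S, n)), np.tile(theta_true, (S, 1, 1)))
print(f"PPD with correct labels {good:.3f} vs. random labels {bad:.3f}")
```

A lower PPD indicates a better model fit within the window; in the method above, a rise in the smoothed discrepancy (CDE) flags a candidate change point.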

Speakers
LB

Lingbin Bian

Monash University


Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 20

8:00pm CEST

P9: Introducing EBRAINS: European infrastructure for brain research
Primary room:
https://webconf.fz-juelich.de/b/kli-mvy-8hk   (No password)

Backup:
https://us02web.zoom.us/j/82862618370?pwd=YWhWSU9wWjRQQ0ZZb2tFNEhaaFp1Zz09
Meeting ID: 828 6261 8370   PW: 4nDpPD
 

Wouter Klijn, Sandra Diaz, Sebastian Spreizer, Thomas Lippert, Spiros Athanasiou, Yannis Ioannidis, Abigail Morrison, Evdokia Mailli, Katrin Amunts, Jan Bjaalie
The Human Brain Project (HBP), the ICT-based Flagship project of the EU, is developing EBRAINS - a research infrastructure providing tools and services which can be used to address challenges in brain research and brain-inspired technology development. EBRAINS will allow the creation of the necessary synergy between different national efforts to address one of the most challenging targets of research. The diverse services of the EBRAINS infrastructure will be illustrated with three use cases spanning the immensely diverse neuroscience field.

The first case is about Viktoria, a researcher who received a grant to investigate the distribution of interneuron types in the cortex and their activity under specific conditions. She needs a place to store, publish and share the data collected to add to the body of knowledge on the human brain. She contacts the HBP service desk and her case is forwarded to the data curation team, a part of the EBRAINS High Level Support Team. The data curators provide data management support and help make her data FAIR by registering it in the Knowledge Graph [1] and the Brain Atlas [2]. Her data is stored for 10 years, given a DOI to allow citations, and can be used by tools integrated in EBRAINS.

The second case is about Johanna, who has developed a software package for the analysis of iEEG data and now wants this tool to be used by as many researchers as possible. She contacts the HBP service desk and is put in contact with the EBRAINS technical coordination team. A co-design process is started together with the co-simulation framework developers, and her software is integrated into the simulation and analysis framework. After integration, Johanna's tool can be used with experimental as well as simulated iEEG data. Her tool is integrated into the operations framework of EBRAINS and is easily deployed on the HPC resources available through EBRAINS.

The third use case is about Jim, a neuroscience professor with a strong focus on teaching. After learning about the HBP he explores the EBRAINS website and discovers the wide range of educational tools available. NEST Desktop [3], for instance, is a web-accessible interface for spiking neuron networks. It allows the creation of a complete simulation with fewer than 10 mouse clicks, without the need to install any software. The output of the simulation can then be ported to Jupyter notebooks hosted on EBRAINS' systems to perform additional analysis. The functionality is accompanied by online MOOCs and detailed documentation, providing enough material to fill multiple courses on neuroscience.

With the EBRAINS infrastructure the HBP is delivering a set of tools and services in support of all aspects of neuroscience research. Get more information at www.ebrains.eu or email the service desk at support@ebrains.eu.

Acknowledgments

This research has received funding from the European Union's Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 785907 (Human Brain Project SGA2).

References

1. Amunts K, et al. (2019) The Human Brain Project—Synergy between neuroscience, computing, informatics, and brain-inspired technologies. PLoS Biology, 17(7).
2. Bjerke IE, et al. (2018) Data integration through brain atlasing: Human Brain Project tools and strategies. Eur Psychiatry. 50:70-6.
3. https://github.com/babsey/nest-desktop

Speakers
avatar for Wouter Klijn

Wouter Klijn

Team leader, Jülich Supercomputing Centre, Simulation Lab Neuroscience,, Forschungszentrum Jülich
Wouter Klijn completed a MSc in Artificial Intelligence from the University of Groningen in the Netherlands. His Master thesis was on the information content of cell species in a 3 layer model of a cortical micro-column. He currently is a teamleader in the Simlab Neuroscience at Forschungszentrum Jülich.



Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 04

9:00pm CEST

P103: Constructing model surrogates of populations of dopamine neuron and medium spiny neuron models for understanding phenotypic differences
Tim Rumbell, Sushmita Allam, Tuan Hoang-Trong, Jaimit Parikh, Viatcheslav Gurev, James Kozloski

Neurons of a specific type have intrinsic variety in their electrophysiological properties. Intracellular parameters, such as ion channel conductances and kinetics, also have high variability within a neuron type, yet reliable functions emerge from a wide variety of parameter combinations. Recordings of electrophysiological properties from populations of neurons under different experimental conditions or perturbations produce sub-groups that form “electrophysiological phenotypes”. For example, different properties may derive from wild-type vs. disease model animals or may change across multiple age groups [1].

Populations of neuron models can represent a neuron type by varying parameter sets, each able to produce the outputs of a recording, and all spanning the ranges of recorded features. We previously generated model populations using evolutionary search with an error function that combines soft-thresholding with a crowdedness penalty in feature space, allowing coverage of the empirical range of features with models. The technique was used to generate a population of dopamine neuron (DA) models, which captured the majority of empirical features, generalized to perturbations, and revealed sets of coefficients predicted to reliably modulate activity [2]. We also used this technique to construct striatal medium spiny neuron (MSN) model populations, which recapitulated the effects of extracellular potassium changes [3] and captured differences in electrophysiological phenotype between MSNs from wild-type mice and from the Q175 model of Huntington's disease. Our approach becomes prohibitively computationally expensive, however, when we seek to produce multiple populations that represent many phenotypes from across a spectrum. For example, to recreate the non-linear developmental trajectory observed across postnatal development of DAs [1] we would need to perform multiple optimizations.

Here we demonstrate the construction of model surrogates that map model parameters to features spanning the range of multiple electrophysiological phenotypes. We sampled from parameter space and simulated models to create a surrogate training set. Using our evolutionary search as prior knowledge of our parameter space enabled a dense sampling in regions of the high-dimensional model parameter space that were likely to produce valid features. We trained a deep neural network with our datasets, producing a surrogate for our model that maps parameter set distributions to output feature distributions. This can be used in place of the neuron model for model sampling, allowing rapid construction of populations of models that match different distributions of features from across multiple phenotypes. We demonstrate this approach using DA developmental age groups and MSN disease progression states as targets, facilitating a mechanistic understanding of parameter modulations that generate differences in phenotypes.
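The shape of such a surrogate can be illustrated with a small dense network trained to map conductance-like parameters to a feature vector; in the sketch below a toy analytic mapping stands in for actual neuron simulations, and all names, sizes and ranges are placeholders.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(5)

def toy_simulator(params):
    """Stand-in for running the neuron model: parameters -> feature vector."""
    g_na, g_k, g_leak = params.T
    rate = 40 * g_na - 25 * g_k + rng.normal(0, 0.5, g_na.shape)   # toy firing rate
    ahp = 10 * g_k * g_leak + rng.normal(0, 0.2, g_na.shape)       # toy AHP depth
    return np.stack([rate, ahp], axis=1)

# Dense sampling of parameter space (in practice, guided by prior search results)
X = rng.uniform(0.0, 1.0, (20000, 3)).astype("float32")
Y = toy_simulator(X).astype("float32")

surrogate = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2),                  # predicted feature vector
])
surrogate.compile(optimizer="adam", loss="mse")
surrogate.fit(X, Y, epochs=10, batch_size=256, verbose=0)

# The surrogate now replaces the simulator for cheap population sampling
candidates = rng.uniform(0.0, 1.0, (100000, 3)).astype("float32")
feats = surrogate.predict(candidates, verbose=0)
keep = (feats[:, 0] > 10) & (feats[:, 0] < 20)  # select a target phenotype range
print(f"{keep.sum()} candidate parameter sets match the target feature range")
```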

References

1. Dufour, M.A., Woodhouse, A., Amendola, J. et al. (2014) Non-linear developmental trajectory of electrical phenotype in rat substantia nigra pars compacta dopaminergic neurons. eLife. 3:e04059.
2. Rumbell, T.H. and Kozloski, J. (2019) Dimensions of control for subthreshold oscillations and spontaneous firing in dopamine neurons. PLoS Comput. Biol., 15(9): e1007375.
3. Octeau, J.C., Gangwani, M.R., Allam, S.L., et al. (2019) Transient, consequential increases in extracellular potassium ions accompany channelrhodopsin2 excitation. Cell Reports. 27: 2249-2261.

Paper links
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007375
https://www.biorxiv.org/content/10.1101/2020.06.01.128033v1



Google Meet Link
https://meet.google.com/qom-htnd-iaq

Speakers
TR

Tim Rumbell

Healthcare and Life Sciences, IBM Research



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 20

9:00pm CEST

P109: Investigating Water Transport Mechanisms in Astrocytes with High Dimensional Parameter Estimation
Pierre-Louis Gagnon, Kaspar Rothenfusser, Nicolas Doyon, Pierre Marquet, François Laviolette

To join : https://meet.google.com/hha-rzff-pzz

Holographic microscopy allows one to measure subtle volume changes of cells submitted to challenges such as an osmotic shock or a sudden increase in extracellular potassium. Interpreting volumetric data, however, remains a challenge. Specifically, relating the amplitude of volume changes to the biophysical properties of cells, such as passive permeability to water or the rate of water transport by cation chloride cotransporters, is a difficult but important task; indeed, mechanisms of volume regulation are key for cell resilience and survival. Experimentally, the second author measured the volume response, as well as the change in sodium concentration, of astrocytes submitted to bath-applied hypo-osmotic solutions, solutions with high potassium concentration, or solutions containing glutamate. Overall, he measured the time course of the response of over 2000 astrocytes. In order to interpret these rich data, we developed a mathematical model based on our biophysical knowledge of astrocytes. This model relates, on the one hand, the experimental perturbations of the extracellular medium and, on the other, the properties of the cell, such as its various conductances or the strengths of its transporters, to its responses in terms of volume change, changes in ionic concentrations, and membrane potential. Determining the biophysical properties of cells thus boils down to a problem of model calibration. This presentation is mainly focused on the work of the first author, who designed and implemented a gradient-based optimization algorithm to estimate model parameters and find the values which best explain the data coming from distinct modalities and astrocytes.

A first computational challenge is to combine data from different modalities. In some experiments the sodium response is measured, while in others the volume response is inferred from phase measurements. We also take advantage of the fact that expert knowledge provides information on variables which are not measured. For example, even if the membrane potential is not measured, we impose that it lies between -100 mV and -50 mV at equilibrium. Combining these different information sources translates into a complex loss function. Furthermore, using a priori knowledge of the parameter values, we developed a Bayesian approach. Another challenge comes from the fact that different measurements come from different cells. Our goal is thus not to infer a single set of parameters but rather to infer how the biophysical parameters are distributed within the population of cells. This was achieved by using a Tikhonov approach which penalizes parameter values lying far from the average of the distribution.
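A hedged sketch of the loss structure described above (a per-cell data misfit, a soft expert-knowledge constraint on resting potential, and a Tikhonov penalty toward the population mean); the model functions, weights and toy values are placeholders, not the actual astrocyte model:

```python
import numpy as np

def loss(theta_cells, data_misfit, v_rest_of, lam_con=1.0, lam_tik=0.1):
    """theta_cells: (n_cells, n_params) biophysical parameters, one row per cell.

    data_misfit(theta) -> per-cell squared error against that cell's recordings
    v_rest_of(theta)   -> per-cell predicted resting potential (mV)
    """
    fit = data_misfit(theta_cells).sum()
    # Soft constraint encoding expert knowledge: resting Vm within [-100, -50] mV
    v = v_rest_of(theta_cells)
    constraint = (np.maximum(0, -100.0 - v)**2 + np.maximum(0, v + 50.0)**2).sum()
    # Tikhonov term: penalize parameters lying far from the population average
    tikhonov = ((theta_cells - theta_cells.mean(axis=0))**2).sum()
    return fit + lam_con * constraint + lam_tik * tikhonov

# Toy usage with placeholder model functions
rng = np.random.default_rng(6)
theta = rng.normal(0, 1, (5, 4))
print(loss(theta,
           data_misfit=lambda th: (th[:, 0] - 1.0)**2,
           v_rest_of=lambda th: -75.0 + 10 * th[:, 1]))
```

In the actual method this loss would be minimized with a gradient-based optimizer; the Tikhonov weight controls how strongly individual cells are pulled toward the inferred population distribution.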

With our algorithm, we were able to infer the strength of the sodium-potassium ATPase pump in each cell with good precision. This could be useful in identifying cells which are more vulnerable. Parameters related to water transport, such as the passive membrane permeability to water or the rate of water transport through cation chloride cotransporters, are elusive and cannot be determined by conventional methods; our inference algorithms provided information on these values. Finally, our algorithm is flexible enough to adapt rapidly to new experiment types or new data modalities.

Speakers
avatar for Pierre-Louis Gagnon

Pierre-Louis Gagnon

PhD Candidate, Département d'Informatique et Génie Logiciel, Université Laval



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 01

9:00pm CEST

P111: The impact of noise on the temporal patterning of neural synchronization
Zoom link: https://iu.zoom.us/j/97589644080


Leonid Rubchinsky
, Joel Zirkle

Neural synchrony in the brain is often present in an intermittent fashion, i.e. there are intervals of synchronized activity interspersed with intervals of desynchronized activity. A series of experimental studies showed that the temporal patterning of neural synchronization may be very specific, exhibiting predominantly short (although potentially numerous) desynchronized episodes [1], and may be correlated with behavior (even if the average synchrony strength is not changed) [2,3,4]. Prior computational neuroscience research showed that a network with many short desynchronized intervals may be functionally different than a network with few long desynchronized intervals [5]. In this study, we investigated the effect of noise on the temporal patterns of synchronization. We employed a simple network of two conductance-based neurons that were mutually connected via excitatory synapses. The resulting dynamics of the network was studied using the same time-series analysis methods used in prior experimental and computational studies. It has been well known that synchrony strength degrades with noise. We found that noise also affects the temporal patterning of synchrony. An increase in the noise level promotes dynamics with predominantly short desynchronizations. Thus, noise may be one of the mechanisms contributing to the short desynchronization dynamics observed in multiple experimental studies.
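The study's network is conductance-based; the toy sketch below instead uses two noisily coupled phase oscillators, simply to illustrate how desynchronized-episode durations can be extracted from a synchrony time series and how their statistics shift with noise level (a stand-in, not the study's model):

```python
import numpy as np

rng = np.random.default_rng(7)
dt, T = 0.001, 100.0                           # s
w1, w2, k = 2*np.pi*5.0, 2*np.pi*5.3, 6.0      # natural frequencies (rad/s), coupling

def desync_durations(noise_sd):
    """Durations of desynchronized episodes for two noisy coupled oscillators."""
    th1, th2, locked = 0.0, 1.0, []
    for _ in range(int(T / dt)):
        th1 += dt*(w1 + k*np.sin(th2 - th1)) + noise_sd*np.sqrt(dt)*rng.standard_normal()
        th2 += dt*(w2 + k*np.sin(th1 - th2)) + noise_sd*np.sqrt(dt)*rng.standard_normal()
        diff = np.angle(np.exp(1j * (th1 - th2)))
        locked.append(abs(diff) < 0.5)         # crude phase-locking criterion
    mask = np.r_[False, ~np.array(locked), False]   # desynchronized samples, padded
    d = np.diff(mask.astype(int))
    return (np.flatnonzero(d == -1) - np.flatnonzero(d == 1)) * dt

for sd in (1.0, 2.0, 4.0):
    dur = desync_durations(sd)
    mean = dur.mean() if dur.size else float("nan")
    print(f"noise {sd}: {dur.size} desync episodes, mean duration {mean:.3f} s")
```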

Acknowledgement

This work was supported by NSF grant DMS 1813819.

References

1. Ahn S, Rubchinsky LL. Short desynchronization episodes prevail in synchronous dynamics of human brain rhythms. Chaos 2013, 23, 013138.

2. Ahn S, Rubchinsky LL, Lapish CC. Dynamical reorganization of synchronous activity patterns in prefrontal cortex - hippocampus networks during behavioral sensitization. Cerebral Cortex 2014, 24, 2553-2561.

3. Ahn S, Zauber SE, Worth RM, Witt T, Rubchinsky LL. Neural synchronization: average strength vs. temporal patterning. Clinical Neurophysiology 2018, 129, 842-844.

4. Malaia E, Ahn S, Rubchinsky LL. Dysregulation of temporal dynamics of synchronous neural activity in adolescents on autism spectrum. Autism Research 2020, 13, 24-31.

5. Ahn S, Rubchinsky LL. Potential mechanisms and functions of intermittent neural synchronization. Frontiers in Computational Neuroscience 2017, 11, 44.

Speakers
avatar for Leonid Rubchinsky

Leonid Rubchinsky

Professor, Department of Mathematical Sciences and Stark Neurosciences Research Institute, Indiana University Purdue University Indianapolis



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 02

9:00pm CEST

P116: Astrocyte simulations with realistic morphologies reveal diverse calcium transients in soma and processes
Zoom: https://tuni.zoom.us/j/63203945445


Ippa Seppala
, Laura Keto, Iiro Ahokainen, Nanna Förster, Tiina Manninen, Marja-Leena Linne

The role of astroglia has long been overlooked in the field of computational neuroscience. Lately, their involvement in multiple higher-level brain functions, including neurotransmission, plasticity, memory, and neurological disorders, has been found to be more significant than previously thought. It has been hypothesised that astrocytes fundamentally affect the information processing power of the mammalian brain. As the glia-to-neuron ratio increases when moving from simpler organisms to more complex ones, it is clear that more attention should be directed to glial involvement. Despite the recent advances in neuroglial research, there still exists a lack of glia-specific computational tools. Astroglia differ considerably from neurons in their morphology as well as their biophysical functions [1], making it difficult to acquire reliable simulation results with simulators made for studying neuronal behaviour. As the differences in cellular dynamics between astrocytes and neurons are significant, there clearly exists a need for tailored methods for simulating the behaviour of glial cells.

One such astrocyte-specific simulator, ASTRO, has been developed [2]. ASTRO builds on the MATLAB and NEURON environments [3] and is capable of representing various biologically relevant astroglial mechanisms such as calcium waves and diffusion. In this work we used ASTRO to simulate several astrocytic functions with the help of existing in vivo morphologies from various brain areas. We concentrated on calcium transients, as calcium-mediated signaling is thought to be the main mechanism of intra- and intercellular messaging between astroglia and other neural cell types. The time scales of these calcium-mediated events have recently been shown to differ considerably between different spatial locations of astrocytes. We were able to reproduce these results in silico by simulating a morphologically detailed computational model that we developed based on previous work [4,5], partly owing to ASTRO's capability to analyse microscopic calcium dynamics in fine processes, branches and leaves.
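A minimal sketch of one reason calcium transients can run on different time scales in different astrocyte compartments: with identical membrane flux densities, the surface-to-volume ratio alone rescales the effective kinetics. All values below are illustrative placeholders, unrelated to ASTRO's detailed morphologies.

```python
import numpy as np

# One soma-like and one process-like compartment with identical membrane flux
# densities; the surface-to-volume ratio (SVR) alone rescales the kinetics.
dt, T = 1.0, 5000.0                        # ms
svr = {"soma": 0.3, "process": 3.0}        # 1/um, illustrative values
j_in, k_out, ca_rest = 0.002, 0.001, 0.05  # influx density, extrusion rate, uM

for name, s in svr.items():
    ca, trace = ca_rest, []
    for step in range(int(T / dt)):
        t = step * dt
        stim = 1.0 if 500 <= t < 700 else 0.0               # brief influx episode
        dca = s * j_in * stim - s * k_out * (ca - ca_rest)  # flux density x SVR
        ca += dt * dca
        trace.append(ca)
    trace = np.array(trace)
    post = trace[int(700 / dt):]                            # decay after stimulus
    half = ca_rest + 0.5 * (trace.max() - ca_rest)
    t_half = np.argmax(post < half) * dt
    print(f"{name}: peak {trace.max():.2f} uM, ~{t_half:.0f} ms to half-decay")
```

The thin process both charges and clears calcium an order of magnitude faster than the soma, mirroring the compartment-dependent transient time scales discussed above.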

With our model, ASTRO proved to be a promising tool for simulating astrocytic functions and could offer novel insights into glia-neuron interactions in future work.

Acknowledgements: The work was supported by Academy of Finland through grants (297893, 326494, 326495) and the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 785907 (Human Brain Project SGA2).

References:

1 Calì C, Agus M, Kare K, Boges DJ, Lehväslaiho H, Hadwiger M, Magistretti PJ. 3D cellular reconstruction of cortical glia and parenchymal morphometric analysis from Serial Block-Face Electron Microscopy of juvenile rat. Progress in Neurobiology 2019 Sep 21; 183: 101696.

2 Savtchenko LP, Bard L, Jensen TP, Reynolds JP, Kraev I, Medvedev N, Stewart MG, Henneberger C, Rusakov DA. Disentangling astroglial physiology with a realistic cell model in silico. Nature Communications 2018 Sep 3; 9(1), pp.1-15.

3 Carnevale T, Hines M. The NEURON Book. Cambridge University Press, Cambridge, UK; 2016.

4 Manninen T, Havela R, Linne M-L. Reproducibility and comparability of computational models for astrocyte calcium excitability. Frontiers in Neuroinformatics 2017 Feb 21;11:11.

5 Manninen T, Havela R, Linne M-L. Computational models for calcium-mediated astrocyte functions. Frontiers in Computational Neuroscience 2018 Apr 4;12:14.

Speakers
avatar for Ippa Seppala

Ippa Seppala

Medicine and Health Sciences, Tampere University



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 08

9:00pm CEST

P117: The interplay of neural mechanisms regulates spontaneous cortical network activity: Inferring the role of key mechanisms using data-driven modeling
Jugoslava Acimovic, Tiina Manninen, Heidi Teppola, Marja-Leena Linne

*************************************************
Link to Zoom for poster presentation: 

https://tuni.zoom.us/j/2645809088

*************************************************

In isolated neural systems devoid of external stimuli, the exchange between neuronal, synaptic and putatively also glial mechanisms gives rise to spontaneous self-sustained synchronous activity. This phenomenon has been extensively documented in dissociated cortical cultures in vitro that are routinely used to study neural mechanisms in health and disease. We examine these mechanisms using a new data-driven computational modeling approach. The approach integrates standard spiking network models, non-standard glial mechanisms and network-level experimental data.

The experimental data represent spontaneous activity in dissociated rat cortical cultures recorded using microelectrode arrays. The recordings were performed under several experimental protocols that involved pharmacological manipulation of network activity. Under each protocol the activity exhibited characteristic network bursts: short intervals (100 ms to 1 s) of intense network-wide spiking interleaved with longer (~10 s) periods of sparse, uncorrelated spiking. The data were analysed to extract, among other properties, the duration, intensity and frequency of burst events [1].
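As an illustration of this kind of analysis (a generic sketch, not the published pipeline of [1]), network bursts can be segmented from multi-unit spike counts by thresholding the binned population rate and then summarizing duration, intensity and frequency:

```python
import numpy as np

rng = np.random.default_rng(8)
# Synthetic raster: sparse background spiking plus periodic network-wide bursts
n_units, T, dt = 60, 300.0, 0.01                 # units, seconds, bin width (s)
rate = np.full(int(T / dt), 0.2)                 # background rate per unit (Hz)
for onset in np.arange(5.0, T, 12.0):            # a burst roughly every 12 s
    i0 = int(onset / dt)
    rate[i0:i0 + int(0.4 / dt)] = 40.0           # ~400 ms of intense firing
counts = rng.poisson(rate[None, :] * dt, (n_units, rate.size)).sum(axis=0)

thresh = 0.25 * n_units * 40.0 * dt              # fraction of peak burst rate
above = np.r_[False, counts > thresh, False]     # pad so each burst has edges
d = np.diff(above.astype(int))
starts, ends = np.flatnonzero(d == 1), np.flatnonzero(d == -1)

durations = (ends - starts) * dt
intensity = np.array([counts[a:b].sum() for a, b in zip(starts, ends)])
print(f"{starts.size} bursts | mean duration {durations.mean():.2f} s | "
      f"mean intensity {intensity.mean():.0f} spikes | "
      f"burst frequency {starts.size / T:.3f} Hz")
```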

The computational model incorporates fast burst propagation and decay mechanisms, as well as slower burst initiation mechanisms. We first constructed the fast part of the model as a generic spiking neuronal network and optimized it to the experimental data describing intra-burst properties, developing a model fitting routine that relies on multi-objective optimization [2]. The optimized 'fast' model was then extended with a selected astrocytic mechanism operating on a time scale similar to that of network burst initiation [3]. Typically, burst initiation is attributed to a combination of noisy inputs and the dynamics of neuronal (and synaptic) adaptation currents. While noise provides the necessary depolarization of the cell membrane, the adaptation currents prohibit fast initiation of the next burst event. The noise may account for the randomness in ion channel opening and closing, spontaneous synaptic release and other sources of randomness; the adaptation accounts for the kinetics of various ion channels. We explore the role of a non-standard deterministic mechanism introduced through a slow inward current from astrocytes to neurons.

We demonstrate that the fast neuronal part of the model successfully reproduces intra-burst dynamics, including the duration and intensity of network bursts. The model is flexible enough to account for several experimental conditions. When coupled to the slower astrocyte-neuron interaction mechanism, the system becomes capable of generating bursts at a frequency proportional to that seen in vitro.

[1] Teppola H, Aćimović J, Linne M-L (2019) Front Cell Neurosci, 13(377):1-22

[2] Aćimović J, Teppola H, Mäki-Marttunen T, Linne M-L (2018) BMC Neurosci 19(2):68-69

[3] Aćimović J, Manninen T, Teppola H, van Albada S, Diesmann M, Linne M-L (2019) BMC Neurosci, 20(1):97

Acknowledgements: This research has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 785907 (Human Brain Project SGA2). The funding has also been received from the Academy of Finland through grants No. 297893, 326494, 326495.



Speakers
avatar for Jugoslava Acimovic

Jugoslava Acimovic

Senior researcher, Faculty of Medicine and Health Technology, Tampere University



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 16

9:00pm CEST

P126: Topological Bayesian Signal Processing with Application to EEG
Christopher Oballe, Alan Cherne, David Boothe, Scott Kerick, Piotr Franaszczuk, Vasileios Maroulas

Electroencephalography is a neuroimaging technique that works by monitoring electrical activity in the brain. Electrodes are placed on the scalp and local changes in voltage are measured over time to produce a collection of time series known as electroencephalograms (EEGs). Traditional signal processing metrics, such as power spectral densities (PSDs), are generally used to analyze EEG since frequency content of EEG is associated with different brain states. Conventionally, PSD estimates are obtained via discrete Fourier transforms. While this method effectively detects low-frequency components because of their high powers, high-frequency activity may go unnoticed because of its relatively weaker power. We employ a topological Bayesian approach that successfully captures even these low-power, high-frequency components of EEG.

Topological data analysis encompasses a broad set of techniques that investigate the shape of data. One of the predominant tools in topological data analysis is persistent homology, which creates topological descriptors called persistence diagrams from datasets. In particular, persistent homology offers a novel technique for time series analysis. To motivate our use of persistent homology to study the frequency content of signals, we establish explicit links between features of persistence diagrams, like the cardinality and spatial distribution of points, and those of the Fourier series of deterministic signals, specifically the location of peaks and their relative powers. The topological Bayesian approach allows for quantification of these cardinality and spatial distributions by modelling persistence diagrams as marked Poisson point processes.
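To make the construction concrete, the sketch below computes 0-dimensional sublevel-set persistence of a 1D signal with a union-find sweep (a generic TDA construction, not the authors' Bayesian point-process model); for a two-tone signal, each oscillatory component leaves its own cluster of lifetimes in the diagram, echoing the Fourier correspondence described above.

```python
import numpy as np

def sublevel_persistence(f):
    """0-dim sublevel-set persistence of a 1D signal: (birth, death) pairs.

    The component of the global minimum never dies and is omitted.
    """
    f = np.asarray(f, dtype=float)
    n = len(f)
    parent = np.full(n, -1)                # -1 marks vertices not yet added

    def find(i):
        root = i
        while parent[root] != root:
            root = parent[root]
        while parent[i] != root:           # path compression
            parent[i], i = root, parent[i]
        return root

    pairs = []
    for i in np.argsort(f, kind="stable"): # sweep threshold from low to high
        parent[i] = i
        roots = {find(j) for j in (i - 1, i + 1) if 0 <= j < n and parent[j] >= 0}
        if len(roots) == 2:                # a local max merges two components
            young, old = sorted(roots, key=lambda r: f[r], reverse=True)
            pairs.append((f[young], f[i])) # elder rule: younger branch dies
            parent[young] = old
            parent[i] = old
        elif len(roots) == 1:
            parent[i] = roots.pop()
    return np.array(pairs)

# Two-tone signal: one cluster of lifetimes per oscillatory component
t = np.arange(0, 4, 1 / 500)
sig = np.sin(2 * np.pi * 6 * t) + 0.3 * np.sin(2 * np.pi * 21 * t)
pd = sublevel_persistence(sig)
lifetimes = np.sort(pd[:, 1] - pd[:, 0])
print(f"{len(pd)} finite points; largest lifetimes: {np.round(lifetimes[-4:], 2)}")
```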

We test our Bayesian topological method on the classification of synthetic EEG. We employ three common classifiers: linear regression, and support vector machines with linear and radial kernels. We simulate synthetic EEG with an autoregressive (AR) model, which works by recasting a standard AR model as linearly filtered white noise, enabling straightforward computation of PSDs. The AR model allows us to control the location and width of peaks in PSDs. With this model in hand, we create five classes of signals with peaks in their PSDs at zero to simulate the approximate 1/f behavior of EEG PSDs, four of which also have oscillatory components at 6 Hz (theta), 10 Hz (alpha), 14 Hz (low beta), and 21 Hz (high beta); the fifth class (null) lacks any such component. We repeat this process for two different widths of peaks, narrow (4 Hz) and wide (32 Hz). With data in hand, we extract features using periodograms, persistence diagrams, and our Bayesian topological method, then independently use these features in classification for the wide and narrow width cases. Preliminarily, while both the Bayesian topological method and the periodogram features obtain near-perfect accuracy in the narrow peak case, the Bayesian topological method outperforms the periodogram features over all tested classifiers in the wide peak case.

https://meet.google.com/ehv-ycmb-ytv


Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 01

9:00pm CEST

P127: A Quantification of Cross-Frequency Coupling via Topological Methods
***Google Meet***
https://meet.google.com/tte-bmdv-jck

Alan Cherne
, Christopher Oballe, David Boothe, Melvin Felton, Piotr Franaszczuk, Vasileios Maroulas

A key feature of electroencephalograms (EEG) is the existence of distinct oscillatory components - theta (4-7 Hz), alpha (8-13 Hz), beta (14-30 Hz), and gamma (40-100 Hz). Cross-frequency coupling has been observed between these frequency bands in both the local field potential (LFP) and the EEG. While the association between activity in distinct oscillatory frequencies and important brain functions is well established, the functional role of cross-frequency coupling is poorly characterized, but it has been hypothesized to underlie cortical functions like working memory, learning, and computation [1,2]. The most common form of cross-frequency coupling observed in brain activity recordings is the modulation of the amplitude of a higher-frequency oscillation by the phase of a lower-frequency oscillation, a phenomenon known as phase-amplitude coupling (PAC).

We present a method for detecting PAC in signals that avoids some pitfalls of existing methods and combines techniques developed in the field of topological data analysis (TDA). When analyzing data using TDA, an object called a persistence diagram (Fig. 1d) is commonly constructed. In the case of time series, the persistence diagram compactly represents all the peaks and valleys that occur in the signal. We inspect the persistence diagrams to detect the presence of phase-amplitude coupling, using the intuition that PAC will impart asymmetry to the upper and lower segments of the diagram. This representation of the data has the advantage that it does not require the choice of Fourier analysis parameters, binning sizes, and phase estimations that are necessary in current methods [3].

We test the performance of our metric on two kinds of synthetic signals (Fig. 1a). The first is a phenomenological model with varying levels of phase-amplitude coupling [4], as defined by the Kullback-Leibler divergence from the uniform case of signals with no PAC (Fig. 1b-c). The second is simulated single-cell neuronal data based on a layer 5 pyramidal cell [5,6]. Finally, we benchmark this method against methods explored previously [4] in EEG data recorded from human subjects.

References

1. VanRullen R, Koch C. Is perception discrete or continuous? Trends Cogn Sci 2003, 7(5), 207-213.
2. Canolty RT, Knight RT. The functional role of cross-frequency coupling. Trends Cogn Sci 2010, 14(11), 507-515.
3. Cohen MX. Assessing transient cross-frequency coupling in EEG data. J Neurosci Meth 2008, 168, 494-499.
4. Tort ABL, Komorowski R, Eichenbaum H, Kopell N. Measuring phase-amplitude coupling between neural oscillations of different frequencies. J Neurophysiol 2010, 104, 1195-1210.
5. Felton MA Jr, Yu AB, Boothe DL, Oie KS, Franaszczuk PJ. Resonance analysis as a tool for characterizing functional division of layer 5 pyramidal neurons. Front Comput Neurosci 2018, May 3.
6. Traub RD, Contreras D, Cunningham MO, et al. Single-column thalamocortical network model exhibiting gamma oscillations, sleep spindles, and epileptogenic bursts. J Neurophysiol 2005, 93(4), 2194-2232.
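For context, the sketch below generates a phenomenological PAC signal and scores it with the Tort-style, Kullback-Leibler-based modulation index used here as the benchmark [4]; the persistence-diagram metric itself is not reproduced, and all signal parameters are placeholders.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(9)
fs = 1000
t = np.arange(0, 20, 1 / fs)
# Phenomenological PAC signal: theta phase modulates gamma amplitude
pac_strength = 0.8                                  # 0 = no coupling
gamma_amp = 1 + pac_strength * np.sin(2 * np.pi * 6 * t)
sig = (np.sin(2 * np.pi * 6 * t)
       + 0.4 * gamma_amp * np.sin(2 * np.pi * 60 * t)
       + 0.5 * rng.standard_normal(t.size))

def bandpass(x, lo, hi):
    b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(sig, 4, 7)))      # low-frequency phase
amp = np.abs(hilbert(bandpass(sig, 40, 100)))       # high-frequency amplitude

# KL-based modulation index: divergence of the phase-binned amplitude
# distribution from uniform (no PAC gives a value near zero)
n_bins = 18
bins = np.digitize(phase, np.linspace(-np.pi, np.pi, n_bins + 1)) - 1
bins = np.clip(bins, 0, n_bins - 1)
mean_amp = np.array([amp[bins == k].mean() for k in range(n_bins)])
p = mean_amp / mean_amp.sum()
mi = (np.log(n_bins) + (p * np.log(p)).sum()) / np.log(n_bins)
print(f"modulation index: {mi:.4f}")
```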

Speakers
avatar for Alan Cherne

Alan Cherne

Graduate Student, University of Tennessee



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 13

9:00pm CEST

P134: A fluid hierarchy in multisensory integration properties of large-scale cortical networks
Zoom link: https://uva-live.zoom.us/j/99497243655

Ronaldo Nunes, Marcelo Reyes, Raphael de Camargo, Jorge Mejias
A fundamental ingredient for perception is the integration of information from different sensory modalities. This process, known as multisensory integration (MSI), has been studied extensively using animal and computational models [1]. It is not yet clear, however, how different brain areas contribute to MSI, and identifying the relevant areas remains challenging. Part of the reason is that simultaneous electrophysiological recording from different brain areas has become feasible only recently [2], and the intensity, noise profile and delay of responses differ across sensory signals [1]. Furthermore, computational models have traditionally focused on only a few areas, a limitation imposed by the lack of reliable anatomical data on brain networks.

We present here a theoretical and computational study of the mechanisms underlying MSI in the mouse brain, constraining our model with a recently acquired anatomical brain connectivity dataset [3]. Our simulations of the resulting large-scale cortical network reveal the existence of a hierarchy of crossmodal excitability properties, with areas at the top of the hierarchy being the best candidates for integrating information from multiple modalities. Furthermore, our model predicts that the position of a given area in this hierarchy is highly fluid and depends on the strength of the sensory input received by the network. For example, we observe that the particular set of areas integrating visuotactile stimuli changes depending on the level of visual contrast. By simulating a simplified network model and developing its corresponding mean-field approximation, we determine that the origin of such hierarchical dynamics is the structural heterogeneity of the network, which is a salient property of cortical networks [3,4]. Finally, we extend our results to macaque cortical networks [5] to show that the hierarchy of crossmodal excitability is also present in other mammals, and we characterize how frequency-specific interactions are affected by hierarchical dynamics and define functional connectivity [6]. Our work provides a compelling explanation as to why it is not possible to identify unique MSI areas even for a well-defined multisensory task, and suggests that MSI circuits are highly context-dependent.
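A toy sketch of the underlying intuition (random heterogeneous weights and a linear rate network, not the mouse connectome dataset or the full nonlinear model): even in this reduced setting, the ranking of crossmodal responders shifts as the visual input strength changes.

```python
import numpy as np

rng = np.random.default_rng(10)
N = 40                                     # number of cortical areas
W = rng.lognormal(-3.0, 1.0, (N, N))       # heterogeneous long-range weights
np.fill_diagonal(W, 0.0)
W /= 1.5 * np.abs(np.linalg.eigvals(W)).max()   # keep the dynamics stable
vis, tac = 0, 1                            # primary visual / tactile areas

def steady_response(contrast):
    inp = np.zeros(N)
    inp[vis], inp[tac] = contrast, 1.0     # visuotactile input, variable contrast
    # linear rate dynamics tau dr/dt = -r + W r + inp  =>  r* = (I - W)^(-1) inp
    return np.linalg.solve(np.eye(N) - W, inp)

for c in (0.2, 1.0, 5.0):
    r = steady_response(c)
    top = np.argsort(r[2:])[::-1][:5] + 2  # strongest crossmodal responders
    print(f"visual contrast {c}: top integrating areas {top}")
```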

Acknowledgements

This study was financed in part by the University of Amsterdam and the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior Brasil (Capes) - Finance Code 001.

References

1) Chandrasekaran C. Computational principles and models of multisensory integration. Curr. Op. Neurobiol. 2017, 43, 25-34.

2) Hong G, Lieber CM. Novel electrode technologies for neural recordings. Nat. Rev. Neurosci. 2019, 20, 330-45.

3) Gamanut R et al. The mouse cortical connectome, characterized by an ultra-dense cortical graph, maintains specificity by distinct connectivity profiles. Neuron 2018, 97, 698-715.

4) Mejias JF, Longtin A. Differential effects of excitatory and inhibitory heterogeneity on the gain and asynchronous state of sparse cortical networks. Front. Comput. Neurosci. 2014, 8, 107.

5) Mejias JF, Murray JD, Kennedy H, Wang XJ. Feedforward and feedback frequency-dependent interactions in a large-scale laminar network of the primate cortex. Sci. Adv. 2016, 2:e1601335.

6) Nunes RV, Reyes MB, De Camargo RY. Evaluation of connectivity estimates using spiking neuronal network models. Biol. Cybern. 2019, 113, 309-320.

Speakers
avatar for Jorge Mejias

Jorge Mejias

Swammerdam Institute for Life Sciences, University of Amsterdam


Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 19

9:00pm CEST

P140: How hard are NP-complete problems for humans?

Zoom Link: https://unimelb.zoom.us/j/99879943198?pwd=K1FSK1d4V1dYYXYyT1c1ZUIwWlBVUT09
Password: 105270

Poster link: https://prezi.com/view/TcBGr7hdgut8rgw7blAQ

Pablo Franco, Karlo Doroc, Nitin Yadav, Peter Bossaerts, Carsten Murawski

It is widely accepted that humans have limited cognitive resources and that these finite resources impose restrictions on what the brain can compute. Although endowed with limited computational power, humans are still presented daily with decisions that require solving complex problems. This raises a tension between computational capacity and the computational requirements of solving a problem. In order to understand how the hardness of problems affects problem-solving ability, we propose a measure to quantify the difficulty of problems for humans. For this we make use of computational complexity theory, a widely studied theory used to quantify the hardness of problems for electronic computers. It has been proposed that computational complexity theory can be applied to humans, but it remains an open empirical question whether this is the case.


We study how the difficulty of problems affects decision quality by studying a measure of expected difficulty over random instances (i.e., random cases) of a problem. This measure, which we refer to as instance complexity (IC), quantifies the expected hardness of decision problems, that is, problems that have a yes/no answer. More specifically, this measure captures how constrained the problem is, based on a small number of features of the instance. Overall, IC has three main advantages. Firstly, it is a well-studied measure that has been proven to be applicable to a large range of problems for electronic computers. Secondly, it allows calculation of the expected hardness of a problem ex ante, that is, before solving the problem. And lastly, it captures complexity that is independent of a particular algorithm or model of computation; it is thus considered to characterize the inherent computational complexity of random instances, independent of the system solving them.

In this study we test whether IC is a generalizable measure, for humans, of the expected hardness of solving a problem. For this purpose, we ran a set of experiments in which human participants solved instances of one of three widely studied NP-complete problems: the Travelling Salesperson Problem, the Knapsack Problem, or Boolean Satisfiability. Instances varied in their IC. We show that participants expended more effort on instances with higher IC, but that decision quality was lower on those instances. Together, our results suggest that IC can be used to measure the expected computational requirements of solving random instances of a problem, based on an instance's features.

The findings of this study speak to the broader question of whether there is a link between the model of computation in humans and in electronic computers. Specifically, this study gives evidence that the average hardness of random instances can be characterized via the same set of parameters for both computing systems. This supports the proposal that computational complexity theory applies to humans. Moreover, we argue that decision-makers could use IC to estimate the expected costs of performing a task, since IC can be estimated without having to solve the problem. Furthermore, the results of this study suggest that IC captures the hardness of a random instance and, most importantly, that people modulate their effort according to IC. Altogether, this opens future avenues for research, based on IC, that could shed light on the process of cognitive resource allocation in the brain.

Speakers
avatar for Pablo Franco

Pablo Franco

PhD candidate, University of Melbourne
Human Decision-Making, Computational Complexity, Neuroimaging (fMRI), Open Science, R....


Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 17

9:00pm CEST

P143: Biophysically-detailed multiscale model of macaque auditory thalamocortical circuits reproduces physiological oscillations
Link to Google Meet: https://meet.google.com/gjo-tdks-ufa

Salvador Dura-Bernal, Erica Y Griffith, Annamaria Barczak, Monica N O’Connell, Tammy McGinnis, Haroon Anwar, William W Lytton, Peter Lakatos, Samuel A. Neymotin

We used the NEURON simulator with NetPyNE to develop a biophysically detailed model of the macaque auditory thalamocortical system. We simulated a cortical column with a depth of 2000 µm and a diameter of 200 µm, containing over 12,000 neurons and 30 million synapses. Neuron densities, laminar locations, classes, morphology and biophysics, and connectivity at the long-range, local and dendritic scales were derived from published experimental data. We used the model to investigate the mechanisms and function of neuronal oscillatory patterns observed in the auditory system in electrophysiological data recorded simultaneously from nonhuman primate primary auditory cortex (A1) and the medial geniculate body (MGB), while the awake subjects were presented with different classes of auditory stimuli, including speech.

The A1 model includes 6 cortical layers and multiple neuronal populations: 4 excitatory types (intratelencephalic (IT), spiny stellate (ITS), pyramidal-tract (PT), and corticothalamic (CT)) and 4 inhibitory types (somatostatin (SOM), parvalbumin (PV), vasoactive intestinal peptide (VIP), and neurogliaform (NGF)). Cells were distributed across layers 2-6, except NGF cells, which were also included in L1, as these have been identified as important targets of the thalamic matrix. The A1 model was reciprocally connected to the thalamic model to mimic anatomically verified connectivity. The thalamic model included the MGB and the thalamic reticular nucleus (TRN). The MGB includes core and matrix populations of thalamocortical (TC) neurons with distinct projection patterns to different layers of A1, and thalamic interneurons (TI) projecting locally. The TRN includes thalamic reticular neurons (RE) that primarily inhibit the MGB.

Thalamocortical neurons were driven by artificial spike generators simulating background inputs from non-modeled brain regions. Auditory-stimulus-related inputs were simulated using phenomenological models of the cochlea/auditory nerve and the inferior colliculus (IC) that captured the main physiological transformations occurring in these regions. The output of the IC model was then used to drive the thalamocortical populations. This allowed us to provide arbitrary sounds as input to the model, including those used during our macaque in vivo experiments, thus facilitating matching the model to the data.

We used evolutionary algorithms to tune the network to generate experimentally constrained firing rates for each of the 42 neural populations. We tuned 12 high-level connectivity parameters, including background input and E->E, E->I, I->E and I->I weight gains, within biologically constrained parameter ranges. Each simulated second required approximately 1 hour on 96 supercomputer cores. For the evolutionary optimization we ran 100 simultaneous simulations (9,600 cores) every generation. To the best of our knowledge, this is the first time evolutionary optimization has been successfully used for large-scale biophysically detailed network models. A schematic sketch of such a tuning loop is given below.
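The sketch below is a schematic stand-in for this loop: a cheap surrogate function replaces the hour-long detailed simulation, and the selection/mutation scheme and all parameter values are hypothetical (the study itself used NetPyNE's evolutionary optimization on a supercomputer).

```python
# Schematic evolutionary tuning of 12 gain parameters against 42 target rates.
import numpy as np

rng = np.random.default_rng(1)
n_params, n_pops = 12, 42
pop_size, n_gen = 20, 30
bounds = np.tile([0.5, 2.0], (n_params, 1))          # allowed gain ranges
target_rates = rng.uniform(1.0, 10.0, size=n_pops)   # target per-population rates

M = rng.uniform(0.0, 1.0, size=(n_pops, n_params))   # fixed surrogate mapping

def simulate(gains):
    # Stand-in for the detailed simulation: maps 12 gains -> 42 firing rates.
    return M @ gains

def fitness(gains):
    return -np.linalg.norm(simulate(gains) - target_rates)

pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, n_params))
for _ in range(n_gen):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-(pop_size // 2):]]       # select best half
    children = parents + rng.normal(0.0, 0.05, parents.shape)  # mutate
    pop = np.clip(np.vstack([parents, children]), bounds[:, 0], bounds[:, 1])

best = pop[int(np.argmax([fitness(g) for g in pop]))]
print("best fitness:", fitness(best))
```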

We will use our model to determine the mechanistic origins of spatiotemporal neuronal oscillatory patterns observed in vivo, using an iterative modeling and data-analysis process. At the end of the process, to confirm model predictions, we will use targeted deep brain electrical microstimulation and pharmacological manipulations.

Funding: NIH NIDCD R01DC012947, U24EB028998, U01EB017695, DOH01-C32250GG-3450000, Army Research Office W911NF-19-1-0402

Speakers
avatar for Salvador Dura-Bernal

Salvador Dura-Bernal

Assistant Professor, State University of New York (SUNY) Downstate



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 10

9:00pm CEST

P152: Face-selective neurons arise spontaneously in untrained deep neural networks
Please join via the following Google Meet URL:
https://meet.google.com/fbv-gutq-vnr

Min Song, Seungdae Baek, Jaeson Jang, Gwangsu Kim, Se-Bum Paik

In the primate brain, neurons that respond selectively to faces have been observed and are considered the basis of face recognition [1]. Although such face-selective neurons are observed in infant animals [2], the origin of face-selectivity is still under debate: conflicting findings have raised the question of whether this neuronal selectivity can arise spontaneously [3] or requires training from visual experience [2]. Here, we show that face-selective neurons can spontaneously arise in untrained deep neural networks (DNNs), building on the previous notion that DNNs can be considered a model of the visual cortex that performs human-level visual functions and predicts neuronal responses. Using a biologically inspired neural network, AlexNet, we measured the responses of the last convolutional layer to image sets of a face class and 15 non-face classes. We found that face-selective neurons arise in untrained AlexNet with randomly permuted weights, where a face-selective neuron was defined as one showing a significantly higher response to face images than to non-face images. To qualitatively examine the feature-selective responses of these face-selective neurons, we reconstructed the preferred feature images of individual neurons using the reverse-correlation method. We found face components, such as eyes, nose, and mouth, in the preferred feature images of face-selective neurons, whereas no noticeable shape was found for neurons with no selectivity. Next, to test whether the selective responses of these neurons provide sufficient information to classify a face from other objects, we trained a support vector machine (SVM) to classify whether a given image was a face using the neural responses of the untrained network. The SVM trained with only face-selective neurons showed significantly better performance than one trained with neurons with no selectivity. Next, to examine whether the face-selective neurons show the viewpoint-invariant characteristics observed in monkeys, we measured the responses of the permuted AlexNet while face images from five different angles were provided to the network. Surprisingly, the face-selective neurons in the network showed viewpoint-invariant responses, and their level of invariance increased along the network hierarchy of the permuted AlexNet, similar to that in monkey IT. Lastly, to examine the origin of face-selectivity in untrained neural networks, we implemented a randomly initialized network in which the values in each weight kernel were randomly drawn from a weight distribution of the pre-trained AlexNet. We found that the number of face-selective neurons abruptly decreases when the weight variation is reduced to 52% of that in the pre-trained network. These results suggest that statistical variation present in the random feedforward projections could solely drive the emergence of innate face-selective neurons in the visual system. Overall, our findings provide insight into the origin of cognitive functions in both artificial and biological neural networks.
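A minimal sketch of the selectivity analysis, assuming torchvision's AlexNet as the network and random tensors as stand-ins for the face/non-face image sets; the permutation scheme, layer choice and significance test are simplified relative to the study.

```python
# Sketch: permute pretrained AlexNet weights, then test last-conv-layer units
# for face vs non-face selectivity.
import torch
import torchvision
from scipy.stats import ttest_ind

model = torchvision.models.alexnet(weights="DEFAULT")   # downloads pretrained weights
with torch.no_grad():
    for p in model.parameters():
        flat = p.view(-1)
        p.copy_(flat[torch.randperm(flat.numel())].view_as(p))  # permute per tensor

features = model.features   # conv stack; its output plays the "last conv layer"

def unit_responses(images):          # images: (B, 3, 224, 224)
    with torch.no_grad():
        return features(images).flatten(1)    # (B, n_units)

# random tensors stand in for the normalised face / non-face image sets
face_imgs = torch.randn(32, 3, 224, 224)
nonface_imgs = torch.randn(32, 3, 224, 224)
r_face, r_non = unit_responses(face_imgs), unit_responses(nonface_imgs)

# per-unit two-sample t-test; "face-selective" = significantly higher to faces
t, pval = ttest_ind(r_face.numpy(), r_non.numpy(), axis=0)
face_selective = (t > 0) & (pval < 0.001)
print("face-selective units:", int(face_selective.sum()))
```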

[1] Tsao, D. Y., Freiwald, W. A., Tootell, R. B. H. & Livingstone, M. S. A cortical region consisting entirely of face-selective cells. Science (2006) 311, 670–674

[2] Livingstone, M. S. et al. Development of the macaque face-patch system. Nat. Commun. (2017) 8

[3] Deen, B. et al. Organization of high-level visual cortex in human infants. Nat. Commun. (2017) 8

This work was supported by Grant Numbers 2019M3E5D2A01058328 and 2019R1A2C4069863.

Speakers
avatar for Min Song

Min Song

PhD Student, Bio and Brain engineering, KAIST



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 07

9:00pm CEST

P153: Number-selective units can spontaneously arise in untrained deep neural networks
Link: https://meet.google.com/kym-ugkp-ghx
jaesonjang@kaist.ac.kr

Jaeson Jang, Gwangsu Kim, Seungdae Baek, Min Song, Se-Bum Paik

Number sense, the ability to estimate the number of visual items (numerosity) without counting, is observed in newborn animals of various species. Single-neuron recordings in numerically naïve monkeys have shown that individual neurons can respond selectively to numerosity [1]. This suggests that number-selective neurons arise spontaneously as a foundation of innate number sense, but it remains unclear how these neurons originate in the absence of learning. Here, using a deep neural network (DNN) modeled on the structure of the visual pathway (AlexNet), we show that number tuning of network units can arise spontaneously in untrained networks, even in the absence of any learning. To devise an untrained network, we randomly permuted the weights of the filters in each convolutional layer of the pre-trained AlexNet and examined the responses to images of dot patterns representing numbers from 1 to 30. For the stimuli, we used three different sets to ensure invariance of the number tuning to certain geometric factors (stimulus size, density, and area). A network unit was considered number-selective if its response changed significantly across numerosities (p < 0.01, two-way ANOVA) with no significant effect of stimulus set and no interaction between the two factors (p > 0.01). Importantly, number-selective units were observed in the permuted AlexNet (9.58% of units in the last convolutional layer), even though the network was never trained for any task after being permuted. The observed number-selective units followed the Weber-Fechner law observed in the brain, with the width of the tuning curves increasing proportionally with the numerosity. We also showed that these units enable the network to perform a number discrimination task, by training a support vector machine (SVM) to compare the numerosities in two different images using the responses of number-selective units. Next, to explain how number-selective units emerge in permuted networks, we hypothesized that tuning to various numerosities can be initiated from monotonic unit activities in the earlier layer, whose responses monotonically decrease or increase as the given numerosity increases. To test this idea, we performed a model simulation of the randomly weighted summation of increasing and decreasing tuning curves and confirmed that tuning to all the tested numerosities was successfully generated. Notably, the curves tuned to smaller numbers were generated by the summation of strongly weighted decreasing activities and weakly weighted increasing activities. As expected, in the permuted AlexNet, we observed that number-selective units tuned to smaller numbers receive strong inputs from the decreasing units, and vice versa. These results suggest that number-tuned neurons may arise spontaneously from the statistical variation of feedforward projections in the visual pathway during early development. This finding provides new insights into the origin of cognitive functions in biological brains, as well as in artificial neural networks.
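A minimal sketch of the two-way ANOVA selection criterion on a synthetic, log-Gaussian-tuned unit (the responses are fabricated for illustration; the p-value thresholds follow the abstract).

```python
# Sketch: is a unit number-selective? Main effect of numerosity must be
# significant; stimulus set and interaction must not be.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
numerosities = np.repeat(np.arange(1, 31), 3 * 10)   # 30 numerosities x 3 sets x 10 reps
stim_set = np.tile(np.repeat([0, 1, 2], 10), 30)     # 3 stimulus sets
# synthetic unit tuned to numerosity 8, identical across sets (log-Gaussian,
# consistent with Weber-Fechner-like tuning)
resp = np.exp(-np.log(numerosities / 8) ** 2) + rng.normal(0, 0.1, numerosities.size)

df = pd.DataFrame({"resp": resp, "num": numerosities, "set": stim_set})
table = sm.stats.anova_lm(ols("resp ~ C(num) * C(set)", data=df).fit(), typ=2)
p_num = table.loc["C(num)", "PR(>F)"]
p_set = table.loc["C(set)", "PR(>F)"]
p_int = table.loc["C(num):C(set)", "PR(>F)"]
selective = (p_num < 0.01) and (p_set > 0.01) and (p_int > 0.01)
print("number-selective:", selective)
```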

Acknowledgments

We thank the National Research Foundation of Korea (NRF) for supporting Se-Bum Paik with 2019R1A2C4069863 and 2019M3E5D2A01058328.

References

1. Viswanathan P & Nieder A. Neuronal correlates of a visual ‘sense of number’ in primate parietal and prefrontal cortices. Proc. Natl. Acad. Sci. 2013, 110, 11187–11192.

Speakers
avatar for Jaeson Jang

Jaeson Jang

Graduate student, Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 13

9:00pm CEST

P154: Rhythmic eye movement predicts active perception of ambiguous visual stimulus
See Presentation at (9PM ~ 10PM)
https://meet.google.com/fao-zkqw-fea


Woochul Choi, Hyeonsu Lee, Se-Bum Paik

When an ambiguous sensory stimulus is given, our brain often actively interprets it to resolve the ambiguity. A particular example is "bistable perception", where a given stimulus can be interpreted as two different states: under this ambiguity, our perception alternates quasi-periodically between the two possible interpretations, with the switching frequency varying across individuals. This characteristic dynamics of bistable perception is thought to reveal how the brain turns incomplete visual signals into a perceptual decision, and a number of studies have investigated the mechanism of its rhythmic perceptual alternation. However, understanding the dynamics of bistable perception has proved elusive, as it is a complicated process involving interrelated cognitive and motor processes, including top-down intention and eye movements. Recent studies reported that specific eye movements occur during bistable perception [1], but it is still not known whether eye movements can actively induce the perceptual decision or merely accompany it once made. Here, we show that eye movements may not solely induce perceptual behavior, but that eye movement patterns reflect the perceptual decisions made when interpreting ambiguous stimuli. We performed a human psychophysics experiment with simultaneous eye-tracking, using three bistable stimuli: racetrack, rotating cylinder, and Necker cube. We found that eye gaze slowly oscillates with 5-10 s intervals, the period of which was positively correlated with the frequency of perceptual switches. In addition, we found that eye gaze moved in opposite directions before the two different perceptual decisions were made; the preceding eye gaze can thus predict the perceptual decision with ~90% accuracy. We also found that the frequency of saccadic eye movements during free viewing, which does not require any active interpretation, was correlated with the period of perceptual switches, implying that the dynamics of eye movements reflect the characteristics of bistable perception. Next, to isolate the effect of eye movement from intention, we first asked the subjects to hold a strong intention to switch (or keep) their perceived state during the experiments. With such manipulations, we found that both perceptual decisions and eye movements were significantly altered compared to non-intended trials. We then controlled the visual stimuli so that the subjects' eye movements followed the traces of the intention-controlled trials, without an actual intention to change their behavior. Under this condition, even though the subjects' eye movements mimicked those of the intended trials, perceptual decisions were not significantly biased. This suggests that eye movements alone cannot bias perceptual behavior in bistable perception. Taken together, the results suggest that 1) rhythmic eye movements correlate with active visual perception, 2) the preceding eye gaze trajectory predicts individual decisions, but 3) eye movements may not solely induce perceptual decisions. These results collectively suggest a relationship between eye movement control, top-down intention, and active perception.

Acknowledgements

This research was supported by National Research Foundation of Korea, Grant Number: 2019M3E5D2A01058328, 2019R1A2C4069863

References

1. Polgári P et al. Novel method to measure temporal windows based on eye movements during viewing of the Necker cube. PLoS One 2020, 15.

Speakers
WC

Woochul Choi

Korea Advanced Institute of Science and Technology



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 19

9:00pm CEST

P161: Less is more: a new computational approach for analyzing EEG for brain computer interface applications

Meeting ID: 97860059707

Password: ****** (Click the link below to find out more)


Session Link: Less is More


Brain-computer interfaces (BCIs) are among the most exciting applications of computational neuroscience and have increasingly been the focus of research around the world. Currently, much effort in BCI research is devoted to offline analysis of recorded data to achieve higher decoding accuracies [1]. Although this has led to the development of new methods and algorithms, the problem of online decoding of subject intentions remains a challenging one [2]. One of the restrictive steps in BCI design is the need for complex preprocessing to extract features from the recorded signals, which the classifiers then use to distinguish the intentions of the subject [1]. The other hindering factor is the variability of the recorded signals. EEG recordings for BCI applications use between 20 and 128 electrodes at sampling rates from 250 Hz up to 1 kHz. These data are recorded from the whole brain, and within- and between-subject variability intensifies the problem even further. This problem is currently mitigated through manual, careful feature engineering and tweaking of classifier parameters.

We propose to reduce the complexity of the architecture by 1) using only raw recorded signals with no preprocessing, 2) reducing the number of channels used for classification, and 3) using a single convolutional neural network (CNN) for classification across all subjects. We have limited our preliminary results to EEG analysis of a left/right/rest motor imagery task, as this is the most popular signal used in BCI applications. We have previously shown [3] that our proposed CNN can reliably decode intentions using the same architecture for multiple subjects. Here, we extend our method to 4 new subjects and show that drastically reducing the number of channels has a negligible effect on decoding results. We have also expanded our decoding to 3-class classification and obtained the same decoding accuracies using only a few channels.
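A minimal sketch of such a network, assuming hypothetical shapes (4 channels, 750 samples per trial, 3 classes); this is not the authors' exact architecture.

```python
# Sketch: a single compact 1D CNN operating on raw, unpreprocessed EEG.
import torch
import torch.nn as nn

class RawEEGNet(nn.Module):
    def __init__(self, n_channels=4, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=25, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=11, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):           # x: (batch, channels, time), raw signal
        return self.net(x)

model = RawEEGNet()
dummy = torch.randn(8, 4, 750)      # 8 trials of raw EEG from a few channels
print(model(dummy).shape)           # -> torch.Size([8, 3])
```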

Our results show that drastically reducing the complexity of the data can still yield comparable performance while using a single decoder for multiple subjects. Since the choice of channels for decoding is based on the mental task, one can envision using these methods to create practical, reliable online BCI solutions. Moreover, using raw data for analysis and a single architecture to classify all subjects allows dedicated hardware to be designed that further improves the efficiency of the system.

Speakers
avatar for Farhad Goodarzy

Farhad Goodarzy

MDHS, The University of Melbourne



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 09

9:00pm CEST

P162: Approximative models for enabling multi-scale simulations
Zoom: 

https://tuni.zoom.us/j/64993452348

Mikko Lehtimäki, Marja-Leena Linne, Lassi Paunonen

In computational neuroscience there is a great demand to incorporate more molecular- and cellular-level detail into mathematical models of neuronal networks. This is deemed necessary in order to recreate phenomena such as learning, memory and behavior in silico. However, numerical simulation of such multi-scale models is resource intensive, if not impossible. This problem has been partially overcome by using simplified synapse, neuron and population models that replace biological variables and mechanisms with phenomenological descriptions. While useful, this approach causes information loss that can diminish the value of such models, as the variables may lack biological meaning.

In this study we present approximation as an alternative to simplification. Using mathematical model order reduction (MOR) methods, approximations can be derived algorithmically. Here we compute reduced models with the Discrete Empirical Interpolation Method (DEIM) [1] along with its advanced variants. The appeal of these methods is that there is no need to linearize the model, make assumptions about the system behavior, or discard any variables. A reduced model can be simulated efficiently in a low-dimensional subspace where a smaller number of equations needs to be solved, and an approximation of the original high-dimensional model can be reconstructed at any time (Fig 1). The acceleration in simulation time gained this way requires no special hardware and can be readily implemented in any programming language.
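For concreteness, a minimal implementation of the greedy DEIM index-selection step of [1], with random data standing in for snapshots of the nonlinear term.

```python
# Sketch: DEIM interpolation-point selection from a POD basis.
import numpy as np

def deim_indices(U):
    """Greedy DEIM point selection for a basis U (n x m, columns = POD modes)."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # interpolate the j-th mode using the modes and points chosen so far
        c = np.linalg.solve(U[idx, :j], U[idx, j])
        r = U[:, j] - U[:, :j] @ c          # residual of that interpolation
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

snapshots = np.random.rand(1000, 50)        # stand-in nonlinear-term snapshots
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
print(deim_indices(U[:, :10]))              # 10 interpolation indices
```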

We discuss results from approximating three nonlinear systems: chemical reactions in the synapse, a compartmental neuronal network, and a multi-dimensional mean-field model [2-4]. We have made the code to approximate the mean-field model open source [5]. We demonstrate the value of reduced models in computational neuroscience and explain the pros and cons of several different reduction methods with regard to the above models. Implementing mathematical model order reduction algorithms in neuronal simulators and using reduced models in neuromorphic hardware are especially promising applications of these methods for enabling multi-scale simulations of brain activity.

Acknowledgements

M.L. is supported by TUNI Graduate School, M.-L.L by Academy of Finland grant 297893 and L.P. by grants 298182 and 310489. This research has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 785907 (Human Brain Project SGA2).

References

[1] S. Chaturantabut and D. Sorensen, "Nonlinear model reduction via discrete empirical interpolation," SIAM Journal on Scientific Computing, vol. 32, no. 5, pp. 2737-2764, 2010.

[2] M. Lehtimäki, L. Paunonen, S. Pohjolainen, and M.-L. Linne, "Order reduction for a signaling pathway model of neuronal synaptic plasticity," IFAC-PapersOnLine, vol. 50, no. 1, pp. 7687-7692, 2017.

[3] M. Lehtimäki, L. Paunonen, and M.-L. Linne, "Projection-based order reduction of a nonlinear biophysical neuronal network model," 2019 Proceedings of the IEEE Conference on Decision and Control (CDC). IEEE, 2020 (in press).

[4] M. Lehtimäki, I. Seppälä, L. Paunonen, and M.-L. Linne, "Accelerated simulation of a neuronal population via mathematical model order reduction," Proceedings of the IEEE International Conference on Artificial Intelligence Circuits and Systems, 2020 (in press).

[5] https://github.com/Mikkolehtimaki/neuro-mor

Speakers
avatar for Mikko Lehtimäki

Mikko Lehtimäki

Doctoral student, Faculty of Medicine and Health Technology, Tampere University



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 14

9:00pm CEST

P166: Compensating method for the lack of connection on topographic neuron network edge
https://meet.google.com/wfh-rekd-etw

Cecilia Romaro, Antonio Carlos Roque, Jose Roberto Castilho Piqueira
Studying the dynamics of a neuronal network has been a challenge in computational neuroscience [1,2]. Doing so in a network with topographic organization is even more demanding due to the boundary condition, i.e., the interruption of the topographic connection pattern at the network edges, which changes activity at the network boundary. Neurons on the edge of the network exhibit anomalous behavior due to a lack (or excess) of connections, and a torus solution may introduce undesired oscillations. To address this, we present a method based on the mean-field description (i.e., first- and second-order statistics of the network dynamics) to sustain boundary-neuron activity, matching the activity of neurons in the core of the layer, without introducing an oscillatory component.

This method is based on the rescaling presented in previous CNS work (CNS*2018) and consists of:

Step 1: Calculate the scale factor k_i for every neuron i in the network: k_i is given by the average total number of connections received divided by the average total number of connections that would be received if the network had no boundaries, i.e., were an infinite set of neurons;

Step 2: Increase the synaptic weights by dividing them by the square root of the scale factor;

Step 3: Provide each cell with a DC input current corresponding to the total input lost due to the network edge (boundary cut).

In essence, the boundary correction method numerically estimates the normalized connection-density function in the first step, then weights each neuron's connections based on this density, and finally balances the threshold to preserve the neuron/layer activity; a minimal sketch is given below. This method was successfully applied to consolidated models such as Brunel [1] and Potjans-Diesmann [2], among others.
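A minimal sketch of Steps 1-3 for a 1D layer with Gaussian distance-dependent connectivity. The geometry, rates and the gain converting lost input into current are illustrative placeholders; in the method itself the DC term follows from the first- and second-order statistics.

```python
# Sketch: boundary compensation via scale factors, weight rescaling and DC input.
import numpy as np

N, sigma = 200, 10.0
pos = np.arange(N)

def p_conn(d):                        # Gaussian connection probability vs distance
    return np.exp(-d**2 / (2 * sigma**2))

# Step 1: k_i = expected in-degree / in-degree of a boundary-free network
full = p_conn(np.arange(-N, N + 1)).sum()           # "infinite" network in-degree
k = np.array([p_conn(pos - x).sum() for x in pos]) / full

# Step 2: increase weights by 1/sqrt(k)
w0 = 1.0
w = w0 / np.sqrt(k)

# Step 3: DC current standing in for the lost input (placeholder gain)
nu, gain = 8.0, 0.05                  # presynaptic rate (Hz), current per input
I_dc = gain * w0 * nu * full * (1.0 - k)

print(k[[0, N // 2]], w[[0, N // 2]], I_dc[[0, N // 2]])  # edge vs core neuron
```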

First, the models were reimplemented and the results reproduced. Second, a topographic pattern of connection was introduced into the models, under the assumption that neurons near each other have a higher probability of connection than those farther apart. A different activity then arises in the network boundary neurons, and sometimes in core neurons. The method was applied and the activities were driven back to the original ones.

The algorithm of the rescaling method can be found in the example applications available on GitHub (https://github.com/ceciliaromaro/recoup-the-first-and-second-order-statistics-of-neuron-network-dynamics).

Acknowledgements

This work was produced as part of the activities of FAPESP Research, Disseminations and Innovation Center for Neuromathematics (Grant 2013/07699-0, S. Paulo Research Foundation).

References

[1] Brunel N. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience 2000, 8(3), 183-208.

[2] Potjans TC, Diesmann M. The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model. Cerebral Cortex 2014, 24, 785-806.

Speakers
avatar for Cecilia Romaro

Cecilia Romaro

Physics Department, University of Sao Paulo



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 03

9:00pm CEST

P167: Modelling ipRGC-influenced light response on circadian phase, melatonin suppression and subjective sleepiness.
Zoom Meeting :    https://uni-sydney.zoom.us/j/99904979215

Tahereh Tekieh, Peter Robinson, Steven Lockley, Stephan McCloskey, M. S. Zobaer, Svetlana Postnova

A physiologically based model of arousal dynamics is extended to incorporate the spectral effects of light (as an input to the model) on circadian rhythms, melatonin dynamics and subjective sleepiness. To do this, photopic illuminance in the model is replaced with melanopic irradiance, which reflects the role of melanopsin, a photopigment expressed in intrinsically photosensitive retinal ganglion cells (ipRGCs). Melanopsin-expressing ipRGCs are the primary cells in the retina mediating the effects of light on different non-visual brain regions. Melanopsin is short-wavelength sensitive, and its main target is the circadian clock located in the suprachiasmatic nuclei (SCN), whose output signals regulate sleep/wake cycles, alertness, and hormone secretion. The melanopic irradiance is thus used as the light input to the model, affecting the dynamic circadian oscillator, the profile of melatonin (a hormone produced in the pineal gland), and sleepiness. The dynamic circadian oscillator is extended according to the melanopic irradiance definition and tested against experimental circadian phase dose-response and phase-response data. The function describing melatonin suppression in the presence of light is re-calibrated against melatonin dose-response data for monochromatic and polychromatic light sources. A new light-dependent term is then introduced into the homeostatic weight component of subjective sleepiness to represent the direct effect of light; the new term responds dynamically to light and is calibrated against experimental data with different light spectra. The model predictions are compared to a total of 14 experimental studies containing 26 data sets for 14 different spectral light profiles. The extended melanopic model shows an average reduction in prediction error relative to the previous model. Overall, incorporating melanopic irradiance allows simulation of the wavelength-dependent responses to light observed in experiments and explains most of the observations. Models of the effects of light on circadian dynamics, sleep, and sleepiness need to use ipRGC-influenced responses as a non-visual measure of light, e.g., melanopic irradiance, instead of the traditionally used illuminance based on the visual system.
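An illustration of the melanopic weighting of a light spectrum. The Gaussian "sensitivity" curve below is a crude placeholder for the standardised melanopic efficiency function, and the spectrum is synthetic; only the structure of the computation (a spectrally weighted integral) reflects the model input described above.

```python
# Sketch: melanopic irradiance as a spectrally weighted integral of the SPD.
import numpy as np

wl = np.arange(380.0, 781.0)                  # wavelength grid, nm (1 nm steps)
spd = np.exp(-((wl - 555.0) / 80.0) ** 2)     # toy spectral power, W m^-2 nm^-1
s_mel = np.exp(-((wl - 480.0) / 40.0) ** 2)   # melanopsin peaks near 480 nm

melanopic_irradiance = np.sum(spd * s_mel) * 1.0   # integrate over 1 nm bins
print(f"melanopic irradiance: {melanopic_irradiance:.2f} (melanopic-weighted W/m^2)")
```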


Article DOI: 10.1111/jpi.12681
Article: Modelling melanopsin-mediated effects of light on circadian phase, melatonin suppression and subjective sleepiness
Journal: Journal of Pineal Research

Speakers
avatar for Tahereh Tekieh

Tahereh Tekieh

Postdoctoral Research Fellow, The University of Sydney



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 03

9:00pm CEST

P168: Spike initiation properties in the axon: simulations in a biophysically detailed model
https://meet.google.com/nib-pjkg-fhj

Nooshin Abdollahi, Amin Kamaleddin Ezabadi, Stephanie Ratte, Steve Prescott

Spikes are usually initiated at the axon initial segment (AIS), the most excitable site of a neuron. Yet other regions of the neuron are also excitable; indeed, axonal excitability is critical for spike propagation. While there are many studies of somatic and dendritic excitability, axonal excitability has yet to be thoroughly investigated in most neurons because the small size of the axon precludes most experiments. Some recordings from the cut ends of axons (i.e., blebs) suggest that axons do not spike repetitively during sustained depolarization but, instead, spike only at the onset of abrupt depolarization, consistent with class 3 excitability. However, it remains unclear whether this transient spiking accurately reflects axonal excitability or is an artifact of axon damage. Using a novel optogenetic approach, recent experiments from our lab have shown that the axon does indeed have class 3 excitability. Although the optogenetic method is less invasive than bleb recordings, it still has limitations that necessitated simulations in order to definitively interpret the experimental results. I have built a multicompartment model of a pyramidal neuron with a detailed myelinated axon that reproduces the experimental data collected in our lab. The model has helped us confirm the site of spike initiation based on the shape (kinkiness) of spikes recorded in the soma. Simulations also confirmed that even when targeting the axon for photostimulation, a small amount of stray light can hit the dendrites and evoke spikes in the AIS. The results ultimately confirm that unlike spike initiation in the AIS, which relies on class 1 excitability, spike propagation in the axon occurs on the basis of class 3 excitability (Fig. 1).

Speakers
NA

Nooshin Abdollahi

Institute of Biomaterials and Biomedical Engineering, Univeristy of Toronto



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 20

9:00pm CEST

P170: Optically imaged map of orientation preferences in visual cortex of an Australian marsupial, the Tammar wallaby Macropus eugenii
VIRTUAL ZOOM MEETING: https://unimelb.zoom.us/s/94928398820?pwd=N1VzcFpoQ1MzamtRUi9aRmF4aEczUT09#success   Password: 011798
Please email me if you have difficulty entering during the time slot at jungy@student.unimelb.edu.au


Young Jun Jung, Ali Almasi, Shi Sun, Shaun Cloherty, Hamish Meffin, Michael Ibbotson, Molis Yunzab, Sebastien Bauquier, Marilyn Renfree

Orientation selectivity (OS) is a key feature of neurons in the mammalian primary visual cortex. In rodents and rabbits, these neurons are randomly distributed across V1, while in cats and all primates, cells with similar OS preferences cluster together into cortical columns. Could it be that mammals with smaller primary visual cortices, relatively undifferentiated cortices, or poor-resolution vision are restricted to having salt-and-pepper OS maps? This appears not to be the case: in the gray squirrel, a highly visual rodent with good spatial resolution and a highly differentiated V1, no clear functional organisation of OS preferences exists in V1. We do not yet know why the maps coding OS preferences are so radically different in rodents/rabbits, compared with the clear similarities across other mammalian visual systems.

Several models of cortical OS maps have been created incorporating Hebbian plasticity, intracortical interactions and the properties of growing axons, but these models mainly focus on maps arising from intracortical interactions. Here we focus on two other factors contributing to map formation: the topography of the retina and phylogeny. One promising method of predicting whether a species has pinwheel maps is to look at the central-to-peripheral ratio (CP ratio) of retinal cell density. We have found that animals with high CP ratios (>7) have orientation columns, while those with low CP ratios (<4) have random OS maps.

We studied a highly visual marsupial, the Tammar wallaby (Macropus eugenii), which represents a phylogenetically distinct branch of mammals for which the orientation map structure is unknown. The topography of RGCs in wallabies is very similar to that of cats and primates: they have a high density of RGCs in the retinal specialization, indicated by a high CP ratio of 20. If orientation columns are the mammalian norm, and if species with high CP ratios have OS maps, we would predict the existence of orientation columns in wallaby cortex. We used intrinsic optical imaging and multi-channel electrophysiology to examine the functional organization of the wallaby cortex. We found robust OS in a high proportion of cells in the primary visual cortex and clear orientation columns similar to those found in primates and cats, but with a bias towards vertical and horizontal preferences, suggesting lifestyle-driven variations. The findings suggest that orientation columns are the norm, and that rodents and rabbits may be unusual in terms of mammalian cortical architecture.


Speakers
avatar for Young Jun Jung

Young Jun Jung

Graduate Researcher, National Vision Research Institute
Young Jun (Jason) Jung completed his Bachelor of Science degree with Honours majoring in Neuroscience at the University of Melbourne in 2017. He is currently a Ph.D. candidate at the National Vision Research Institute (NVRI) and Optometry and Vision Sciences Department of Melbourne...



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 05

9:00pm CEST

P176: Using Deep Convolutional Neural Networks to Visualise the Receptive Fields of High Level Visual Cortical Neurons
https://meet.google.com/dwm-mfct-zsx

Brett Schmerl, Declan Rowley, Elizabeth Zavitz, Hsin-Hao Yu, Nicholas Price, Marcello Rosa

Understanding the image features encoded by neurons throughout the hierarchy of visual cortical areas, particularly in higher areas with more complex response properties than V1, is a challenging yet fundamental goal in visual neuroscience, often pursued by visualising their patterns of responses [1]. Visualising the image features that drive the activity of individual units in a hierarchical visual processing system, in order to understand the system's functioning and information representation, is also encountered in the study of deep convolutional neural networks.

In this study we train deep convolutional neural networks on spiking data recorded from individual neurons in a mid-tier visual area (the dorsomedial area, DM) of the anaesthetised marmoset monkey while the animal is presented with changing patterns of spatiotemporal white noise [2]. We show that convolutional neural networks are capable of learning statistically significant input-output relationships of these neurons and are thus able to classify the spiking behaviour of a neuron given the stimuli. Furthermore, we applied deconvolutional techniques [3], used to visualise the image features encoded by the convolutional model, to visualise the input image features that are significant in determining the spiking behaviour, by proxy, of the neuron. A comparison between the features recovered using this technique and those recovered by traditional methods of analysis is presented.
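A toy sketch of the visualisation idea, using plain input gradients on a hypothetical stimulus-to-spike-probability model as a simple stand-in for the deconvolutional method of [3]; the model, shapes and data are all illustrative.

```python
# Sketch: which pixels of a white-noise frame drive a fitted model unit?
import torch
import torch.nn as nn

model = nn.Sequential(                     # toy stimulus -> spike-probability model
    nn.Conv2d(1, 8, 9), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1), nn.Sigmoid(),
)

stim = torch.randn(1, 1, 64, 64, requires_grad=True)   # white-noise frame
p_spike = model(stim)
p_spike.sum().backward()                   # gradient of spike probability w.r.t. input
saliency = stim.grad.abs().squeeze()       # image features driving the unit, by proxy
print(saliency.shape)                      # -> torch.Size([64, 64])
```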

1. Jones, J. P., & Palmer, L. A. (1987). The two-dimensional spatial structure of simple receptive fields in cat striate cortex. Journal of neurophysiology, 58(6), 1187-1211.

2. Lui, L. L., Bourne, J. A., & Rosa, M. G. (2005). Functional response properties of neurons in the dorsomedial visual area of New World monkeys (Callithrix jacchus). Cerebral Cortex, 16(2), 162-177.

3. Zeiler, M. D. & Fergus R. Visualizing and understanding convolutional networks. In European conference on computer vision, 818–833. Springer, 2014.

Speakers
BS

Brett Schmerl

School of Information Technology and Mathematical Sciences, University of South Australia



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 18

9:00pm CEST

P196: Robustness of ultrasonic modulation of the subthalamic nucleus to GABAergic perturbation
_Google meet link_
https://meet.google.com/eoi-ewnc-mmo

Thomas Tarnaud
, Wout Joseph, Ruben Schoeters, Luc Martens, Timothy Van Renterghem, Emmeric Tanghe

Introduction: Deep brain stimulation (DBS) is a surgical treatment for movement and neuropsychiatric disorders; the subthalamic nucleus (STN) is the most common target for the treatment of advanced Parkinson's disease (PD). Although DBS has proven effective, the procedure is associated with surgical risks such as infection and haemorrhage. Consequently, we investigated the possibility of using ultrasound (US) as a non-invasive and reversible alternative to conventional DBS. Here, we expand on our study of the spiking behaviour of a computational STN model [1] insonicated with continuous-wave and pulsed US of different intensities. In particular, the sensitivity of the simulated STN response to hyperpolarizing input (e.g., GABAergic globus pallidus afferents) is investigated.

Methods: A computational model for insonication of the STN is created by combining the Otsuka model of a plateau-potential-generating STN neuron [2] with the bilayer sonophore model [3-4]. After careful validation of our model implementation against theoretical and experimental literature, simulations are performed of the STN neuron insonicated with different ultrasonic intensities and pulse waveforms. The robustness of the simulated response to GABAergic input is tested by injecting brief hyperpolarizing currents.

Results: Our model predicts intensity-dependent spiking modes of STN neurons. For continuous waveforms, the three observed spiking modes, in order of increasing ultrasonic intensity, are low-frequency spiking, high-frequency (>120 Hz) spiking with significant spike-frequency and spike-amplitude adaptation, and a silenced mode. Simulation results indicate that only the silenced mode is robust to brief hyperpolarizing input. In contrast, the STN response saturates robustly to the pulse repetition frequency for pulsed US of sufficiently large intensity and pulse repetition frequency.

Conclusion: Model results for the ultrasonically stimulated plateau-potential-generating STN predict intensity-dependent spiking modes that could be useful for the treatment of PD. High-frequency spiking of the STN might "jam" pathological network activity or create an information lesion through short-term synaptic depression, which are potential mechanisms ascribed to conventional DBS. In contrast, the silenced mode, in which the STN transmembrane potential is fixed to a stable plateau, might be functionally equivalent to subthalamotomy and to depolarization block of STN efferents during DBS. The former and the latter STN modes are induced robustly by pulsed and continuous-wave US, respectively.

_References_

[1] Tarnaud T, Joseph W, Martens L, Tanghe E. Computational modeling of ultrasonic subthalamic nucleus stimulation. IEEE Transactions on Biomedical Engineering, 2018.

[2] Otsuka T, Abe T, Tsukagawa T, Song WJ. Conductance-based model of the voltage-dependent generation of a plateau potential in subthalamic neurons. Journal of Neurophysiology 2004, 92(1), 255-264.

[3] Plaksin M, Shoham S, Kimmel E. Intramembrane cavitation as a predictive bio-piezoelectric mechanism for ultrasonic brain stimulation. Physical Review X 2014, 4(1), 011004.

[4] Lemaire T, Neufeld E, Kuster N, Micera S. Understanding ultrasound neuromodulation using a computationally efficient and interpretable model of intramembrane cavitation. Journal of Neural Engineering 2019, 16(4), 046007.


Speakers
TT

Thomas Tarnaud

INTEC WAVES, University of Ghent - IMEC



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 15

9:00pm CEST

P197: Computational Modelling of the Locus Coeruleus
Ruben Schoeters, Thomas Tarnaud, Wout Joseph, Luc Martens, Robrecht Raedt, Emmeric Tanghe

The locus coeruleus (LC) is one of the most dominant noradrenergic systems in the brain, supplying the central nervous system with norepinephrine through widespread efferent projections. Consequently, it plays an important role in attention, feeding behaviour and sleep-to-wake transitions [1]. Moreover, studies have shown that the locus coeruleus is correlated with the anticonvulsive action of vagus nerve stimulation (VNS) [2]. To date, however, the underlying mechanisms of VNS and the LC are not fully understood. We therefore derived a computational model, so that in silico investigations can be performed. Based on the work of Carter et al. (2012) [3], we created a single-compartment model that matched our in vivo measurements, which were extracted from rat brains at the 4Brain lab. The original model by Carter et al. (2012) was a conductance-based model of locus coeruleus and hypocretin neurons, used to investigate the sleep-to-wake transition. When the hypocretin neurons are omitted, our measured tonic firing rate of 3.35±0.49 Hz could not be reached with the original two-compartment model by means of continuous current injection: the maximal achievable tonic firing rate was 0.75 Hz for a current of 0.4 A/m², while bursting followed by depolarization block was observed for higher inputs. When combined into a single-compartment model, the required frequency is reached with a 0.39 A/m² current injection. There were no notable differences in state occupancies that could explain the difference in firing rate; we therefore concluded that the lower firing rate observed in the two-compartment model is solely due to spatial filtering. Finally, we compared the pinch response. The pinch was modelled as a rectangular current pulse; with an amplitude of 0.0314 A/m² and a pulse duration of 0.9 s, an equivalent firing rate (13.64±2.75 Hz vs. 13.86 Hz) and refractory period (1.186±0.234 s vs. 1.09 s, measurements and model respectively) are observed.
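A toy stand-in for the stimulation protocol described above: a leaky integrate-and-fire unit (not the conductance-based LC model) receives tonic drive plus a 0.9 s rectangular "pinch" pulse, and the firing rate is reported per epoch. All parameter values here are arbitrary, chosen only to give LC-like rates of a few Hz.

```python
# Sketch: tonic drive + rectangular pinch pulse in a simple LIF model.
import numpy as np

dt, T = 1e-4, 6.0                          # time step and duration (s)
t = np.arange(0.0, T, dt)
I = np.full(t.size, 1.2)                   # tonic drive (arbitrary units)
I[(t >= 3.0) & (t < 3.9)] += 1.4           # rectangular pinch pulse, 0.9 s long

tau, v_th, v_reset = 0.2, 1.0, 0.0
v, spikes = 0.0, []
for i in range(t.size):
    v += dt / tau * (-v + I[i])
    if v >= v_th:                          # threshold crossing = spike
        v = v_reset
        spikes.append(t[i])
spikes = np.array(spikes)
print("tonic rate:", (spikes < 3.0).sum() / 3.0, "Hz")                      # ~3 Hz
print("pinch rate:", ((spikes >= 3.0) & (spikes < 3.9)).sum() / 0.9, "Hz")  # ~10 Hz
```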

_References_

1. Purves D, Augustine GJ, Fitzpatrick D, Hall WC, Lamantia A-S, Mcnamara JO, Willians SM. Neuroscience, volume 3. 2004.

2. Raedt R, Clinckers R, Mollet L, Vonck K, El Tahry R, Wyckhuys T, De Herdt V, Carrette E, Wadman W, Michotte Y, Smolders I, Boon P, Meurs A. Increased hippocampal noradrenaline is a biomarker for efficacy of vagus nerve stimulation in a limbic seizure model. Journal of Neurochemistry 2011, 117(3), 461-469.

3. Carter ME, Brill J, Bonnavion P, Huguenard JR, Huerta R, de Lecea L. Mechanism for Hypocretin-mediated sleep-to-wake transitions. Proceedings of the National Academy of Sciences 2012, 109(39), E2635-E2644.

Speakers
RS

Ruben Schoeters

INTEC WAVES, University of Ghent - IMEC



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 16

9:00pm CEST

P206: Parameter exploration in neuron and synapse models driven by stimuli from living neuron recordings
Manuel Reyes-Sanchez, Irene Elices, Rodrigo Amaducci, Francisco B Rodriguez, Pablo Varona

Virtual Room
https://meet.google.com/ufe-frxk-mby

In this work, we present an approach to automatically explore neuron and synapse model parameters, to achieve target dynamics or characterize emergent phenomena that rely on the temporal structure of biological recordings used as inputs to the models. The associated exploration and mapping allow us to assess the role of different elements in the equations of the neuron and synapse models in building a nontrivial integration of sequential information, which is also reflected in the time course of the corresponding model response. We illustrate this methodology in the context of dynamical invariants, defined as cycle-by-cycle preserved relationships between the time intervals that build robust sequences in neural rhythms. We have recently unveiled the existence of such invariants in the crustacean pyloric CPG, even in the presence of intrinsic or induced large variability in the rhythms (Elices et al., 2019). The proposed strategy can be generalized to many types of neural recordings and models.

During the protocol, we feed biological data with a characteristic temporal structure to different model neurons. The biological recordings are preprocessed online to adapt the corresponding time and amplitude scales to those of the synapse and neuron models, using a set of algorithms developed in our previous work (Amaducci et al., 2019; Reyes-Sanchez et al., 2020). Our methodology can then map the neuron and synapse parameters that yield a predefined dynamics, taking into account the temporal structure of the model output. The algorithms allow for a full characterization of the region of parameter space that contributes to generating the predefined dynamics.

To illustrate this protocol, which combines experimental recordings and theoretical paradigms, we have applied it to the search for dynamical invariants established between a living CPG cell and a model neuron connected through a graded synapse model. Dynamical invariants are preserved cycle by cycle, even during transients. In our validation tests, we have mapped the presence of a linear relationship, i.e., an invariant, between the interval defined by the beginning of the bursting activity of the two neurons (the first-to-first spike interval between the living and model neurons) and the instantaneous period of their sequence in such hybrid circuits; a minimal sketch of this analysis is given below. The protocol has been used to assess the role of model and synaptic parameters in the generation of the dynamical invariant, achieving a highly efficient mapping in a few minutes. We argue that this approach can also be employed to readily characterize optimal parameters in the construction of hybrid circuits built with living and artificial neurons and connections and, more generally, to validate neuron and synapse models.
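A minimal sketch of the invariant detection step, with synthetic burst-onset times standing in for the recorded living neuron and the model neuron; the linear dependence of the interval on the period is built into the synthetic data for illustration.

```python
# Sketch: detect a dynamical invariant as a cycle-by-cycle linear relationship
# between the first-to-first spike interval and the instantaneous period.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n_cycles = 100
period = 1.0 + 0.05 * rng.standard_normal(n_cycles).cumsum()   # variable rhythm
period = np.clip(period, 0.6, 1.6)
burst_living = np.cumsum(period)                   # burst onsets, living neuron
delay = 0.25 * period + 0.01 * rng.standard_normal(n_cycles)   # built-in invariant
burst_model = burst_living + delay                 # burst onsets, model neuron

interval = burst_model - burst_living              # first-to-first interval
res = linregress(period, interval)
print(f"slope={res.slope:.3f}, r^2={res.rvalue**2:.3f}")   # high r^2 -> invariant
```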

Funded by AEI/FEDER PGC2018-095895-B-I00 and TIN2017-84452-R

References

Amaducci, R., Reyes-Sanchez, M., Elices, I., Rodriguez, F. B., and Varona, P. (2019). RTHybrid: a standardized and open-source real-time software model library for experimental neuroscience. Front. Neuroinform. 13, 11. doi:10.3389/fninf.2019.0001

Elices, I., Levi, R., Arroyo, D., Rodriguez, F. B., and Varona, P. (2019). Robust dynamical invariants in sequential neural activity. Sci. Rep. 9, 9048. doi:10.1038/s41598-019-44953-2

Reyes-Sanchez, M., Amaducci, R., Elices, I., Rodriguez, F. B., and Varona, P. (2020). Automatic adaptation of model neurons and connections to build hybrid circuits with living networks. Neuroinformatics. doi:10.1007/s12021-019-09440-z

Speakers
avatar for Manuel Reyes-Sanchez

Manuel Reyes-Sanchez

PhD Student, Grupo de Neurocomputacion Biologica, Universidad Autonoma de Madrid
Hybrid circuits, closed-loop, computational neuroscience, machine learning.



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 10

9:00pm CEST

P207: Hybrid robot driven by a closed-loop interaction with a living central pattern generator with online feedback
Google Meet link: meet.google.com/gde-trxz-vdu
Rodrigo Amaducci, Irene Elices, Manuel Reyes-Sanchez, Alicia Garrido-Peña, Rafael Levi, Francisco B Rodriguez, Pablo Varona
A hybrid robot, or hybrot, is a technology that combines living cells and networks with robotics. This technology is largely undeveloped and has mainly been implemented with neuron cultures and multichannel electrode arrays [1,2]. Hybrots have great potential for studying the neural network properties involved in the control of locomotion, sensorimotor transformation and behavior. Central pattern generators (CPGs) are neural circuits that produce robust rhythmic sequences involved in motor functions such as breathing or walking. Because of their role in generating and coordinating motor rhythms, bio-inspired CPGs have been widely employed in robotic paradigms [3], including the design of novel mechanisms for autonomous locomotion [4]. However, the intrinsic mechanisms that give rise to the coordination of living CPG dynamics have not yet been used in a hybrid robot implementation.

In this work, we present the first hybrot controlled by a living CPG from the crab Carcinus maenas. The robot and the living neural circuit are connected following a closed-loop protocol that involves a dynamic-clamp setup communicating both elements through Bluetooth signaling. We show that effective robotic locomotion is achieved when it is controlled and coordinated by the flexible rhythmic sequences produced by the circuit of living motoneurons. The robot is equipped with a light sensor that sends sensory feedback to the CPG in the form of intracellular current injection. We report an analysis of the presence of dynamical invariants in the intervals that build up the sequential activations of the living circuit [5], and of how they are transmitted to the robot, resulting in coordinated locomotion. In turn, the robotic sensory feedback is translated into a variation of the living network activity while keeping the motor sequence, which results in a coherent response to changes in the environmental light.

Acknowledgements We acknowledge support from AEI/FEDER PGC2018-095895-B-I00 and TIN2017-84452-R.

References
1. Potter SM. Hybrots: hybrid systems of cultured neurons + robots, for studying dynamic computation and learning. Proc 2002 Simul Adapt Behav 7 Work Mot Control Humans Robot Interplay Real Brains Artif Devices. Edinburgh, Scotland; 2002.
2. Li Y, Sun R, Wang Y, Li H, Zheng X. A novel robot system integrating biological and mechanical intelligence based on dissociated neural network-controlled closed-loop environment. PLoS One 2016;11:e0165600. https://doi.org/10.1371/journal.pone.0165600
3. Ijspeert AJ. Central pattern generators for locomotion control in animals and robots: a review. Neural Netw. 2008;21:642-53. http://dx.doi.org/10.1016/j.neunet.2008.03.014
4. Herrero-Carrón F, Rodríguez FB, Varona P. Bio-inspired design strategies for central pattern generator control in modular robotics. Bioinspiration and Biomimetics 2011;6:16006. http://dx.doi.org/10.1088/1748-3182/6/1/016006
5. Elices I, Levi R, Arroyo D, Rodriguez FB, Varona P. Robust dynamical invariants in sequential neural activity. Sci Rep. 2019;9:9048. https://doi.org/10.1038/s41598-019-44953-2

Speakers
avatar for Rodrigo Amaducci

Rodrigo Amaducci

PhD Student, Grupo de Neurocomputación Biológica (GNB), Universidad Autónoma de Madrid



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 11

9:00pm CEST

P208: NEST 3.0, NESTML and NEST Desktop: new user experience and classroom readiness
Markus Diesmann, Jochen M. Eppler, Susanne Kunkel, Charl Linssen, Håkon Mørk, Abigail Morrison, Hans Ekkehard Plesser, Sebastian Spreizer, Stine Brekke Vennemo

Over the last year, major advances have taken place in NEST Simulator and its associated tooling. This poster describes updates in NEST 3.0, NESTML and NEST Desktop.

NEST 3.0 is the next major version update of NEST. With it, changes are made not only to the user interface but also to the inner workings of NEST. In the PyNEST interface, new concepts are introduced for the compact and efficient description of large populations of neurons and synapses as well as distributions of parameter values. The PyNEST Topology module is integrated into the standard PyNEST package, so that creation and connection of spatial networks can now be performed by calling the standard functions. NEST 3.0 improves the expressiveness of model descriptions and the speed of network creation. A new and improved infrastructure for handling recordings has been implemented, with built-in backends to record to memory, ASCII files and screen.
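A short sketch in the style of the new PyNEST interface, showing parameter objects at creation time, spatial networks via the standard functions, and per-device recording backends. This is illustrative rather than a tested script; see the NEST 3.0 documentation for exact argument names.

```python
# Sketch: NEST 3.0-style model description in PyNEST.
import nest

nest.ResetKernel()

# distributions of parameter values applied directly at creation
pop = nest.Create("iaf_psc_alpha", 1000,
                  params={"V_m": nest.random.normal(mean=-65.0, std=2.0)})

# spatial (formerly Topology) networks through standard Create/Connect
layer = nest.Create("iaf_psc_alpha",
                    positions=nest.spatial.grid(shape=[10, 10]))
nest.Connect(layer, layer,
             {"rule": "pairwise_bernoulli",
              "p": nest.spatial_distributions.gaussian(nest.spatial.distance,
                                                       std=0.25)})

# recording backend chosen per device (memory, ascii, screen, ...)
rec = nest.Create("spike_recorder", params={"record_to": "ascii"})
nest.Connect(pop, rec)
nest.Simulate(100.0)
```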

NESTML is a domain-specific language for neurons and synapses. It serves as a specification and exchange format, where dynamical systems are expressed in continuous time (e.g., using differential equations) and have the additional ability to receive and emit precisely timed events (representing action potentials). Feature highlights include a concise yet expressive syntax inspired by Python, direct entry of dynamical equations, and imperative programming-style specification of event handling and generation.

NESTML comes with a powerful toolchain, written in Python, and is released under the GNU GPL v2.0. It parses a given model and performs code generation (“transpiling”). The generated code targets a particular hardware and software platform (e.g. NEST running on a high-performance computing cluster) with highly optimised and performant code. The toolchain performs detailed analytical and numerical analysis to yield optimal solver recommendations, and precise solutions where possible. Target platforms can be added flexibly using Jinja2 templates. As a result, NEST users can now specify neuron and synapse models in the same way they specify the network structure, using a domain- specific language that is independent of the underlying C++ code.

NEST Desktop is a web-based graphical user interface which enables the rapid construction, parametrization, and instrumentation of neuronal network models typically used in computational neuroscience. The client-server architecture supports installation-free access to NEST. The primary objective is to provide an accessible classroom tool that allows users to rapidly explore neuroscience concepts without the need to learn a simulator control language at the same time. NEST Desktop opens NEST technology for a new user group, namely students in the classroom, and contributes to equal opportunities in education.

These advances, combined with work on the user-level documentation and deployment mechanisms, contribute to the creation and maturation of the NEST ecosystem as a component of a software infrastructure for neuroscience.

Funding statement

This research has received partial funding from the Helmholtz Association through IVF no. SO-092 (Advanced Computing Architectures, ACA) and the European Union’s Horizon 2020 research and innovation programme under grant agreement No 720270 (HBP SGA1) and No 785907 (HBP SGA2) and the EU Horizon 2020 programme “DEEP-EST” (contract no. ICT-754304). Use of the JURECA supercomputer through VSR grant JINB33.

Speakers
avatar for Charl Linssen

Charl Linssen

Jülich Research Centre, Germany


Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 12

9:00pm CEST

P217: Autaptic Connections Can Implement State Transitions in Spiking Neural Networks for Temporal Pattern Recognition
 
Google meet link: https://meet.google.com/tyj-vofp-krq

Muhammad Yaqoob, Volker Steuber, Borys Wróbel

In biological neuronal networks, autaptic connections, or autapses, are synaptic connections between the axon and dendrites of a single neuron; they can be either excitatory (glutamatergic) or inhibitory (GABAergic). Since their first discovery four decades ago [2], the existence of autapses has been documented in various brain regions, including the neocortex, hippocampus and cerebellum [1]. However, the functional role of autapses is still unknown [3]. In this work, we show the importance of autapses for temporal pattern recognition in simple spiking neural networks. The computational task is to recognise a specific signal sequence in a stream of inputs, so that a single output neuron spikes for the correct input sequence while remaining silent for all other input signals.

Having understood the role of autapses and the resulting switching mechanism in networks evolved for recognising signals of length two and three [4], we were able to define rules for constructing the topology of a network handcrafted for recognising a signal sequence of length m with n interneurons. We show that autapses are crucial for switching the network between states, and observe that a minimal network recognising a signal of length m requires at least (m-1) autaptic connections. In contrast to the solutions obtained by the evolutionary algorithm in [4], we show that the number of interneurons required to recognise a signal is equal to the length of the signal.

Finally, we demonstrate that a successful recogniser network (where n is greater than or equal to three) must have three specialised neurons: a "lock", a "switch" and an "accept" neuron, in addition to the other state-maintaining neurons (N0, N1, ..., Nn-4), whose number depends on the length of the signal. All interneurons in the network require an excitatory autaptic connection, apart from the "accept" neuron. The "lock" neuron is always active (thanks to an excitatory autapse); it prevents the output from spiking except when the network receives the second-to-last correct input signal, thereby allowing the output neuron to spike in response to the correct last input. If the lock is released by the second-to-last correct input signal, the "accept" neuron (i) produces one or more spikes in the output neuron when the network receives the last correct input and (ii) sends a signal to the "switch" neuron, which transforms the network back into the start state. The "switch" neuron is responsible for the transition between the network start state and the other possible inter-signal network states. In the future, we intend to explore other functional roles of autapses and higher-order loops in larger neuronal networks.
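As a toy illustration of the latching role an excitatory autapse can play (a hypothetical discrete-time LIF neuron, not the authors' evolved networks; all parameters are invented for the sketch):

```python
import numpy as np

# Hypothetical discrete-time LIF neuron with a self-connection (all values
# invented for illustration): a single external kick at t = 10 ms is enough
# to latch the neuron into persistent firing, like the "lock" neuron above.
T, dt = 200, 1.0                     # ms
tau, v_th, v_reset = 20.0, 1.0, 0.0
w_autapse = 1.2                      # set to 0.0 and the latch disappears

I_ext = np.zeros(T)
I_ext[10] = 1.5                      # one brief external input

v, fed_back, spikes = 0.0, 0.0, []
for t in range(T):
    v += dt * (-v / tau + I_ext[t] + w_autapse * fed_back)
    fed_back = 0.0
    if v >= v_th:
        spikes.append(t)
        v = v_reset
        fed_back = 1.0               # autapse re-excites the neuron next step

print(f"{len(spikes)} spikes; persistent firing = {len(spikes) > 5}")
```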

References

1. Bacci A. and Huguenard J. Enhancement of spike-timing precision by autaptic transmission in neocortical inhibitory interneurons. Neuron. 2006, 49, 119–130.
2. Van der Loos H. and Glaser E. Autapses in neocortex cerebri: synapses between a pyramidal cell's axon and its own dendrites. Brain Research. 1972, 48, 355–360.
3. Wiles L., Shi G., Pasqualetti F., Bassett D., and Meaney D. Autaptic connections shift network excitability and bursting. Scientific Reports. 2016, 7, 1–15.
4. Yaqoob M., Steuber V., Wróbel B. The importance of self-excitation in spiking neural networks evolved to recognize temporal patterns. ICANN: Artificial Neural Networks and Machine Learning. Lecture Notes in Computer Science, Springer Cham. 2019, 11727, 758–771.

Speakers
YM

Yaqoob Muhammad

Department of Computer Science, University of Hertfordshire



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 14

9:00pm CEST

P44: 3D modeling of Purkinje cell activity
Alexey Martyushev, Erik De Schutter

The NEURON software remains the main neural physiology modeling tool for scientists. Its computational methods benefit from deterministic approximations of the cable-equation solutions and 1-dimensional radial calcium diffusion in cylindrical neuron morphologies [1]. However, in real neurons ions diffuse in 3-dimensional volumes [2] and membrane channels activate stochastically. Furthermore, NEURON is not suited to modeling nano-sized spine morphologies. In contrast, the Stochastic Engine for Pathway Simulation (STEPS) uses fully stochastic 3-dimensional methods in tetrahedral morphologies that can provide realistic modeling of neurons at the nanoscale [3, 4].
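For orientation, the deterministic starting point looks roughly like the following NEURON sketch (a generic single compartment with the built-in hh mechanism and illustrative values, not the Purkinje model of Zang et al.); the STEPS comparison replaces this with stochastic reaction-diffusion on a tetrahedral mesh:

```python
from neuron import h
h.load_file("stdrun.hoc")

# A generic deterministic single compartment (illustrative values): NEURON
# solves the cable equation on cylindrical sections with deterministic
# channel kinetics.
soma = h.Section(name="soma")
soma.L = soma.diam = 20.0          # microns
soma.insert("hh")                  # built-in Hodgkin-Huxley mechanism

stim = h.IClamp(soma(0.5))
stim.delay, stim.dur, stim.amp = 5.0, 50.0, 0.2   # ms, ms, nA

v = h.Vector().record(soma(0.5)._ref_v)
h.finitialize(-65.0)
h.continuerun(60.0)
print(f"peak somatic Vm: {max(v):.1f} mV")
```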

In this work, we compare the modeling results between these two environments for the Purkinje cell model developed by Zang et al. [5]. This model considers a variety of calcium, potassium and sodium channels, and the resulting calcium concentrations affecting the membrane potential of a Purkinje cell. The results demonstrate that: (i) the cylindrical light-microscopy morphology used cannot be identically transformed into a 3D mesh; (ii) the effect of stochastic channel activation determines the timing of membrane potential spikes; and (iii) the kinetics of calcium-activated potassium channels depend strongly on the specified sub-membrane volumes in both environments.

A further step in developing the model will be integration of a digital microscopy reconstruction of spines to the existing 3D tetrahedral mesh.

**References:**

1. Carnevale, N.T. and M.L. Hines, The NEURON Book. 2009: Cambridge University Press.

2. Anwar, H., et al., Dendritic diameters affect the spatial variability of intracellular calcium dynamics in computer models. Front Cell Neurosci, 2014. 8: p. 168.

3. Hepburn, I., et al., STEPS: efficient simulation of stochastic reaction-diffusion models in realistic morphologies. BMC Syst Biol, 2012. 6: p. 36.

4. Chen, W. and E. De Schutter, Time to Bring Single Neuron Modeling into 3D. Neuroinformatics, 2017. 15(1): p. 1-3.

5. Zang, Y., S. Dieudonne, and E. De Schutter, Voltage- and Branch-Specific Climbing Fiber Responses in Purkinje Cells. Cell Rep, 2018. 24(6): p. 1536-1549.

Speakers
AM

Alexey Martyushev

Computational Neuroscience Unit, Okinawa Institute of Science and Technology (OIST)


Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 04

9:00pm CEST

P51: Data-driven multiscale modeling with NetPyNE: new features and example models
Google Meet: https://meet.google.com/sjp-odeu-vrr
Poster Link: https://static.sched.com/hosted_files/cns2020online/25/NetPyNE_CNS_2020_poster.pdf

Joe Graham, Matteo Cantarelli, Filippo Ledda, Dario Del Piano, Facundo Rodriguez, Padraig Gleeson, Samuel A. Neymotin, Michael Hines, William W Lytton, Salvador Dura-Bernal

Neuroscience experiments generate vast amounts of data that span multiple scales: from interactions between individual molecules, to behavior of cells, to circuit activity, to waves of activity across the brain. Biophysically realistic computational modeling provides a tool to integrate and organize experimental data at multiple scales. NEURON is a leading simulator for detailed neurons and neuronal networks. However, building and simulating networks in NEURON is technically challenging, requiring users to implement custom code for many tasks. Moreover, the lack of format standardization makes it difficult to understand, reproduce, and reuse many existing models.

NetPyNE is a Python interface to NEURON that addresses these issues. It features a user-friendly, high-level declarative programming language. At the network level, for example, NetPyNE automatically generates connectivity using a concise set of user-defined specifications rather than forcing the user to explicitly define millions of cell-to-cell connections. NetPyNE enables users to generate NEURON models, run them efficiently in automatically parallelized simulations, optimize and explore network parameters through automated batch runs, and use built-in functions for a wide variety of visualizations and analyses. NetPyNE facilitates sharing by exporting and importing standardized formats (NeuroML and SONATA), and is being widely used to investigate different brain phenomena. It is also being used to teach basic neurobiology and neural modeling. NetPyNE has recently added support for CoreNEURON, the compute engine of NEURON optimized for the latest supercomputer hardware architectures.
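A minimal sketch of this declarative style, assuming the standard NetPyNE tutorial API and illustrative parameter values:

```python
from netpyne import specs, sim

netParams = specs.NetParams()

# Population: a size, not an explicit list of cells.
netParams.popParams["E"] = {"cellType": "PYR", "numCells": 40}

# Cell rule: a single-compartment HH cell applied to all PYR cells.
netParams.cellParams["PYRrule"] = {
    "conds": {"cellType": "PYR"},
    "secs": {"soma": {"geom": {"diam": 18.8, "L": 18.8},
                      "mechs": {"hh": {}}}}}

# Synaptic mechanism plus background drive.
netParams.synMechParams["exc"] = {"mod": "Exp2Syn", "tau1": 0.5, "tau2": 5.0}
netParams.stimSourceParams["bkg"] = {"type": "NetStim", "rate": 20, "noise": 0.5}
netParams.stimTargetParams["bkg->E"] = {
    "source": "bkg", "conds": {"pop": "E"},
    "weight": 0.01, "delay": 5, "synMech": "exc"}

# One probabilistic rule; NetPyNE expands it into individual connections.
netParams.connParams["E->E"] = {
    "preConds": {"pop": "E"}, "postConds": {"pop": "E"},
    "probability": 0.1, "weight": 0.005, "delay": 5, "synMech": "exc"}

simConfig = specs.SimConfig()
simConfig.duration = 500
simConfig.analysis["plotRaster"] = True

sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
```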

In order to make NetPyNE accessible to a wider range of researchers and students, including those with limited programming experience, and to encourage further collaboration between experimentalists and modelers, all its functionality is accessible via a state-of-the-art graphical user interface (GUI). From a browser window, users can intuitively define their network models, visualize and manipulate their cells and networks in 3D, run simulations, and visualize data and analyses. The GUI includes an interactive Python console which synchronizes with the underlying Python-based model.

The NetPyNE GUI (Fig. 1) is currently being improved in several ways. _Flex Layout_ is being introduced to ensure a responsive, customizable GUI layout regardless of screen size or orientation. _Redux_ is being added to the stack to ensure the complete state of the app is known at all times, minimizing bugs and improving performance. _Bokeh_ is being used to create interactive plots. Furthermore, by integrating NetPyNE with _Open Source Brain_, users will be able to create online accounts to manage different workspaces and models (create, save, share, etc.). This will allow interaction with online repositories to pull data and models into NetPyNE projects, from resources such as _ModelDB_, _NeuroMorpho_, _GitHub_, etc.

In this poster, we present the latest improvements in NetPyNE and discuss recent data-driven multiscale models utilizing NetPyNE for different brain regions, including: primary motor cortex, primary auditory cortex, and a canonical neocortex model underlying the Human Neocortical Neurosolver, a software tool for interpreting the origin of MEG/EEG data.

Acknowledgments

Supported by NIH U24EB028998, U01EB017695, DOH01-C32250GG-3450000, R01EB022903, R01MH086638, R01DC012947, and ARO W911NF1910402

Speakers
avatar for Joe Graham

Joe Graham

Research Scientist, SUNY Downstate, USA



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 11

9:00pm CEST

P55: Identifying Changes in Whole-Brain Functional Connectivity in Complex Longitudinal Clinical Trials
Sidhant Chopra, Kristina Sabaroedin, Shona Francey, Brian O'Donoghue, Vanessa Cropley, Barnaby Nelson, Jessica Graham, Lara Baldwin, Steven Tahtalian, Hok Pan Yuen, Kelly Allott, Mario Alvarez, Susy Harrigan, Christos Pantelis, Stephen Wood, Patrick McGorry, Alex Fornito

Resting-state Functional Magnetic Resonance Imaging (rs-fMRI) is increasingly being used as a secondary measure in complex clinical trials [1]. The inclusion of rs-fMRI allows researchers to investigate the impact that interventions, such as medication, can have on regional and network-level brain hemodynamics. Such trials are expensive, difficult to conduct, and often have small samples in rare clinical populations and high attrition rates. Standard neuroimaging analysis software is not usually suited to these sub-optimal design parameters. Accessible statistical tools that are robust to these conditions are much needed.

We propose an analysis workflow which combines (1) an ordinary least squares marginal model with a robust covariance estimator to account for within-subject correlation, (2) nonparametric p-value inference using a novel bootstrapping method [2], and (3) edge- and component-level FWE control using the Network Based Statistic [3]. This workflow has several advantages: it is robust to unbalanced longitudinal samples, applies a small-sample correction using heteroskedasticity-consistent standard errors, and offers simplified nonparametric inference. Additionally, this method is computationally less demanding than traditional mixed linear models and does not bias the analysis by pre-selecting regions of interest. A minimal sketch of step (1) is given below.
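The sketch fits a marginal OLS model for a single connectome edge on synthetic data, with a cluster-robust ("sandwich") covariance estimator from statsmodels; steps (2) and (3), the bootstrap and the NBS, are not reproduced here:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data for one connectome edge: 30 subjects, 3 visits each.
rng = np.random.default_rng(0)
n_subj, n_visits = 30, 3
subj = np.repeat(np.arange(n_subj), n_visits)
group = np.repeat(rng.integers(0, 2, n_subj), n_visits)       # placebo vs drug
time = np.tile(np.arange(n_visits), n_subj)                   # visit index
y = 0.2 * group * time + rng.normal(size=n_subj * n_visits)   # edge FC values

# Marginal OLS with a cluster-robust covariance, so within-subject
# correlation is absorbed by the standard errors rather than random effects.
X = sm.add_constant(np.column_stack([group, time, group * time]))
fit = sm.OLS(y, X).fit(cov_type="cluster", cov_kwds={"groups": subj})
print(fit.summary(xname=["const", "group", "time", "group_x_time"]))
```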

We apply this novel workflow to a world-first triple-blind longitudinal placebo-controlled trial in which 62 antipsychotic-naïve people aged 15 to 24 with first-episode psychosis received either an atypical antipsychotic or a placebo pill over a treatment period of 6 months. Both patient groups received intensive psychosocial therapy. A third healthy control group with no psychiatric diagnosis (n=27) was also recruited. rs-fMRI scans were acquired at baseline, 3 months and 12 months. We show that our analysis method is sufficiently sensitive to detect FWE-corrected significant components in this complex three-group (healthy control, placebo, medication) by three-time-point (baseline, 12 weeks, 52 weeks) design.

Here, we introduce an analysis workflow which is capable of detecting changes in resting-state functional networks in complex clinical trials with multiple timepoints and unbalanced groups. 

References

1. O'Donoghue, B., Francey, S. M., Nelson, B., Ratheesh, A., Allott, K., Graham, J., ... & Polari, A. (2019). Staged treatment and acceptability guidelines in early psychosis study (STAGES): A randomized placebo controlled trial of intensive psychosocial treatment plus or minus antipsychotic medication for first-episode psychosis with low-risk of self-harm or aggression. Study protocol and baseline characteristics of participants. Early Intervention in Psychiatry, 13(4), 953-960.

2. Guillaume, B., Wang, C., Poh, J., Shen, M. J., Ong, M. L., Tan, P. F., ... & Qiu, A. (2018). Improving mass-univariate analysis of neuroimaging data by modelling important unknown covariates: Application to Epigenome-Wide Association Studies. NeuroImage, 173, 57-71.

3. Zalesky, A., Fornito, A., & Bullmore, E. T. (2010). Network-based statistic: identifying differences in brain networks. NeuroImage, 53(4), 1197-1207.

Speakers
SC

Sidhant Chopra

Turner Institute for Brain and Mental Health, Monash University



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 07

9:00pm CEST

P56: Impact of Simulated Asymmetric Interregional Cortical Connectivity on the Local Field Potential
Google Meet: meet.google.com/wer-tprf-fwy


David Boothe, Alfred Yu, Kelvin Oie, Piotr Franaszczuk
Spontaneous neuronal activity, as observed using the electroencephalogram (EEG), is characterized by a non-stationary 1/f power spectrum interspersed with periods of rhythmic activity [1]. Underlying cortical neuronal activity is, by contrast, hypothesized to be sparse and arrhythmic [2]. Properties of cortical neuronal connectivity such as sparsity, small-world organization, and conduction delays have all been proposed to play a critical role in the generation of spontaneous brain activity. However, the relationship between the structure reflected in measures of global brain activity, the underlying neuronal activity, and neuronal connectivity is, at present, poorly characterized. In order to explore the role of cortical connectivity in the generation of spontaneous brain activity, we present a simulation of cerebral cortex based on the Traub model [3] implemented in the GENESIS neuronal simulation environment.

We made extensive changes to the original Traub model in order to more faithfully reproduce the spontaneous cortical activity described above. We re-tuned the original Traub parameters to eliminate intrinsic neuronal activity, and we removed the gap junctions. Tuning out intrinsic neuronal activity in the model allowed changes to the underlying connectivity to be the central factor in modifying overall model activity. The model we present consists of 16 simulated cortical regions, each containing 976 neurons (15,616 neurons total). Previously we connected simulated regions in a nearest-neighbor fashion via short-range association fibers. These association fibers originated from pyramidal cells in cortical layer 2/3 (P23s). We found that the introduction of symmetric bidirectional inter-regional connectivity was sufficient to induce both a 1/f power spectrum and oscillatory behavior in the local field potential of the underlying cortical regions in the 2 to 40 Hz range. However, we also found that sub-region activity was fairly uniform, even if these sub-region oscillations were not strongly correlated with one another. We hypothesize that introducing asymmetric inter-regional connectivity in this model may produce underlying simulated neuronal activity that is more variable in its output and more similar to the output observed in the biological system.
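The spectral check in this kind of analysis can be sketched as follows (a synthetic stand-in signal rather than the model's LFP; Welch's method, with the log-log slope measured over 2-40 Hz):

```python
import numpy as np
from scipy.signal import welch

# Synthetic LFP stand-in: a 1/f^2-like random walk plus an alpha rhythm.
fs = 1000.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
lfp = np.cumsum(rng.normal(size=t.size))
lfp += 0.5 * np.sin(2 * np.pi * 10 * t)

f, pxx = welch(lfp, fs=fs, nperseg=4096)
band = (f >= 2) & (f <= 40)
slope = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)[0]
print(f"log-log spectral slope in 2-40 Hz: {slope:.2f}")
```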

Connectivity between cortical regions in the biological brain is often asymmetric, with outputs of layer 2/3 pyramidal cells terminating in different layers and in different proportions in receiving regions of cortex [4]. Here we explore how these asymmetric connectivity schemes alter microscopic (spikes) and macroscopic (local field potential) features of our cortical simulations. We re-organized our 16 simulated cortical regions in a hierarchical fashion using feedforward and feedback connectivity patterns observed between regions of the visual system [4]. We then compare the behavior of this network to our previous simulations using nearest-neighbor and small-world-like inter-regional connectivity. We hypothesize that networks with asymmetric connectivity between regions will give richer and more heterogeneous model outputs.

[1] Le Van Quyen M, Biol Res, 2003, 36(1), 67-88.
[2] Buzsáki G, 'Rhythms of the Brain', Oxford University Press, 2006.
[3] Traub RD et al, J Neurophysiol, 2005, 93(4), 2194-232.
[4] Salin P and Bullier J, Physiological Reviews, 1995, 75(1).

Speakers
DB

David Boothe

Human Research Engineering Directorate, U.S. Army Research Laboratory


Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 12

9:00pm CEST

P57: Motor Cortex Encodes A Value Function Consistent With Reinforcement Learning
Venkata Tarigoppula, John Choi, John Hessburg, David McNiel, Brandi Marsh, Joseph Francis

Reinforcement learning (RL) theory provides a simple model that can help explain many animal behaviors. RL models have been very successful in describing the neural activity in multiple brain regions and at several spatiotemporal scales, ranging from single units up to hemodynamics, during the learning process in animals including humans. A key component of RL is the value function, which captures the expected, temporally discounted reward from a given state. A reward prediction error occurs when there is a discrepancy between the value function and actual reward, and this error is used to drive learning. The value function can also be modified by the animal's knowledge and certainty of its environment. Here we show that the bilateral primary motor cortical (M1) neural activity in non-human primates (rhesus and bonnet macaques of either sex) encodes a value function in line with temporal difference RL. M1 responds to the delivery of unpredictable reward (unconditional stimulus, US), and shifts its value-related response earlier in a trial, becoming predictive of expected reward, when reward is predictable due to the presence of an explicit cue (conditional stimulus, CS). This is observed in tasks performed manually or observed passively, and in tasks without an explicit CS but with a predictable temporal reward environment. M1 also encodes the expected reward value in a multiple-reward-level CS-US task. Here we extend the Microstimulus temporal difference RL model (MSTD), reported to accurately capture RL-related dopaminergic activity, to account for both phasic and tonic M1 reward-related neural activity in a multitude of tasks, during manual trials as well as observational trials. This information has implications for autonomously updating brain-machine interfaces.
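The core temporal-difference mechanism can be sketched in a few lines (a tabular, complete-serial-compound toy, not the MSTD model itself, which uses microstimulus basis functions): after training, value propagates backwards from the US to the CS.

```python
import numpy as np

# Tabular TD(0) over within-trial time steps: CS at t=5, reward (US) at t=20.
T, n_trials, alpha, gamma = 25, 200, 0.1, 0.98
cs_t, us_t = 5, 20
V = np.zeros(T)

for _ in range(n_trials):
    for t in range(cs_t, T - 1):
        r = 1.0 if t + 1 == us_t else 0.0
        delta = r + gamma * V[t + 1] - V[t]   # reward prediction error
        V[t] += alpha * delta

# After learning, value is high just before the US and, discounted, already
# at CS onset: the value-related response has shifted earlier in the trial.
print(f"value at CS onset: {V[cs_t]:.2f}; just before US: {V[us_t - 1]:.2f}")
```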

Speakers
VT

Venkata Tarigoppula

Biomedical Engineering, University of Melbourne


Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 06

9:00pm CEST

P59: Reverse engineering neural networks to identify their cost functions and implicit generative models
To join the video meeting, click this link: https://meet.google.com/yty-hjsf-psm

Takuya Isomura 1, Karl Friston 2
1 Brain Intelligence Theory Unit, RIKEN Center for Brain Science. 2 Wellcome Centre for Human Neuroimaging, University College London.

It is widely recognised that maximising a variational bound on model evidence – or equivalently, minimising variational free energy – provides a unified, normative formulation of inference and learning [1]. According to the complete class theorem [2], any dynamics that minimises a cost function can be viewed as performing Bayesian inference, implying that any neural network whose activity and plasticity follow the same cost function is implicitly performing Bayesian inference. However, identifying the implicit Bayesian model that corresponds to a given cost function is a more delicate problem. Here, we identify a class of biologically plausible cost functions for canonical neural networks of rate-coding neurons, where the same cost function is minimised by both neural activity and plasticity [3]. We then demonstrate that such cost functions can be cast as variational free energy under an implicit generative model in the well-known form of partially observed Markov decision processes. This equivalence means that the activity and plasticity in a canonical neural network can be understood as approximate Bayesian inference and learning, respectively. Mathematical analysis shows that the firing thresholds – which characterise the neural network cost function – correspond to prior beliefs about hidden states in the generative model. This means that the Bayes optimal encoding of hidden states is attained when the network's implicit priors match the process generating its sensory inputs. The theoretical formulation was validated using _in vitro_ neural networks comprising rat cortical cells cultured on a microelectrode array dish [4, 5]. We observed that _in vitro_ neural networks – which receive input stimuli generated from hidden sources – perform causal inference or source separation through activity-dependent plasticity. The learning process was consistent with Bayesian belief updating and the minimisation of variational free energy. Furthermore, constraints that characterise the firing thresholds were estimated from the empirical data to quantify the _in vitro_ network's prior beliefs about hidden states. These results highlight the potential utility of reverse engineering generative models to characterise the neuronal mechanisms underlying Bayesian inference and learning.
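For readers less familiar with the quantity involved, the standard textbook decomposition of variational free energy (generic notation, not specific to [3]) is:

```latex
F(q) = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
     = D_{\mathrm{KL}}\!\left[\, q(s) \,\|\, p(s \mid o) \,\right] - \ln p(o)
```

Since the KL divergence is non-negative, F upper-bounds the negative log evidence; minimising F with respect to q therefore both tightens the bound on model evidence and draws q toward the posterior over hidden states, which is why activity and plasticity that descend on such a cost function can be read as inference and learning, respectively.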

References
1. Friston, K. (2010). The free-energy principle: a unified brain theory? Nat. Rev. Neurosci. 11, 127-138.
2. Wald, A. (1947). An essentially complete class of admissible decision functions. Ann. Math. Stat. 18, 549-555.
3. Isomura, T. & Friston, K. (2020). Reverse engineering neural networks to characterise their cost functions. Neural Comput. In press. Preprint available at https://www.biorxiv.org/content/10.1101/654467v2
4. Isomura, T., Kotani, K. & Jimbo, Y. (2015). Cultured cortical neurons can perform blind source separation according to the free-energy principle. PLoS Comput. Biol. 11, e1004643.
5. Isomura, T. & Friston, K. (2018). In vitro neural networks minimise variational free energy. Sci. Rep. 8, 16926.

Speakers
avatar for Takuya Isomura

Takuya Isomura

Unit leader, RIKEN Center for Brain Science



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 17

9:00pm CEST

P62: Large Scale Discrimination between Neural Models and Experimental Data
The poster meeting virtual room:
meet.google.com/toi-hoie-pfa

Russell Jarvis, Sharon Crook, Richard Gerkin

Scientific insight is well-served by the discovery and optimization of abstract models that can reproduce experimental findings. NeuroML (NeuroML.org), a model description language for neuroscience, facilitates reproducibility and exchange of such models by providing an implementation- agnostic model description in a modular format. NeuronUnit (neuronunit.scidash.org) evaluates model accuracy by subjecting models to experimental data-driven validation tests, a formalization of the scientific method.

A neuron model that perfectly imitated real neuronal electrical behavior in response to any stimulus would not be distinguishable from experiments by any conventional physiological measurement. In order to assess whether existing neuron models approach this standard, we took 972 existing neuron models from NeuroML-DB.org and subjected them to a standard series of electrophysiological stimuli (somatic current injection waveforms). We then extracted 448 analogous stimulus-evoked recordings of real cortical neurons from the Allen Cell Types database. We applied multiple feature-extraction algorithms to the physiological responses of both model simulations and experimental recordings in order to characterize physiological behavior in great detail, spanning hundreds of features.

After applying dimensionality reduction to this very high-dimensional feature space, we show that the real (biological neuron) and simulated (model neuron) recordings are easily and fully discriminated, by eye or by any reasonable classifier. Consequently, not a single model neuron produced physiological responses that could be confused with those of a biological neuron. Was this a defect of model design (e.g., key mechanisms unaccounted for) or of model parameterization? The remaining post-optimization disagreement between models and biological neurons may reflect limitations of model design, and can be investigated by probing the key features used by classifiers to distinguish these two populations.
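The discrimination step can be sketched as follows (synthetic feature matrices standing in for the extracted features; any reasonable classifier serves):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Rows are recordings, columns are extracted physiological features
# (synthetic stand-ins with an offset between the two classes).
rng = np.random.default_rng(2)
X_model = rng.normal(0.0, 1.0, size=(972, 300))    # model-neuron features
X_bio = rng.normal(0.5, 1.2, size=(448, 300))      # biological features
X = np.vstack([X_model, X_bio])
y = np.r_[np.zeros(972), np.ones(448)]

clf = make_pipeline(StandardScaler(), PCA(n_components=20),
                    RandomForestClassifier(n_estimators=200, random_state=0))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated model-vs-biology accuracy: {acc:.2f}")
```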

Speakers
avatar for Russell Jarvis

Russell Jarvis

Neuroscience, Arizona State University
I am interested in Free and Open Source Toolchains, the application of FOS technology to neuronal data sets. I am especially interested in neuromorphic computing, GPU network models, and information theory.



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 08

9:00pm CEST

P68: Large-scale calcium imaging of spontaneous activity in larval zebrafish reveals signatures of criticality
https://uqz.zoom.us/j/91954904541
Michael McCullough, Robert Wong, Zac Pujic, Biao Sun, Geoffrey J. Goodhill
Neural networks in the brain may self-organise such that they operate near criticality, that is, poised on the boundary between phases of order and disorder [1]. Models of neural networks tuned close to criticality are optimal in terms of dynamic range, information transmission, information storage and computational adaptability [2]. Most experimental evidence for criticality in the brain has come from studies of high-resolution neural spiking data recorded from tissue cultures or anaesthetised animals using microelectrode arrays, or from studies of mesoscopic-scale neural activity using magnetic resonance imaging or electroencephalograms. These approaches are inherently limited either by under-sampling of the neural population or by coarse spatial resolution. This can be problematic for empirical studies of criticality because the characteristic dynamics of interest are theoretically scale-free.

Recently, Ponce-Alvarez et al. [3] investigated the larval zebrafish as a new model for neural criticality by utilising the unique properties of the organism that enable whole-brain imaging of neural activity in vivo and without anaesthetic. They identified hallmarks of neural criticality in larval zebrafish using 1-photon calcium imaging and voxel-based analysis of neuronal avalanches. Here we addressed two key limitations of their study by instead using 2-photon calcium imaging to observe truly spontaneous activity, and by extracting neural activity time series at single-cell resolution via state-of-the-art image segmentation [4]. Our data comprise fluorescence time series for large populations of neurons from 3-dimensional volumetric recordings of spontaneous activity in the optic tectum and cerebellum of larval zebrafish with pan-neuronal expression of GCaMP6s (n=5; approx. 10,000 neurons per fish) (Fig. 1A).

Neuronal avalanche statistics revealed power-law relationships and scale-invariant avalanche shape collapse, consistent with crackling-noise dynamics from a 3-dimensional random field Ising model [5] (Fig. 1B-C). Observed power laws were validated using shuffled surrogate data and log-likelihood ratio tests (a sketch of this style of analysis is given after the references below). This result provides the first evidence of criticality in the brain from large-scale in vivo neural activity at single-cell resolution. Our findings demonstrate the potential of larval zebrafish as a model for the investigation of critical phenomena in the context of neurodevelopmental disorders that may perturb the brain away from criticality.

References

1. Cocchi L, Gollo L L, Zalesky A, Breakspear M. Criticality in the brain: A synthesis of neurobiology, models and cognition. Prog Neurobiol. 2017, 158, 132–152.
2. Shew W L, Plenz D. The functional benefits of criticality in the cortex. Neuroscientist. 2013, 19(1), 88–100.
3. Ponce-Alvarez A, Jouary A, Privat M, et al. Whole-brain neuronal activity displays crackling noise dynamics. Neuron. 2018, 100(6), 1446–1459.
4. Giovannucci A, Friedrich J, Gunn P, et al. CaImAn an open source tool for scalable calcium imaging data analysis. Elife. 2019, 8, e38173.
5. Sethna J P, Dahmen K A, Myers C R. Crackling noise. Nature. 2001, 410(6825), 242-250.
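A sketch of the avalanche-fitting style of analysis, using the powerlaw package on a synthetic raster (illustrative thresholds; the real analysis additionally used surrogate data and shape collapse):

```python
import numpy as np
import powerlaw  # pip install powerlaw

# Avalanches: contiguous runs of time bins with any activity; the size of an
# avalanche is the total number of events across the run.
rng = np.random.default_rng(3)
raster = rng.random((10_000, 200)) < 0.005      # time bins x neurons
counts = raster.sum(axis=1)

sizes, s = [], 0
for c in counts:
    if c > 0:
        s += c
    elif s > 0:
        sizes.append(s)
        s = 0

fit = powerlaw.Fit(np.array(sizes), discrete=True)
R, p = fit.distribution_compare("power_law", "lognormal")
print(f"alpha = {fit.power_law.alpha:.2f}, "
      f"power-law vs lognormal LLR = {R:.2f} (p = {p:.3f})")
```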

Speakers
MM

Michael McCullough

Queensland Brain Institute, University of Queensland



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 04

9:00pm CEST

P6: Inference of functional connectivity in living neural networks
Sarah Marzen, Martina Lamberti, Michael Hess, Jacob Mehlman, Nadia Bolef, Denise Hernandez, Joost le Feber

In experiments with stimuli, we often wish to assess changes in connectivity between neurons as the experiment progresses. There are a number of methods for assessing connectivity, each with a variety of drawbacks, and it is not clear how these methods relate to one another. Furthermore, it is not clear how these functional connectivities relate to real synaptic connectivities. We present evidence that the functional connectivities from two disparate methods (Conditional Firing Probability analysis and Maximum Entropy analysis) and the synaptic connectivities in one dynamical model (leaky integrate-and-fire neurons) are all related.
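A minimal sketch of a CFP-style estimate (binned surrogate spike trains and an invented window length; the actual Conditional Firing Probability method estimates full latency-resolved curves):

```python
import numpy as np

# Probability that neuron j fires within `win` ms after a spike of neuron i.
rng = np.random.default_rng(4)
dt, T, n = 1.0, 60_000, 10                      # 1 ms bins, 60 s, 10 units
spikes = rng.random((n, T)) < 0.002             # ~2 Hz Poisson stand-ins
win = 10                                        # ms

def cfp(i, j, spikes, win):
    t_i = np.flatnonzero(spikes[i])
    hits = sum(spikes[j, t + 1:t + 1 + win].any() for t in t_i if t + win < T)
    return hits / max(len(t_i), 1)

C = np.array([[cfp(i, j, spikes, win) for j in range(n)] for i in range(n)])
print("CFP matrix (row = reference neuron):")
print(np.round(C, 3))
```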
To talk to me during the poster session hour, please Zoom with:
Meeting ID: 511 196 0913Password: Marzen

Speakers
SM

Sarah Marzen

Assistant Professor of Physics, Pitzer, Scripps, and Claremont McKenna College
information theory and dynamical systems, with an emphasis on prediction



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 09

9:00pm CEST

P89: Quantification of changes in motor imagery skill during brain-computer interface use
Google Meet Link: meet.google.com/mqh-qfnn-aqq 

James Bennett, David Grayden, Anthony Burkitt, Sam John

Oscillatory activity over the sensorimotor cortex, known as sensorimotor rhythms (SMRs), can be modulated by the kinaesthetic imagination of limb movement [1]. These event-related spectral perturbations can be observed in electroencephalography (EEG), offering a potential way to restore communication and control to people with severe neuromuscular conditions via a brain-computer interface (BCI). However, the ability of individuals to produce these modulations varies greatly across the population: between 10% and 30% of people are unable to modulate their SMRs sufficiently for a BCI decoder to distinguish them [2]. Despite this, it has been shown that users can be trained to improve the extent of their SMR modulations. This research used a data-driven approach to characterise the skill development of participants undertaking a left- and right-hand motor imagery experiment.

Two publicly available motor imagery EEG datasets were analysed. Dataset 1 consisted of EEG data from 47 participants performing 200 trials of left- and right-hand motor imagery within a single session [3]. No real-time visual feedback was provided to the participants. Dataset 2 contained EEG from two sessions of 200 trials each from 54 participants [4]. Visual feedback was provided to users in the second session but not in the first. Various metrics characterising mental imagery skill were calculated across time for each participant.
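One plausible instantiation of such a metric, sketched on surrogate epochs (log band power in 8-30 Hz per channel, classified with LDA per block of trials; this is illustrative, not the study's exact metric):

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
fs, n_trials, n_ch, n_samp = 250, 200, 8, 1000
X_raw = rng.normal(size=(n_trials, n_ch, n_samp))    # surrogate EEG epochs
y = rng.integers(0, 2, n_trials)                      # left/right labels

def band_power(epoch):
    f, pxx = welch(epoch, fs=fs, nperseg=256, axis=-1)
    band = (f >= 8) & (f <= 30)
    return np.log(pxx[:, band].mean(axis=-1))         # one feature per channel

X = np.array([band_power(ep) for ep in X_raw])

# Track class discriminability across blocks of 50 trials.
for block in range(4):
    sl = slice(block * 50, (block + 1) * 50)
    acc = cross_val_score(LinearDiscriminantAnalysis(),
                          X[sl], y[sl], cv=5).mean()
    print(f"block {block + 1}: accuracy = {acc:.2f}")
```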

The discriminability of EEG in the 8-30 Hz range between left- and right-hand trials was found to increase over time in both datasets. Despite the overall improvement, the change in motor imagery skill varied greatly across participants. For Dataset 1, the average change over time in the metric representing class discriminability was 6.0±21.9%. For Sessions 1 and 2 of Dataset 2, discriminability increased by 11.8±44.0% and 17.4±30.7%, respectively. Session 2 of Dataset 2 included visual feedback and produced a larger overall improvement in motor imagery skill, with lower variability, compared with Session 1.

In this work, we investigated the level of motor imagery skill acquisition during BCI use. The results indicate a baseline level of skill improvement that can be expected, and also emphasise the large variability across participants commonly seen in BCI studies. Overall, we provide a useful reference of BCI skill acquisition for future research that seeks to increase the rate of skill improvement and decrease the amount of variability.

References
1. Pfurtscheller, G., Da Silva, F.L., 1999. Event-related EEG/MEG synchronization and desynchronization: basic principles. Clinical Neurophysiology, 110(11), p. 1842-1857.
2. Allison, B.Z., Neuper, C., 2010. Could anyone use a BCI? Brain-Computer Interfaces (p. 35-54). Springer, London.
3. Cho, H., Ahn, M., Ahn, S., Kwon, M., Jun, S.C., 2017. EEG datasets for motor imagery brain–computer interface. GigaScience, 6(7), p. gix034.
4. Lee, M.H., Kwon, O.Y., Kim, Y.J., Kim, H.K., Lee, Y.E., Williamson, J., Fazli, S., Lee, S.W., 2019. EEG dataset and OpenBMI toolbox for three BCI paradigms: an investigation into BCI illiteracy. GigaScience, 8(5), p. giz002.

Speakers
avatar for James Bennett

James Bennett

PhD Candidate, Biomedical Engineering, University of Melbourne



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 02

9:00pm CEST

P92: Modelling the responses of ON and OFF retinal ganglion cells to infrared neural stimulation
James Begeng, Wei Tong, Michael R Ibbotson, Paul Stoddart, Tatiana Kameneva

Backup link (since the one below seems to be broken): https://swinburne.zoom.us/j/93169734624

Email me at jbegeng@swin.edu.au if you have difficulty connecting, or if you can't make this timeslot.

Retinal degenerative diseases such as retinitis pigmentosa and age-related macular degeneration cause progressive photoreceptor loss leading to partial or total blindness. Retinal prostheses attempt to compensate for this loss of photoreceptors by direct stimulation of the underlying retinal ganglion cell (RGC) circuitry, and are capable of restoring limited visual sensation to blind patients. Because these devices typically inject current through implanted electrode arrays, their spatial resolution is significantly limited, and their capacity for selective stimulation of distinct RGC types has not yet been established. In particular, selective stimulation of ON and OFF RGCs (which exhibit opposite light responses in vivo) constitutes a long-standing open problem in retinal prosthesis design.

Infrared neural modulation (INM) uses pulsed infrared light to deliver sharp thermal transients to neural tissue, and is capable of both neural stimulation and inhibition with high spatial precision. This technique relies on at least two distinct mechanisms: a temperature-gradient-dependent capacitive current, and thermosensitive activation of TRPV ion channels. For retinal prostheses, this high stimulus resolution offers an attractive alternative to the low resolution of current electrical prostheses; however, it is unclear how infrared-evoked currents vary between the wide variety of RGC types in the mammalian retina, or whether such differences may be harnessed for selective stimulation.

In this study, a single-compartment Hodgkin-Huxley-type model was simulated in the NEURON environment. The model included leak, sodium, potassium, calcium and low-voltage-activated calcium currents based on published data [1,2]. Thermally evoked currents were simulated by a dT/dt-dependent capacitive current based on the Gouy-Chapman-Stern (GCS) theory of bilayer capacitance [3].
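The essence of the dT/dt-dependent capacitive current can be sketched with a passive single compartment (invented numbers, leak only; the study's model additionally includes the HH-type and calcium currents):

```python
import numpy as np

# A heat pulse transiently raises membrane capacitance C(t); the current
# V * dC/dt (from d(CV)/dt = C dV/dt + V dC/dt) depolarises the cell.
dt, T = 0.01, 50.0                                    # ms
t = np.arange(0, T, dt)
C = 1.0 + 0.05 * np.exp(-((t - 20.0) / 1.0) ** 2)     # uF/cm2, pulse at 20 ms
dCdt = np.gradient(C, dt)

g_L, E_L = 0.3, -65.0                                 # mS/cm2, mV
v = np.full(t.size, E_L)
for k in range(t.size - 1):
    i_cap = dCdt[k] * v[k]                            # thermally evoked term
    dv = (-(g_L * (v[k] - E_L)) - i_cap) / C[k]
    v[k + 1] = v[k] + dt * dv

print(f"peak deflection: {v.max() - E_L:.2f} mV above rest")
```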

Our results show that INM responses differ between ON and OFF RGCs. In particular, OFF cells show a prolonged depolarisation in response to millisecond-timescale heat pulses, whilst ON cells exhibit a short depolarisation with a larger post-pulse hyperpolarisation. This difference is mainly due to the low-voltage-activated calcium current, which is present in OFF and absent in ON RGCs. This prediction is yet to be confirmed experimentally, but may have important implications for the development of infrared retinal prostheses.


Old link, do not use: https://swinburne.zoom.us/j/91916507848?pwd=L1dqaVZwNkEvd21MU2UvalREeFVUQT09 Password: 930274

References

[1] Fohlmeister, J. F., & Miller, R. F. (1997a). Impulse encoding mechanisms of ganglion cells in the tiger salamander retina. Journal of Neurophysiology, 78, 1935–1947.

[2] Wang, X.-J., Rinzel, J., & Rogawski, M. (1991). A model of the T-type calcium current and the low-threshold spike in thalamic neurons. Journal of Neurophysiology, 66, 839–850.

[3] Eom, K., Byun, K.B., Jun, S.B., Kim, S.J., Lee, J. (2018) Theoretical Study on Gold-Nanorod-Enhanced Near-Infrared Neural Stimulation. Biophysical Journal, 115, 1481–1497

Speakers
JB

James Begeng

Faculty of Science, Engineering and Technology, Swinburne University of Technology



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 05

9:00pm CEST

P93: Investigation of Stimulation Protocols in Transcutaneous Vagus Nerve Stimulation (tVNS)
Please feel free to get in touch if you have any questions or would like to discuss things further!

ckeatch@swin.edu.au

Zoom Link: https://swinburne.zoom.us/j/97069565624?pwd=dm1nQWYyYUcwVkdyNkJQaEwwa2RiQT09
Password: 575701

Charlotte Keatch, Paul Stoddart, Elisabeth Lambert, Will Woods, Tatiana Kameneva

Transcutaneous vagus nerve stimulation (tVNS) is a type of non-invasive brain stimulation that is increasingly used in the treatment of a number of health conditions, such as epilepsy and depression. Although there is a great deal of research into the medical conditions that can be improved by tVNS, there is little conclusive evidence on the optimal stimulation parameters, such as stimulation frequency, pulse type or amplitude. Understanding whether variation of these stimulation parameters can directly influence the brain response could improve treatment delivery and lead to a customised approach to therapy.

The aim of this project is to use MEG imaging to determine whether tVNS leads to a direct brain response, and whether varying the stimulation parameters of tVNS can influence the induced brain response. 

Twenty healthy participants were selected for suitability for both magnetoencephalography (MEG) and magnetic resonance imaging (MRI) according to predetermined exclusion criteria. The experimental sessions were carried out at the Swinburne Imaging Facility, Swinburne University of Technology. Four different stimulation protocols were delivered via electrical stimulation to the left ear: active stimulation of the cymba concha with 24 Hz regular pulses, sham stimulation of the ear lobe with 24 Hz regular pulses, stimulation of the cymba concha with 1 Hz regular pulses, and stimulation of the cymba concha with 24 Hz pulse-frequency-modulated (PFM) pulses (modulated at 6 Hz).

Participants' brain dynamics in response to stimulation were analysed using several signal processing techniques. First, the raw data were passed through the MaxFilter software, which uses Signal Space Separation (SSS), based on Maxwell's equations, to remove major sources of noise and artifacts. The stimulation artifact was then removed from the data by spline interpolation, which excised a segment of data from the onset of each stimulation pulse and interpolated across it to reconstruct the signal. The data were then downsampled and filtered before applying Fast Fourier Transforms (FFT) to obtain power spectra at the sensor level. Responses to the different protocols were contrasted by taking ratios for all participants, which were then averaged to assess the group response at the sensor level.
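Two of these steps can be sketched on a surrogate channel (cubic-spline interpolation across an excised artifact window, then a Welch spectrum; the MaxFilter/SSS preprocessing is proprietary and is not reproduced here):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import welch

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
sig = np.sin(2 * np.pi * 11 * t) \
    + 0.3 * np.random.default_rng(6).normal(size=t.size)
sig[5000:5020] += 50.0                          # a 20 ms stimulation artifact

bad = np.zeros(t.size, bool)
bad[4995:5025] = True                           # excise around pulse onset
spline = CubicSpline(t[~bad], sig[~bad])
clean = sig.copy()
clean[bad] = spline(t[bad])                     # reconstruct excised samples

f, pxx = welch(clean, fs=fs, nperseg=2048)
print(f"peak frequency: {f[np.argmax(pxx)]:.1f} Hz")
```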

Preliminary results suggest that the brain responds differently to different tVNS stimulation frequencies. Comparison of sham and active-site 24 Hz stimulation shows that tVNS does elicit a brain response, but the presence of a stimulation artifact on the left side of the brain (at the site of stimulation) suggests that both our method and the placement of the sham stimulation electrode could be improved. Furthermore, comparison of 1 Hz and 24 Hz stimulation shows that the brain responds differently to different stimulation frequencies, with the negative activity suggesting either a stronger response to the 24 Hz stimulation, possibly due to its larger energy content, or an inhibitory brain response elicited by the 1 Hz stimulation. Similarly, comparison between the PFM and 24 Hz regular pulses shows that the brain responds strongly to PFM in the 10-14 Hz frequency band, which suggests that the brain is responding to harmonics of the modulated carrier frequency. This supports the idea that the brain can be directly driven at different stimulation frequencies, which could lead to customised treatment approaches in the application of tVNS to different medical conditions.

Speakers
CK

Charlotte Keatch

Biomedical Engineering, Swinburne University of Technology



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 06

9:00pm CEST

P97: Multiscale simulations of ischemia and spreading depolarization with NEURON

Adam JH Newton, Craig Kelley, Michael L Hines, William W Lytton, Robert A McDougal

Meeting: https://yale.zoom.us/j/5299709870
Telephone: 646 568 7788
Meeting ID: 529 970 9870
Poster: http://adamnewton.org/CNS2020.pdf
Workshop: 
W4 S6: The NEURON Simulator

Recent improvements and performance enhancements in the reaction-diffusion module (rxd) of NEURON (neuron.yale.edu) allow us to model multiple relevant concentrations in the intracellular and extracellular space. The extracellular space is a coarse-grained macroscopic model based on a volume-averaging approach, allowing the user to specify both the free volume fraction (the proportion of space in which species are able to diffuse) and the tortuosity (the average multiplicative increase in path length due to obstacles). These tissue characteristics can be spatially dependent, to account for regional or pathological differences.
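A minimal sketch of this extracellular machinery, assuming the rxd.Extracellular API and illustrative tissue parameters:

```python
from neuron import h, rxd

# Coarse-grained extracellular space: free volume fraction and tortuosity
# are properties of the region (illustrative numbers; both could be made
# spatially dependent).
ecs = rxd.Extracellular(-50, -50, -50, 50, 50, 50, dx=10,
                        volume_fraction=0.2, tortuosity=1.6)

# Extracellular potassium diffusing through the volume-averaged tissue.
k = rxd.Species(ecs, name="k", charge=1, d=2.62, initial=3.5)

h.finitialize(-65)
h.dt = 1.0
for _ in range(100):
    h.fadvance()
print("extracellular nodes:", len(k.nodes))
```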


Using a multiscale modeling approach, we have developed a pair of models of spreading depolarization at spatial scales from microns to millimetres and time scales from milliseconds to minutes. The cellular/subcellular-scale model adapted existing mechanisms for a morphologically detailed CA1 pyramidal neuron together with a simple astrocyte model. This model included reaction-diffusion of K+, Na+, Cl− and glutamate, with detailed cytosolic and endoplasmic reticulum Ca2+ regulation. Homeostatic mechanisms were added to the model, including Na-K-ATPase pumps, Ca2+ pumps, SERCA, NKCC1, KCC2 and glutamate transporters. We used BluePyOpt to perform a parameter search, constrained by the requirement of realistic electrophysiological responses while maintaining ionic homeostasis. This detailed model was used to explore the hypothesis that individual dendrites have distinct vulnerability to damage, due to area-volume ratios leading to different intracellular Ca2+ levels.


At the tissue scale, we adapted a simpler point-neuron model and densely packed the neurons in a coarse-grained macroscopic 3D volume. This model includes a simple representation of oxygen and dynamic changes in volume fraction, allowing us to model the effect of changes in tissue diffusion characteristics on wave propagation during spreading depolarization.

Acknowledgments: Research supported by NIH grant R01MH086638

Speakers
avatar for Adam Newton

Adam Newton

SUNY Downstate Health Sciences University



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 15

10:00pm CEST

Discussion with Zhaoping Li
Open discussion with Zhaoping Li. Ask your questions that remained unanswered during the Keynote talk.

Speakers
avatar for Li Zhaoping

Li Zhaoping

Prof. and head of department, University of Tuebingen, Germany
More info: Bio: http://www.lizhaoping.org/zhaoping/bio.html | Positions in my group: http://www.lizhaoping.org/jobs.html | Publications: http://www.lizhaoping.org/zhaoping/allpaper.html | List of other video lectures: http://www.lizhaoping.org/zhaoping/VideoLectures_ByZhaoping.html


Sunday July 19, 2020 10:00pm - 11:00pm CEST
Crowdcast
  Keynote Speaker Forum
  • Moderator Steven Prescott; Anand Pathak; R. Janaki

11:00pm CEST

CNS Party
Click here to view the instructions on joining the CNS*2020 Party!

The banquet for the very first CNS meeting in 1992 was held in the Exploratorium, San Francisco’s world-famous hands-on science museum. The eventual bill for that banquet almost cost John Miller his house, but the event itself lives on in legend.
With this year’s CNS meeting being entirely on-line, we thought it would be interesting to, once again, return the banquet to a hands-on science venue, this time virtually, using Whyville.net. Whyville.net is one of the oldest and largest game-based learning spaces on the internet, launched in 1999 and currently with more than 8.5 million registered users worldwide. Whyville is a product of Numedeon Inc., a company that Jim Bower established 22 years ago specifically to explore the use of the internet and simulation-based technology for science education. Within Whyville there are more than 150 games and activities across a wide range of subjects, many involving STEM education.
More information on how to join will follow.

Moderators
Sunday July 19, 2020 11:00pm - Monday July 20, 2020 2:00am CEST
Party: Whyville!
 
Monday, July 20
 

1:00pm CEST

F3: Neuronal morphology imposes a tradeoff between stability, accuracy and efficiency of synaptic scaling
Adriano Bellotti, Saeed Aljaberi, Fulvio Forni, Timothy O'Leary

Synaptic scaling is a homeostatic normalization mechanism that preserves relative synaptic strengths by adjusting them all by a common factor. This multiplicative change is believed to be critical, since synaptic strengths are involved in learning and memory retention. Further, this homeostatic process is thought to be crucial for neuronal stability, playing a stabilizing role in otherwise runaway Hebbian plasticity [1-3]. Synaptic scaling requires a mechanism to sense total neuron activity and globally adjust synapses to achieve some activity set-point [4]. This process is relatively slow, which places limits on its ability to stabilize network activity [5]. Here we show that this slow response is inevitable in realistic neuronal morphologies. Furthermore, we reveal that global scaling can in fact be a source of instability unless responsiveness or scaling accuracy is sacrificed.


A neuron with tens of thousands of synapses must regulate its own excitability to compensate for changes in input. The time requirement for global feedback can introduce critical phase lags in a neuron’s response to perturbation. The severity of phase lag increases with neuron size. Further, a more expansive morphology worsens cell responsiveness and scaling accuracy, especially in distal regions of the neuron. Local pools of reserve receptors improve efficiency, potentiation, and scaling, but this comes at a cost. Trafficking large quantities of receptors requires time, exacerbating the phase lag and instability. Local homeostatic feedback mitigates instability, but this too comes at the cost of reducing scaling accuracy.
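The flavour of this delay-induced instability can be captured by a scalar toy controller (invented units and gains, not the paper's biophysical model): a set-point controller that sees activity only after a trafficking delay overshoots and oscillates, whereas the same controller with no delay converges.

```python
import numpy as np

# Scalar toy of delayed global scaling: the controller drives activity
# toward a set-point but only sees activity from `delay` time units ago.
dt, T, delay = 0.01, 60.0, 2.0
gain, target = 4.0, 1.0
n, lag = int(T / dt), int(delay / dt)

w = np.ones(n)            # global synaptic scaling factor (receptor count)
a = np.ones(n)            # total neuronal activity; here simply a = 1.5 * w
for k in range(n - 1):
    err = target - a[max(k - lag, 0)]          # delayed activity readout
    w[k + 1] = max(w[k] + dt * gain * err, 0)  # receptor count stays >= 0
    a[k + 1] = 1.5 * w[k + 1]

# With lag = 0 the same loop settles at a = 1; with the trafficking delay
# it overshoots and oscillates instead of converging.
tail = a[3 * n // 4:]
print(f"late activity range: {tail.min():.2f} .. {tail.max():.2f}")
```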


Realization of the phase-lag instability requires a unified model of synaptic scaling, regulation, and transport. We present such a model, with global and local feedback, in realistic neuron morphologies (Fig. 1). This combined model shows that neurons face a tradeoff between stability, accuracy, and efficiency. Global feedback is required for synaptic scaling but favors either system stability or efficiency. Large receptor pools improve scaling accuracy in large morphologies but worsen both stability and efficiency. Local feedback improves the stability-efficiency tradeoff at the cost of scaling accuracy. This project introduces previously unexplored constraints on neuron size, morphology, and synaptic scaling that are weakened by an interplay between global and local feedback.


Acknowledgements

The authors are supported by European Research Council Grant FLEXNEURO (716643), as well as by the Abu Dhabi National Oil Company, the NIH OxCam Scholars program, and the Gates Cambridge Trust.

References

1. Royer, Sébastien, and Denis Paré. "Conservation of total synaptic weight through balanced synaptic depression and potentiation." Nature 422, no. 6931 (2003): 518-522.
2. Chen, Jen-Yung, et al. "Heterosynaptic plasticity prevents runaway synaptic dynamics." Journal of Neuroscience 33, no. 40 (2013): 15915-15929.
3. Chistiakova, Marina, et al. "Homeostatic role of heterosynaptic plasticity: models and experiments." Frontiers in Computational Neuroscience 9 (2015): 89.
4. Turrigiano, Gina G. "The self-tuning neuron: synaptic scaling of excitatory synapses." Cell 135, no. 3 (2008): 422-435.
5. Zenke, Friedemann, Guillaume Hennequin, and Wulfram Gerstner. "Synaptic plasticity in neural networks needs homeostasis with a fast rate detector." PLoS Computational Biology 9, no. 11 (2013).

Speakers
AB

Adriano Bellotti

Department of Engineering, University of Cambridge


Monday July 20, 2020 1:00pm - 1:40pm CEST
Crowdcast
  Featured Talk, Neurons to Circuits
  • Moderator Annalisa Scimemi; Tatiana Kameneva

1:40pm CEST

O8: Finite element simulation of ionic electrodiffusion in cellular geometries
Ada Johanne Ellingsrud

Electrical conduction in brain tissue is commonly modeled using classical bidomain models. These models fundamentally assume that the discrete nature of brain tissue can be represented by homogenized equations in which the extracellular space, the cell membrane, and the intracellular space are continuous and exist everywhere. Consequently, they do not allow simulations highlighting the effect of a nonuniform distribution of ion channels along the cell membrane, or of the complex morphology of the cells. In this talk, we present a more accurate framework for cerebral electrodiffusion with an explicit representation of the geometry of the cell, the cell membrane and the extracellular space. To take full advantage of this framework, a numerical solution scheme capable of efficiently handling three-dimensional, complicated geometries is required. We propose a novel numerical solution scheme using a mortar finite element method, allowing for the coupling of variational problems posed over the non-overlapping intra- and extracellular domains by weakly enforcing interface conditions on the cell membrane. This solution algorithm flexibly allows for arbitrary geometries and efficient solution of the separate subproblems. Finally, we study ephaptic coupling induced in an unmyelinated axon bundle and demonstrate how the presented framework can give new insights in this setting. Simulations of 9 idealized, tightly packed axons show that inducing action potentials in one or more axons yields ephaptic currents that have a pronounced excitatory effect on neighboring axons, but fail to induce action potentials there [1].
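For reference, the flux law that electrodiffusion frameworks of this kind discretise is the standard Nernst-Planck form (generic textbook notation, not necessarily the exact formulation of [1]):

```latex
\mathbf{J}_k = -D_k \nabla c_k - \frac{D_k z_k F}{R T}\, c_k \nabla \phi,
\qquad
\frac{\partial c_k}{\partial t} = -\nabla \cdot \mathbf{J}_k
```

Here c_k, z_k and D_k are the concentration, valence and diffusion coefficient of ion species k, φ is the electric potential, F is Faraday's constant, R the gas constant and T the temperature; the drift term couples all species through the shared potential.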

This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme under grant agreement 714892 (Waterscales), and from the Research Council of Norway (BIOTEK2021 Digital Life project ‘DigiBrain’, project 248828).

[1] Ellingsrud A J, Solbrå A, Einevoll G T, et al. Finite element simulation of ionic electrodiffusion in cellular geometries. arXiv.org. 2019.

Speakers
AJ

Ada Johanne Ellingsrud

PhD student, Simula Research Laboratory


Monday July 20, 2020 1:40pm - 2:00pm CEST
Crowdcast
  Oral, Neurons to Circuits
  • Moderator Annalisa Scimemi; Tatiana Kameneva

2:00pm CEST

O9: Discovering synaptic mechanisms underlying the propagation of cortical activity: A model-driven experimental and data analysis approach
Heidi Teppola, Jugoslava Acimovic, Marja-Leena Linne

Spontaneous, synchronized activity is a well-established feature of cortical networks in vitro and in vivo. The hallmark of this activity is the repetitive emergence of bursts propagating across networks as spatio-temporal patterns. Cortical bursts are governed by excitatory and inhibitory synapses via AMPA, NMDA and GABAa receptors. Although spontaneous activity is a well-known phenomenon in developing networks, its specific underlying mechanisms in health and disease are not fully understood. In order to study the synaptic mechanisms regulating the propagation of cortical activity, it is important to combine experimental wet-lab studies with in silico modeling and to build detailed, realistic computational models of cortical network activity. Moreover, experimental studies and analysis of microelectrode array (MEA) data are not typically designed to support computational modeling. We show here how the synaptic AMPA, NMDA and GABAa receptors shape the initiation, propagation and termination of cortical burst activity in rodent networks in vitro and in silico, and we develop a model-driven data analysis workflow to support the development of spiking and biophysical network models in silico [1].

We created a model-driven data analysis workflow with multiple steps to examine the contributions of synaptic receptors to burst dynamics in both in vitro and in silico neuronal networks (Fig. 1). First, cortical networks were prepared from the forebrains of postnatal rats and maintained on MEA plates. Second, network-wide activity was recorded with the MEA technique under several pharmacological conditions of receptor antagonists. Third, multivariate data analysis was conducted in a way that supports both the neurobiological questions and the fitting and validation of computational models so that they quantitatively reproduce the experimental results (a typical burst-detection step is sketched below). Fourth, the computational models were simulated with different parameters to test putative mechanisms responsible for the network activity.
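A sketch of a simple network-burst detector of the kind used in such workflows (synthetic pooled spike times and illustrative thresholds):

```python
import numpy as np

# Bin network-wide spike counts and mark contiguous supra-threshold runs
# as candidate bursts.
rng = np.random.default_rng(7)
background = rng.uniform(0, 300, 50_000)                 # s, tonic firing
burst_onsets = rng.uniform(0, 300, 40)
burst_spikes = np.repeat(burst_onsets, 200) + rng.exponential(0.05, 40 * 200)
spike_times = np.sort(np.r_[background, burst_spikes])

bin_w = 0.025                                            # 25 ms bins
counts, _ = np.histogram(spike_times, bins=np.arange(0, 300 + bin_w, bin_w))
thr = counts.mean() + 3 * counts.std()

above = counts > thr
n_bursts = np.count_nonzero(np.diff(above.astype(int)) == 1)
print(f"detected ~{n_bursts} network bursts (threshold {thr:.1f} spikes/bin)")
```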

The experimental results obtained in this study show that AMPA receptors initiate bursts by rapidly recruiting cells whereas NMDA receptors maintain them. GABAa receptors inhibit the spiking frequency of AMPA receptor-mediated spikes at the onset of bursts and attenuate the NMDA receptor-mediated late phase. These findings highlight the importance of both excitatory and inhibitory synapses in activity propagation and demonstrate a specific interaction between AMPA and GABAa receptors for fast excitation and inhibition. In the presence of this interaction, the spatio-temporal propagation patterns of activity are richer and more diverse than in its absence. Moreover, we emphasize the systematic data analysis approach with model-driven workflow throughout the study for comparison of results obtained from multiple in vitro networks and for validation of data-driven model development in silico. A well-defined workflow can reduce the amount of biological experiments, promote more reliable and efficient use of the MEA technique, and improve the reproducibility of research. It helps reveal in detail how excitatory and inhibitory synapses shape cortical activity propagation and dynamics in rodent networks in vitro and in silico.

Reference

[1] Teppola H, Aćimović J, Linne M-L. Unique features of network bursts emerge from the complex interplay of excitatory and inhibitory receptors in rat neocortical networks. Front Cell Neurosci. 2019,13(377):1-22.

Speakers
avatar for Heidi Teppola

Heidi Teppola

Doctoral student, Faculty of Medicine and Health Technology, Tampere University



Monday July 20, 2020 2:00pm - 2:20pm CEST
Crowdcast
  Oral, Neurons to Circuits
  • Moderator Annalisa Scimemi; Tatiana Kameneva

2:20pm CEST

O10: Neural flows: estimation of wave velocities and identification of singularities in 3D+t brain data
Paula Sanz-Leon, Leonardo L Gollo, James A Roberts

**Background.** Neural activity organizes into constantly evolving spatiotemporal patterns, also known as brain waves (Roberts et al., 2019). Indeed, wave-like patterns have been observed across multiple neuroimaging modalities and across multiple spatiotemporal scales (Muller et al., 2016; Contreras et al. 1997; Destexhe et al. 1999). However, due to experimental constraints, most attention has thus far been given to localised wave dynamics in the range of micrometers to a few centimeters, rather than at the global or whole-brain scale. Existing toolboxes (Muller et al., 2016; Townsend et al., 2018) are geared particularly toward 2D spatial domains (e.g., LFPs or VSDs on structured rectangular grids). No tool exists to study spatiotemporal waves naturally unfolding in 3D+t as recorded with different non-invasive neuroimaging techniques (e.g., EEG, MEG, and fMRI). In this work, we present results obtained with our toolbox, neural flows (shown in Fig. 1).

**Methods and Results.** Our toolbox handles irregularly sampled data, such as those produced via brain network modelling (Sanz-Leon et al., 2015; Breakspear, 2017) or source-reconstructed M/EEG, and regularly sampled data, such as voxel-based fMRI. The toolbox performs the following steps: 1) estimation of neural flows (Destexhe et al. 1999; Townsend et al., 2018; Sanz-Leon et al. 2020); 2) detection of 3D singularities (i.e., points of vanishing flow); 3) classification of 3D singularities (to date, the key flow singularities detected have been sources and sinks, from where activity emerges and vanishes, respectively, but no methods or tools existed to detect 3D saddles, around which activity is redirected to other parts of the brain); 4) quantification of singularity statistics; and 5) modal decomposition of neural flow dynamics. This decomposition allows for the detection and prediction of the most common spatiotemporal patterns of activity found in empirical data.
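
As an illustration of steps 2 and 3, the following minimal Python sketch detects points of vanishing flow in one frame of a 3D velocity field and classifies them as sources, sinks or saddles from the eigenvalues of the local Jacobian. It is a generic numerical recipe run on a toy field, not the toolbox's implementation.

```python
import numpy as np

def find_singularities(vx, vy, vz, threshold=1e-2):
    """Detect and classify near-vanishing flow points in a 3D velocity
    field (one time frame). Purely illustrative."""
    speed = np.sqrt(vx**2 + vy**2 + vz**2)
    candidates = np.argwhere(speed < threshold)   # near-zero flow voxels
    results = []
    for (i, j, k) in candidates:
        if min(i, j, k) == 0 or i >= vx.shape[0] - 1 \
                or j >= vx.shape[1] - 1 or k >= vx.shape[2] - 1:
            continue                              # skip boundary voxels
        # Jacobian of the flow by central differences
        J = np.empty((3, 3))
        for a, comp in enumerate((vx, vy, vz)):
            J[a] = [(comp[i + 1, j, k] - comp[i - 1, j, k]) / 2,
                    (comp[i, j + 1, k] - comp[i, j - 1, k]) / 2,
                    (comp[i, j, k + 1] - comp[i, j, k - 1]) / 2]
        re = np.real(np.linalg.eigvals(J))
        kind = 'source' if np.all(re > 0) else 'sink' if np.all(re < 0) else 'saddle'
        results.append(((i, j, k), kind))
    return results

# toy example: a radial sink centred in a small grid
g = np.linspace(-1, 1, 11)
X, Y, Z = np.meshgrid(g, g, g, indexing='ij')
print(find_singularities(-X, -Y, -Z))   # one sink at the grid centre
```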

**Conclusions.** Representation of neural activity based on singularities (commonly known as critical points) is essentially a dimensionality reduction framework to understand large-scale brain dynamics. The distribution of singularities in physical space allows us to simplify the complex structure of flows into areas with similar dynamical behavior (e.g., fast versus slow, stagnant, laminar, or rotating). For modelling work, this compact representation allows for an intuitive and systematic understanding of the effects of various parameters on brain network dynamics, such as spatial heterogeneity, lesions and noise. For experimental work, neural flows enable a rational understanding of large-scale brain dynamics directly in anatomical space, which facilitates the interpretation and comparison of results across multiple modalities. Toolbox capabilities are presented in the accompanying figure. Watch this space for the open-source code: [https://github.com/brain-modelling-group](https://github.com/brain-modelling-group)

References

Contreras et al. 1997, J. Neurosci. 17, 1179-1196. Destexhe et al. 1999, J. Neurosci. 19(11), 4595-4608. Muller et al. 2016, eLife 5:e17267. Roberts et al. 2019, Nat. Commun. 10(1):1056. Townsend et al. 2018, PLoS Comput. Biol. 14(12):e1006643. Sanz-Leon et al. 2020, NeuroImage toolbox paper, in preparation.

Speakers
avatar for Paula Sanz-Leon

Paula Sanz-Leon

Senior Research Officer, QIMR Berghofer


Monday July 20, 2020 2:20pm - 2:40pm CEST
Crowdcast
  Oral, Neurons to Circuits
  • Moderator Annalisa Scimemi; Tatiana Kameneva

3:00pm CEST

K3: Information and Decision-Making
Daniel Polani

Neurostars discussion

In recent years it has become increasingly clear that (Shannon) information is a central resource for organisms, akin in importance to energy. Any decision that an organism or a subsystem of an organism takes involves the acquisition, selection, and processing of information, and ultimately its concentration and enaction. It is the consequences of this balance that will occupy us in this talk. This perception-action loop picture of an agent's life cycle is well established and expounded especially in the context of Fuster's sensorimotor hierarchies. Nevertheless, the information-theoretic perspective drastically expands the potential and predictive power of the perception-action loop perspective. On the one hand, information can be treated, to a significant extent, as a resource that is sought and utilized by an organism. On the other hand, unlike energy, information is not additive. The intrinsic structure and dynamics of information can be exceedingly complex and subtle; over the last two decades it has been discovered that Shannon information possesses a rich and nontrivial intrinsic structure that must be taken into account when informational contributions, information flow or causal interactions of processes are investigated, whether in the brain or in other complex processes. In addition, strong parallels between information theory and control theory have emerged. This parallelism between the theories allows one to obtain unexpected insights into the nature and properties of the perception-action loop. Through the lens of information theory, one can come up not only with novel hypotheses about necessary conditions for the organization of information processing in a brain, but also with constructive conjectures and predictions about what behaviours, brain structure and dynamics, and even evolutionary pressures one can expect to operate on biological organisms, induced purely by informational considerations.

Speakers
DP

Daniel Polani

Professor, University of Hertfordshire


Monday July 20, 2020 3:00pm - 4:00pm CEST
Crowdcast
  Keynote
  • Moderator Dieter Jaeger; Anand Pathak

4:20pm CEST

F4: Who can turn faster? Comparison of the head direction circuit of two species
Ioannis Pisokas, Stanley Heinze, Barbara Webb

Ants, bees and other insects have the ability to return to their nest or hive using a navigation strategy known as path integration. Similarly, fruit flies employ path integration to return to a previously visited food source. An important component of path integration is the ability of the insect to keep track of its heading relative to salient visual cues. A highly conserved brain region known as the central complex has been identified as being of key importance for the computations required for an insect to keep track of its heading. However, the similarities and differences of the underlying heading-tracking circuit between species are not well understood. We sought to address this shortcoming by using reverse engineering techniques to derive the effective underlying neural circuits of two evolutionarily distant species, the fruit fly and the locust. Our analysis revealed that, regardless of the anatomical differences between the two species, the essential circuit structure has not changed. Both effective neural circuits have the structural topology of a ring attractor with an eight-fold radial symmetry (Fig. 1). However, despite the strong similarities between the two ring attractors, there remain differences. Using computational modelling we found that two apparently small anatomical differences have significant functional effects on the ability of the two circuits to track fast rotational movements and to maintain a stable heading signal. In particular, the fruit fly circuit responds faster to abrupt heading changes of the animal, while the locust circuit maintains a heading signal that is more robust to inhomogeneities in cell membrane properties and synaptic weights. We suggest that the effects of these differences are consistent with the behavioural ecology of the two species. On the one hand, the faster response of the ring attractor circuit in the fruit fly accommodates the fast body saccades that fruit flies are known to perform. On the other hand, the locust is a migratory species, so its behaviour demands maintenance of a defined heading for a long period of time. Our results highlight that even seemingly small differences in the distribution of dendritic fibres can have a significant effect on the dynamics of the effective ring attractor circuit, with consequences for the behavioural capabilities of each species. These differences, emerging from morphologically distinct single neurons, highlight the importance of a comparative approach to neuroscience.
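
For readers unfamiliar with the circuit class at stake, the following minimal Python sketch simulates a generic rate-based ring attractor with eight-fold symmetry: local excitation plus broad inhibition sustains a heading "bump" after a transient cue is removed. All parameters are illustrative choices; this is not the fly or locust circuit derived in the study.

```python
import numpy as np

# Generic ring attractor: cosine connectivity (local excitation, global
# inhibition) over 8 units; a bump of activity persists after the cue.
N = 8
theta = 2 * np.pi * np.arange(N) / N
W = 2.0 * np.cos(theta[:, None] - theta[None, :]) - 0.5

rate = np.zeros(N)
dt, tau = 0.01, 0.1
cue = np.exp(np.cos(theta - np.pi))           # transient heading cue at 180 deg

for step in range(3000):
    drive = W @ rate + (cue if step < 300 else 0.0)
    target = np.tanh(np.maximum(drive, 0.0))  # rectified, saturating rates
    rate += dt / tau * (-rate + target)

# the bump persists at the cued heading long after the cue is gone
print("bump centre (deg):", round(np.degrees(theta[np.argmax(rate)])))
```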

References

1. Heinze S, Homberg U. Maplike Representation of Celestial E-Vector Orientations in the Brain of an Insect. Science. 2007, 315(5814), 995–997.
2. Kim SS, Rouault H, Druckmann S, Jayaraman V. Ring attractor dynamics in the Drosophila central brain. Science. 2017, 356(6340), 849–853.
3. Neuser K, Triphan T, Mronz M, Poeck B, Strauss R. Analysis of a spatial orientation memory in Drosophila. Nature. 2008, 453(7199), 1244–1247.
4. Pisokas I, Heinze S, Webb B. The head direction circuit of two insect species. bioRxiv. 2019.
5. Pfeiffer K, Homberg U. Organization and Functional Roles of the Central Complex in the Insect Brain. Annual Review of Entomology. 2013, 59(1), 165–184.
6. Wolff T, Rubin GM. Neuroarchitecture of the Drosophila central complex: A catalog of nodulus and asymmetrical body neurons and a revision of the protocerebral bridge catalog. Journal of Comparative Neurology. 2018, 526(16), 2585–2611.

Speakers
avatar for Ioannis Pisokas

Ioannis Pisokas

School of Informatics, University of Edinburgh


Monday July 20, 2020 4:20pm - 5:00pm CEST
Crowdcast
  Featured Talk, Circuits in Action
  • Moderator Julie Haas

5:00pm CEST

O11: Experimental and computational characterization of interval variability in the sequential activity of the Lymnaea feeding CPG
Alicia Garrido-Peña, Irene Elices, Rafael Levi, Francisco B Rodriguez, Pablo Varona

Central Pattern Generators (CPGs) generate and coordinate motor movements by producing rhythms composed of patterned sequences of activations in their constituent neurons. These rhythms are robust yet flexible, and the time intervals that build the neural sequences can adapt as a function of the behavioral context. We have recently revealed the presence of robust dynamical invariants in the form of cycle-by-cycle linear relationships between two specific intervals of the crustacean pyloric CPG sequence and the period [1]. Following the same strategy, the present work characterizes the intervals that build the rhythm and the associated sequence of the feeding CPG of the mollusk Lymnaea stagnalis. The study entails both the activity obtained in electrophysiological recordings of living neurons and the rhythm produced by a realistic conductance-based model. The analysis reported here first assesses the quantification of the variability of the intervals and then the characterization of relationships between the intervals that build the sequence and the period, which allows the identification of dynamical invariants. To induce variability in the CPG model, we use current injection ramps in individual CPG neurons, following the stimulation used in the experimental recordings in [2]. Our work extends previous analyses characterizing the Lymnaea feeding CPG rhythm from experimental recordings and from modeling studies by considering all intervals that build the sequence [3]. We report the presence of distinct variability in the sequence time intervals and the existence of dynamical invariants, which depend on the neuron being stimulated. The presence of dynamical invariants in CPG sequences, not only in the model but also in two animal species, points to the universality of this phenomenon.
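
The invariant test itself is simple to state in code: a cycle-by-cycle linear regression of an interval against the period. The Python sketch below runs it on synthetic intervals; the actual analysis of course uses the recorded and simulated CPG sequences.

```python
import numpy as np

# Illustrative check for a "dynamical invariant": a cycle-by-cycle
# linear relationship between one sequence interval and the period,
# here on synthetic data built to contain such a relationship.
rng = np.random.default_rng(0)
n_cycles = 200
period = 2.0 + 0.5 * rng.standard_normal(n_cycles)            # variable period
interval = 0.4 * period + 0.05 * rng.standard_normal(n_cycles)  # candidate invariant

slope, intercept = np.polyfit(period, interval, 1)   # least-squares fit
r = np.corrcoef(period, interval)[0, 1]
print(f"slope={slope:.2f}, intercept={intercept:.2f}, r={r:.2f}")
# a strong linear relationship (|r| close to 1) flags a dynamical invariant
```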

Acknowledgements

We acknowledge support from AEI/FEDER PGC2018-095895-B-I00 and TIN2017-84452-R.

References

1. Elices, I., Levi, R., Arroyo, D., Rodriguez, F. B., and Varona, P. (2019). Robust dynamical invariants in sequential neural activity. _Sci. Rep._ 9, 9048. doi:10.1038/s41598-019-44953-2.

2. Elliott, C. J., and Andrew, T. (1991). Temporal analysis of snail feeding rhythms: a three-phase relaxation oscillator. _J. Exp. Biol._ 157, 391–408.

3. Vavoulis, D. V., Straub, V. A., Kemenes, I., Kemenes, G., Feng, J., and Benjamin, P. R. (2007). Dynamic control of a central pattern generator circuit: a computational model of the snail feeding network. _Eur. J. Neurosci._ 25, 2805–2818. doi:10.1111/j.1460-9568.2007.05517.x.

Speakers
avatar for Alicia Garrido-Pena

Alicia Garrido-Pena

PhD student, GNB, Universidad Autónoma de Madrid



Monday July 20, 2020 5:00pm - 5:20pm CEST
Crowdcast
  Oral, Circuits in Action
  • Moderator Julie Haas

5:40pm CEST

O12: A Spatial Developmental Generative Model of Human Brain Structural Connectivity
Stuart Oldham, Ben Fulcher, Kevin Aquino, Aurina Arnatkevičiūtė, Rosita Shishegar, Alex Fornito

The human connectome has a complex topology that is thought to enable adaptive function and behaviour. Yet the mechanisms leading to the emergence of this topology are unknown. Generative models can shed light on this question, by growing networks in silico according to specific wiring rules and comparing properties of model-generated networks to those observed in empirical data [1]. Models involving trade-offs between the metabolic cost and functional value of a connection can reproduce topological features of human brain networks at a statistical level, but are less successful in replicating how certain properties, most notably hubs, are spatially embedded [2,3]. A potential reason for this limited predictive ability is that current models assume a fixed geometry based on the adult brain, ignoring the major changes in shape and size that occur early in development, when connections form.

To address this limitation, we developed a generative model that accounts for developmental changes in brain geometry, informed by structural MRIs obtained from a public database of foetal scans acquired from 21–38 weeks gestational age [4]. We manually segmented the cortical surface of each brain and registered each surface to an adult template surface using Multimodal Surface Matching [5,6]. This procedure allowed us to map nodes to consistent spatial locations through development and measure how distances between nodes (a proxy for connectome wiring cost) change through development. We evaluated the performance of classic trade-off models [2] that either assume a fixed, adult brain geometry (static), or those where cost-value trade-offs dynamically change in accordance with developmental variations in brain shape and size (growth). We used connectomes generated from 100 healthy adults with diffusion MRI to benchmark model performance. Model fit was calculated by comparing model and empirical distributions of topological properties. An optimisation procedure was used to find the optimal parameters and best-fitting models for each individual adult brain network [2]. For fair comparison of model fit across models of varying parametric complexity, we used a leave-one-out cross-validation procedure.

Spatial models (sptl), which include only distance information, produced poorer fits than those involving distance–topology trade-offs. Homophily models (matching, neighbors), where connections form between nodes with common neighbours, were among the best fitting. Growth models produced slightly better fits than static models overall. These results generally held when the cross-validation procedure was employed (Fig. 1A). Neither growth nor static models reproduced the spatial topography of network hubs, but growth models were associated with a less centralized anatomical distribution of hubs across the brain, which is more consistent with the empirical data (Fig. 1B).
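
For concreteness, the sketch below implements one growth step of such a distance-topology trade-off model in Python, using a power-law distance term and the "matching" homophily index, in the spirit of the general form in [2]. The exponents and network size are placeholders, not the fitted values.

```python
import numpy as np

# One growth step: P(i,j) ~ D[i,j]^eta * K[i,j]^gamma, where D is
# Euclidean distance and K the matching (common-neighbour) index.
rng = np.random.default_rng(1)

def matching_index(A):
    deg = A.sum(1)
    shared = A @ A                            # counts of common neighbours
    denom = deg[:, None] + deg[None, :]
    return np.divide(2 * shared, denom,
                     out=np.zeros_like(shared, dtype=float), where=denom > 0)

def add_edge(A, D, eta=-2.0, gamma=1.5):
    K = matching_index(A) + 1e-6              # keep all pairs reachable
    P = (D ** eta) * (K ** gamma)
    P[A > 0] = 0.0                            # never duplicate an edge
    iu = np.triu_indices_from(P, k=1)
    p = P[iu] / P[iu].sum()
    e = rng.choice(len(p), p=p)               # sample the next connection
    i, j = iu[0][e], iu[1][e]
    A[i, j] = A[j, i] = 1
    return A

n = 20
xyz = rng.random((n, 3))
D = np.linalg.norm(xyz[:, None] - xyz[None], axis=-1)
np.fill_diagonal(D, np.inf)                   # exclude self-connections
A = np.zeros((n, n)); A[0, 1] = A[1, 0] = 1   # seed edge
for _ in range(30):
    A = add_edge(A, D)
print("edges grown:", int(A.sum() // 2))
```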

In summary, we introduce a new framework for examining how developmental changes in brain geometry influence brain connectivity. Our results suggest that while such changes influence network topology, they are insufficient to explain how complex connectivity patterns emerge in brain networks.

References: **[1]** Betzel R, Bassett D. J. R. Soc. Interface 2017; 14. **[2]** Betzel R et al. NeuroImage 2016; 124: 1054-64. **[3]** Zhang X et al. bioRxiv 2019. **[4]** Gholipour A et al. Sci Rep 2017; 7. **[5]** Robinson E et al. NeuroImage 2014; 100: 414-26. **[6]** Robinson E et al. NeuroImage 2018; 167: 453-65.

Speakers
avatar for Stuart Oldham

Stuart Oldham

Monash University


Monday July 20, 2020 5:40pm - 6:00pm CEST
Crowdcast
  Oral, Structural and Functional Connectivity
  • Moderator Sacha van Albada; Ingo Bojak

6:00pm CEST

O13: Cortical integration and segregation explained by harmonic modes of functional connectivity
Katharina Glomb, Gustavo Deco, Morten L. Kringelbach, Patric Hagmann, Joel Pearson, Selen Atasoy

The idea that harmonic modes - basis functions of the Laplace operator - are meaningful building blocks of brain function is gaining attention [1–3]. We extracted harmonic modes from the Human Connectome Project's (HCP) dense functional connectivity (dFC), an average over 812 participants' resting-state fMRI dFC matrices. In this case, harmonic modes give rise to functional harmonics. Each functional harmonic is a connectivity gradient [4] that is associated with a different spatial frequency, and thus, functional harmonics provide a frequency-ordered, multi-scale, multi-dimensional description of cortical functional organization.
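
A minimal Python sketch of this construction, with a small random symmetric matrix standing in for the HCP dense FC, illustrates what a functional harmonic is computationally: an eigenvector of the graph Laplacian built from the connectivity, ordered by eigenvalue (spatial frequency). The sparsification threshold is an illustrative choice.

```python
import numpy as np

# Graph-Laplacian eigenmodes of a (surrogate) functional connectivity
# matrix; the real analysis uses the HCP dFC over cortical vertices.
rng = np.random.default_rng(0)
n = 200
C = rng.random((n, n)); C = (C + C.T) / 2        # surrogate dFC
np.fill_diagonal(C, 0)

thr = np.quantile(C, 0.9)                        # keep strongest connections
W = np.where(C >= thr, C, 0.0)

L = np.diag(W.sum(1)) - W                        # (unnormalised) graph Laplacian
evals, evecs = np.linalg.eigh(L)                 # ascending eigenvalues
# evecs[:, 1], evecs[:, 2], ... are the harmonics, ordered by spatial
# frequency; evecs[:, 0] is the constant mode (eigenvalue ~ 0)
print("first non-trivial eigenvalues:", np.round(evals[1:4], 3))
```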

We propose functional harmonics as an underlying principle of integration and segregation. Figure 1a shows 2 functional harmonics on the cortical surface. In harmonic 11 (ψ11), the two functional regions that correspond to the two hands are on opposite ends of the gradient (different colors on the surface) and are thus functionally segregated. In contrast, in harmonic 7 (ψ7), the two areas are on the same end of the gradient, and are thus integrated. This way, functional harmonics explain how two brain regions can be both functionally integrated and segregated, depending on the context.

Figure 1a illustrates how specialized areas emerge from the smooth gradients of functional harmonics: the two hand areas occupy well-separated regions of the space spanned by ψ7 and ψ11. Thus, functional harmonics unify two perspectives: a view where the brain is organized in discrete modules, and one in which function varies gradually [4].

The borders drawn on the cortex correspond to functional areas in the HCP's multimodal parcellation [5]. In this example, the isolines of the gradients of the functional harmonics follow the borders. We quantified how well, in general, the first 11 functional harmonics follow the borders of cortical areas by comparing the variability of the functional harmonics within and between the areas given by the HCP parcellation; i.e., we computed the silhouette value (SH), averaged over all 360 cortical areas. The SH lies between 0 and 1, where 1 means perfect correspondence between isolines and parcels. We found average SHs between 0.65 (ψ10) and 0.85 (ψ1), indicating a very good correspondence. Thus, functional harmonics capture the "modular perspective" of brain function.

On the other hand, several functional harmonics are found to capture topographic maps and thus, gradually varying function. One important example is the retinotopic organization of the visual cortex. Figure 1b shows functional harmonic 8 (ψ8) as an example in which both angular and eccentricity gradients are present [6]. Topographic organization is also found in the somatosensory/motor cortex, known as somatotopy. This is shown in Figure 1a, where several somatotopic body areas are reproduced.

Taken together, our results show that functional specialization, topographic maps, and the multi-scale, multi-dimensional nature of functional networks are captured by functional harmonics, thereby connecting these empirical observations to the general mathematical framework of harmonic eigenmodes.

References

1. S. Atasoy et al., Nature Communications 7 (2016).

2. P. A. Robinson et al., NeuroImage 142, 79–98 (2016).

3. P. Tewarie et al., NeuroImage 186, 211–220 (2019).

4. D. S. Margulies et al., Proceedings of the National Academy of Sciences 113, 12574–12579 (2016).

5. M. F. Glasser et al., Nature 536, 171–178 (2016).

6. N. C. Benson et al., Journal of Vision 18, 23–23 (2018).

Speakers
avatar for Katharina Glomb

Katharina Glomb

Department of Radiology, Centre Hospitalier Universitaire Vaudois


Monday July 20, 2020 6:00pm - 6:20pm CEST
Crowdcast
  Oral, Structural and Functional Connectivity
  • Moderator Sacha van Albada; Ingo Bojak

6:20pm CEST

O14: Reconciling emergences: An information-theoretic approach to identify causal emergence in multivariate data
Pedro Mediano, Fernando Rosas, Henrik Jensen, Anil Seth, Adam Barrett, Robin Carhart-Harris, Daniel Bor

The broad concept of emergence is instrumental in various key open scientific questions – yet, few quantitative theories of what constitutes emergent phenomena have been proposed. We introduce a formal theory of causal emergence in multivariate systems, which studies the relationship between the dynamics of parts of a system and macroscopic features of interest. Our theory provides a quantitative definition of downward causation, and introduces a complementary modality of emergent behaviour, which we refer to as causal decoupling. Moreover, we provide criteria that can be efficiently calculated in large systems, making the theory applicable in a range of practical scenarios. We illustrate our framework in a number of case studies, including Conway’s Game of Life and ECoG data from macaques during a reaching task, which suggest that the neural representation of motor behaviour may be causally decoupled from cortical activity.

Speakers
avatar for Pedro Mediano

Pedro Mediano

Post-doctoral researcher, Department of Psychology, University of Cambridge


Monday July 20, 2020 6:20pm - 6:40pm CEST
Crowdcast
  Oral, Structural and Functional Connectivity
  • Moderator Sacha van Albada; Ingo Bojak

7:00pm CEST

P104: Integrated cortical, thalamic and basal ganglia model of brain function: validation against functional requirements
Webex meeting: https://ibm.webex.com/meet/sebastien.a.naze

Sebastien Naze, James Kozloski 
Large-scale brain models encompassing cortico-cortical, thalamo-cortical and basal ganglia processing are fundamental to understanding the brain as an integrated system in healthy and disease conditions, but are complex to analyze and interpret. Neuronal processes are typically segmented by region and modality in order to explain an experimental observation at a given scale, and then integrated into a global framework (Eliasmith & Trujillo, 2014). Here, we present a set of functional requirements applied to validate the recently developed IBEx model (Kozloski, 2016) against a learning task involving coordinated activity across cortical and sub-cortical regions in a brain-computer interface (BCI) context involving volitional control of a sensory stimulus (Koralek et al., 2012). The original IBEx model comprises interacting modules for supra-granular and infra-granular cortical layers, thalamic integration, basal ganglia parallel processing and dopamine-mediated reinforcement learning. We decompose and analyze each subsystem in the context of the BCI learning task, whereby parameters are tuned to comply with its functional requirements. Intermediate conclusions are presented for each subsystem according to the constraints imposed to satisfy the requirements, before re-incorporating the subsystem in the global framework. Consequences of model modifications and parameter tuning are assessed at the scales of the subsystem and the whole brain system. The relation between infra-granular spiking activity in different cortical regions, thalamo-cortical delta rhythms and higher-level descriptions of cognitive or motor trajectories (according to the brain region) is displayed. The relation to phenotypes associated with Huntington's disease is presented, and the framework is discussed in perspective to other state-of-the-art integrative efforts to understand complex high-order brain functions (Oizumi et al., 2014; Mashour et al., 2020).

References

C. Eliasmith and O. Trujillo (2014) The use and abuse of large-scale brain models. Current Opinion in Neurobiology.

A. C. Koralek, X. Jin, J. D. Long II, R. M. Costa, and J. M. Carmena (2012) Corticostriatal plasticity is necessary for learning intentional neuroprosthetic skills. Nature.

J. Kozloski (2016) Closed-Loop Brain Model of Neocortical Information-Based Exchange. Frontiers in Neuroanatomy.

G. A. Mashour, P. Roelfsema, J.-P. Changeux, and S. Dehaene (2020) Conscious Processing and the Global Neuronal Workspace Hypothesis. Neuron.

M. Oizumi, L. Albantakis, and G. Tononi (2014) From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0. PLOS Computational Biology.

Speakers
avatar for Sébastien Naze

Sébastien Naze

Research Scientist, IBM Research
Large scale brain networks    /    Neural oscillations    /    Consciousness    /   Epilepsy    /    Huntington's disease



Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 10

7:00pm CEST

P110: Synchronization and resilience in the Kuramoto white matter network model with adaptive state-dependent delays
Daniel Park, Jeremie Lefebvre

Myelin sheaths along axons are formed by mature oligodendrocytes and play a critical part in regulating signal transmission in the nervous system. Contrary to traditional assumptions, recent experiments have revealed that myelin remodels itself in an activity-dependent way, during both developmental stages and well into adulthood in mammalian subjects. Indeed, it has been shown that myelin structure is affected by extrinsic factors such as one's social environment and intensified learning activity. As a result, axonal conduction delays continuously adjust in order to regulate the timing of neural signals propagating between different brain regions. While there is strong empirical support for such phenomena, this plasticity mechanism has yet to be extensively modeled in computational neuroscience. As a preliminary step, we incorporate adaptive myelination, in the form of state-dependent delays, into neural network models and analyze how it alters their dynamics. In particular, we ask what role myelin plasticity plays in brain synchrony, which is a fundamental element of neurological function. Brain synchrony is represented in simplified form in coupled phase-oscillator models such as the Kuramoto network model. As a prototype, we equip the Kuramoto model with a distribution of variable delays governed by a plasticity rule with phase-difference gain, which allows the delays and oscillatory phases to evolve over time with mutually dependent dynamics. We analyzed the equilibria and stability of this system and applied our results to large-dimensional networks. Our joint mathematical and numerical analysis demonstrates that plastic delays act as a stabilizing mechanism promoting the network's ability to maintain synchronous activity. At a high-dimensional network level, our work also shows that global synchronization is more resilient to perturbations and injury to the network architecture. Specifically, our numerical experiments imply that plastic delays play a positive role in improving a large-dimensional system's resilience, helping it recover synchrony after a sustained injury. Our results provide key insights about the analysis and potential significance of activity-dependent myelination in large-scale brain synchrony.
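
One plausible reading of such a model, sketched in Python below: phases follow delayed Kuramoto dynamics while each delay relaxes toward a baseline and is nudged by the phase difference it transmits. The specific plasticity rule, gains and network size here are illustrative guesses, not the analyzed system.

```python
import numpy as np

# Kuramoto network with adaptive, state-dependent delays (toy version).
rng = np.random.default_rng(0)
N, dt, T = 10, 0.01, 4000
omega = 2 * np.pi * (1.0 + 0.1 * rng.standard_normal(N))
K, eps, gain = 2.0, 0.5, 0.05
tau = rng.uniform(0.05, 0.25, size=(N, N))        # initial delays (s)
tau0 = tau.copy()                                 # baseline delays
max_lag = 100                                     # history buffer length

hist = np.zeros((max_lag, N))                     # ring buffer of past phases
theta = rng.uniform(0, 2 * np.pi, N)
for t in range(T):
    hist[t % max_lag] = theta
    lags = np.clip((tau / dt).astype(int), 1, max_lag - 1)
    delayed = hist[(t - lags) % max_lag, np.arange(N)[None, :]]
    dphi = delayed - theta[:, None]               # delayed phase differences
    theta = theta + dt * (omega + (K / N) * np.sin(dphi).sum(1))
    # delay plasticity: relax to baseline plus a phase-difference gain term
    tau = np.clip(tau + dt * (-eps * (tau - tau0) + gain * np.sin(dphi)),
                  dt, (max_lag - 1) * dt)

R = np.abs(np.exp(1j * theta).mean())             # Kuramoto order parameter
print(f"order parameter after adaptation: {R:.2f}")
```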


Zoom meeting details:

Time: Jul 20, 2020 01:00 PM Pacific Time (US and Canada)

Join Zoom Meeting: https://us04web.zoom.us/j/79681368783?pwd=ZVpFQjlHVFZ6ZHNEYkV2MzEwbTFVdz09

Speakers
DP

Daniel Park

University of Toronto



Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 18

7:00pm CEST

P11: Effect of interglomerular inhibitory networks on olfactory bulb odor representations
Zoom Link
passcode: 622405

Daniel Zavitz, Isaac Youngstrom, Matt Wachowiak, Alla Borisyuk

Lateral inhibition is a fundamental feature of circuits that process sensory information. In the mouse olfactory system, inhibitory interneurons called short axon cells initially mediate lateral inhibition between glomeruli, the functional units of early olfactory coding and processing. However, their interglomerular connectivity and its impact on odor representations is not well understood. To explore this question, we constructed a computational model of the interglomerular inhibitory network using detailed characterizations of short axon cell morphologies and simplified intraglomerular circuitry. We then examined how this network transformed glomerular patterns of odorant-evoked sensory input (taken from previously published datasets) at different values of interglomerular inhibition selectivity. We examined three connectivity schemes: selective (each glomerulus connects to few others with heterogeneous strength), nonselective (glomeruli connect to most others with heterogeneous strength) or global (glomeruli connect to all others with equal strength). We found that both selective and nonselective interglomerular networks could mediate heterogeneous patterns of inhibition across glomeruli when driven by realistic sensory input patterns, but that global inhibitory networks were unable to produce input-output transformations that matched experimental data.

We further studied networks whose interglomerular connectivity was tuned by sensory input profile. We found that this network construction improved contrast enhancement as measured by decorrelation of odor representations. These results suggest that, despite their multiglomerular innervation patterns, short axon cells are capable of mediating odorant-specific patterns of inhibition between glomeruli that could, theoretically, be tuned by experience or evolution to optimize discrimination of particular odorants.
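
The three connectivity schemes are easy to make concrete. The Python sketch below builds selective, nonselective and global weight matrices with illustrative densities and weight distributions (not the model's fitted values) and shows that only the heterogeneous schemes yield strongly glomerulus-specific inhibition for a given input pattern.

```python
import numpy as np

# Three interglomerular inhibition schemes as weight matrices over n
# glomeruli; densities and lognormal weights are illustrative choices.
rng = np.random.default_rng(0)
n = 50

def heterogeneous(density):
    W = rng.lognormal(0, 1, (n, n)) * (rng.random((n, n)) < density)
    np.fill_diagonal(W, 0)
    return W

def global_uniform(w=1.0):
    W = np.full((n, n), w)      # all targets, equal strength
    np.fill_diagonal(W, 0)
    return W

schemes = [("selective", heterogeneous(0.1)),     # few, heterogeneous targets
           ("nonselective", heterogeneous(0.8)),  # most, heterogeneous targets
           ("global", global_uniform())]

x = rng.random(n)               # one odorant-evoked input pattern
for name, W in schemes:
    inh = W @ x                 # inhibition received by each glomerulus
    print(f"{name:12s} inhibition CV across glomeruli: {inh.std() / inh.mean():.2f}")
```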

Speakers
avatar for Daniel Zavitz

Daniel Zavitz

Graduate Student, Department of Mathematics, University of Utah



Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 02

7:00pm CEST

P120: Ageing-related changes in prosocial reinforcement learning and links to psychopathic traits
Google Meet link: https://meet.google.com/uhj-xmai-eye

Jo Cutler, Marco Wittmann, Ayat Abdurahman, Luca Hargitai, Daniel Drew, Masud Husain, Patricia Lockwood

Prosocial behaviours, actions that help others, are vital for maintaining social bonds and are linked with improved health. However, our ability to learn which of our actions help others could change as we get older, as existing studies suggest declines in reinforcement learning across the lifespan [1]. This decline in associative learning could be explained by the significant age-related decrease in dopamine transmission [2], which has been suggested to code prediction errors [3]. Alternatively, prosocial learning might not only rely on learning abilities but also on the motivation to help others. This motivation, which is reduced in disorders such as psychopathy, might also shift with age, with a trend for lower levels of antisocial behaviour in older adults [4]. Interestingly, the decrease in dopamine levels in older adults could also support this hypothesis of increased prosociality, as higher dopamine has been linked to lower altruism [5].

Here, using computational modelling of a probabilistic reinforcement learning task (Fig. 1), we tested whether younger (age 18-36) and older (age 60-80, total n=152) adults can learn to gain rewards for themselves, another person (prosocial), or neither individual (control). We replicated existing work showing younger adults were faster to learn when their actions benefitted themselves, compared to when they helped others [6]. Strikingly, however, older adults showed a reduced self-bias compared to younger adults, with learning rates that did not significantly differ between self and other. In other words, older adults showed a relative increase in the willingness to learn about actions that helped others. Moreover, we find that these differences in prosocial learning could emerge from more basic changes in personality characteristics over the lifespan. In older adults, psychopathic traits were significantly reduced and correlated with the difference between prosocial and self learning rates. Importantly, the difference between self and other learning rates was most reduced in older people with the lowest psychopathic traits. Overall, we show that older adults are less self-biased than younger adults, and that this change is associated with a decline in psychopathic traits. These findings highlight the importance of examining individual differences across development and have important implications for theoretical and neurobiological accounts of healthy ageing.
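
Models of this family typically combine a prediction-error (Rescorla-Wagner) update with a softmax choice rule, with a separate learning rate per recipient. The Python sketch below illustrates how a lower "other" learning rate slows prosocial learning; all task statistics and parameter values are invented for illustration.

```python
import numpy as np

# Rescorla-Wagner learner with one learning rate per recipient condition.
rng = np.random.default_rng(0)

def simulate(alpha, beta=5.0, n_trials=120, p_good=0.75):
    Q = np.zeros(2)                                   # values of the two stimuli
    correct = 0
    for _ in range(n_trials):
        p = 1 / (1 + np.exp(-beta * (Q[0] - Q[1])))   # softmax choice of option 0
        choice = 0 if rng.random() < p else 1
        p_reward = p_good if choice == 0 else 1 - p_good
        reward = float(rng.random() < p_reward)
        Q[choice] += alpha * (reward - Q[choice])     # prediction-error update
        correct += (choice == 0)
    return correct / n_trials

# a "self-bias" corresponds to a higher learning rate for oneself
for recipient, alpha in [("self", 0.40), ("other", 0.25), ("no one", 0.20)]:
    print(f"{recipient:6s} alpha={alpha:.2f} accuracy={simulate(alpha):.2f}")
```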

References

1. Mell T, Heekeren HR, Marschner A, Wartenburger I, Villringer A, Reischies FM. Effect of aging on stimulus-reward association learning. Neuropsychologia. 2005;43(4):554-563.

2. Li S-C, Lindenberger U, Bäckman L. Dopaminergic modulation of cognition across the life span. Neurosci Biobehav Rev. 2010;34(5):625-630.

3. Schultz W. Dopamine reward prediction error coding. Dialogues Clin Neurosci. 2016;18(1):23-32.

4. Gill DJ, Crino RD. The Relationship between Psychopathy and Age in a Non-Clinical Community Convenience Sample. Psychiatry Psychol Law. 2012;19(4):547-557.

5. Crockett MJ, Siegel JZ, Kurth-Nelson Z, et al. Dissociable Effects of Serotonin and Dopamine on the Valuation of Harm in Moral Decision Making. Curr Biol. 2015;25(14):1852-1859.

6. Lockwood PL, Apps MAJ, Valton V, Viding E, Roiser JP. Neurocomputational mechanisms of prosocial learning and links to empathy. Proc Natl Acad Sci. 2016;113(35):9763-9768.

Speakers
avatar for Jo Cutler

Jo Cutler

Postdoc, Psychology, University of Oxford
My research focuses on prosocial decision making. I am interested in questions like why are people altruistic? Which situations make people more prosocial? How do these decisions change across the lifespan? To answer these questions I use tools from psychology and neuroscience.



Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 16

7:00pm CEST

P121: Avalanches and emergent activity patterns in simulated mouse primary motor cortex
Donald Doherty, Salvador Dura-Burnal, William W Lytton

Join me on Google Meet
https://meet.google.com/wgs-xtca-vrg

Avalanches display non-Poisson distributions of correlated neuronal activity that may play a significant role in signal processing. The phenomenon appears robust but mechanisms remain unknown due to an inability to gather large sample sizes, and difficulties in identifying the neuron morphology, biophysics, and connections underlying avalanche dynamics. We set out to understand the relationship between power-law activity patterns, their values, and the neural responses observed from every neuron across different layers, cell populations, and the entire cortical column, using a detailed M1 model with 15 neuron types that simulated the full depth of a 300 μm diameter column with 10,073 neurons and ~18e6 connections. Self-organized and self-sustained activity from our simulations has power-law values of -1.51 for avalanche size and -1.98 for duration distributions, which are in the range noted in both in vitro and in vivo neural avalanche preparations reported by Beggs and Plenz (2003). We applied a 0.57 nA, 100 ms stimulus across 40 μm in diameter and full column depth at each of 49 gridded locations (40 μm) across the pia surface of our 400 μm diameter cylindrical cortical column. Stimuli applied to 4 locations (8.2%) produced no sustained responses. Self-sustained activity was seen in the other 45 locations, which always included activity in IT5B or IT5B and IT6. In 6 locations activity was restricted to IT5B or IT5B/IT6 alone (avalanche size: ~-2.8). Intermittent spread of activity from IT5B/IT6 across other neuron types and layers was seen in 24 locations (avalanche size: ~-2.0). In 15 locations, frequent spread of activity to other neuron types and layers was observed (avalanche size: ~-1.5). Avalanches were defined using binned spiking activity (1 ms bins). Each avalanche was composed of adjacent bins filled with one or more action potentials, preceded and followed by at least one empty bin. A prolonged 10-minute M1 simulation with different connectivity produced 15,579 avalanches during sustained activity after the initial 100 ms stimulation. Again, IT5B/IT6 activity was constant and punctuated by more widespread activity. Three distinct patterns of activity spontaneously recurred and could be characterized by delta, beta, or gamma frequency dominance. All large-scale avalanches were composed of one or a combination of these 3 recognizable patterns. Between the large-scale avalanches we saw three patterns of activity: 1) continuous IT5B and IT6 neuron activity, 2) vigorous layer 5 and IT6 activity, or 3) vigorous layer 5 and IT6 activity that transitioned to continuous IT5B and IT6 activity. Since cortical column activity with just IT5B and IT6 activity showed little correlation (very steep and narrow distributions of avalanche sizes and durations), we hypothesize that the addition of avalanches with layer 5 and IT6 activity (activity patterns 2 and 3 above) results in more correlated activity and power-law values closer to -1.51 and -1.98 for size and duration respectively. In conclusion, the increase in correlated activity among neuronal components parallels the emergence of clearly identifiable activity patterns across time and cortical layers and may generate rhythmic activity.
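
The avalanche definition used here translates directly into code. The Python sketch below bins synthetic spikes at 1 ms, extracts runs of non-empty bins bounded by empty bins, and estimates the slope of the size distribution; the Poisson surrogate data and the quick log-log regression are illustrative, not the M1 simulation or a rigorous power-law fit.

```python
import numpy as np

# Avalanche detection from binned spiking activity (1 ms bins): an
# avalanche is a run of consecutive non-empty bins bounded by empty bins.
rng = np.random.default_rng(0)
n_neurons, n_ms = 100, 60_000
spikes = rng.random((n_neurons, n_ms)) < 0.002     # ~2 Hz Poisson surrogate
counts = spikes.sum(0)                             # population count per bin

active = (counts > 0).astype(np.int8)
edges = np.flatnonzero(np.diff(np.concatenate(([0], active, [0]))))
starts, ends = edges[::2], edges[1::2]             # run boundaries

sizes = np.array([counts[a:b].sum() for a, b in zip(starts, ends)])
durations = ends - starts                          # in ms
print(f"{len(sizes)} avalanches; mean size {sizes.mean():.1f}, "
      f"mean duration {durations.mean():.1f} ms")

# crude slope of the size distribution on log-log axes (cf. -1.51);
# a regression on binned log-counts, not a rigorous power-law fit
hist, bin_edges = np.histogram(sizes, bins=np.arange(1, sizes.max() + 2))
x = bin_edges[:-1][hist > 0]
slope = np.polyfit(np.log(x), np.log(hist[hist > 0]), 1)[0]
print(f"size-distribution slope: {slope:.2f}")
```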


Speakers
avatar for Donald Doherty

Donald Doherty

Department of Physiology and Pharmacology, SUNY Downstate Medical Center



Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 06

7:00pm CEST

P122: Adaptive activity-dependent myelination promotes synchronization in large-scale brain networks
Meeting Link: https://meet.google.com/xmb-amfr-wst
Link to Poster: https://drive.google.com/file/d/1ewTK5vBfvRCqpRjq2ufCJEZrSJdj0YhE/view?usp=sharing

Rabiya Noori, Daniel Park, John Griffiths, Sonya Bells, Paul Frankland, Donald Mabbott, Jeremie Lefebvre


Communication and oscillatory synchrony between distributed neural populations are believed to play a key role in multiple cognitive and neural functions. These interactions are mediated by long-range myelinated axonal fibre bundles, collectively termed white matter. While traditionally considered static after development, white matter properties have been shown to change in an activity-dependent way through learning and behavior: a phenomenon known as white matter plasticity. In the central nervous system this plasticity stems from oligodendroglia, which form myelin sheaths to regulate the conduction of nerve impulses across the brain, hence critically impacting neural communication. Here we shift the focus from neural to glial contributions to brain synchronization and examine the impact of adaptive, activity-dependent changes in conduction velocity on the large-scale phase synchronization of neural oscillators.

We used a network model built of reciprocally coupled Kuramoto phase oscillators whose connections are based on available primate large-scale white matter neuroanatomy data. Our computational and mathematical results show that such adaptive plasticity endows white matter networks with self-regulatory and self-organizing properties, where conduction delay statistics are autonomously adjusted to ensure efficient neural communication. Specifically, our analysis shows that adaptive conduction velocities along axonal connections stabilize oscillatory neural activity across a wide range of connectivity gains and frequency bands. The resulting conduction delays become statistically similar, promoting phase-locking irrespective of the distances. As a corollary, global phase-locked states are more resilient to diffuse decreases in connectivity, reflecting, for instance, damage caused by a neurological disease. Our work suggests that adaptive myelination may be a mechanism that provides brain networks with a means of temporal self-organization, resilience and homeostasis.
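
The delay-homogenization effect can be illustrated in a few lines of Python: if each connection's conduction velocity adapts so that its delay drifts toward a common target, delays become statistically similar regardless of tract length. The update rule and constants below are illustrative, not the paper's model.

```python
import numpy as np

# Velocity plasticity homogenising delays d = L / v across connections.
rng = np.random.default_rng(0)
n_edges = 500
L = rng.uniform(10, 150, n_edges)        # tract lengths (mm)
v = rng.uniform(2, 10, n_edges)          # initial conduction velocities (m/s)
d_init = L / v                           # note: mm / (m/s) gives ms

target, lr = 10.0, 0.05                  # target delay (ms), adaptation rate
for _ in range(500):
    d = L / v
    v += lr * v * (d - target) / target  # speed up slow links, slow fast ones
    v = np.clip(v, 0.1, 100.0)

print(f"delay spread before/after adaptation: "
      f"{d_init.std():.1f} -> {(L / v).std():.2f} ms")
```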

Speakers
RN

Rabiya Noori

Krembil Research Institute


Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 13

7:00pm CEST

P133: Recurrent neural networks trained in multisensory integration tasks reveal a diversity of selectivity and structural properties
Zoom link: https://uva-live.zoom.us/j/92846401275

  Amparo Gilhuis, Shirin Dora, Cyriel Pennartz, Jorge Mejias
The brain continuously processes sensory information from multiple modalities, giving rise to internal representations of the outside world. If and how information from multiple modalities is integrated has been investigated extensively over the past years, leading to more insight into multisensory integration (MSI) and its underlying mechanisms [1]. However, the different experimental paradigms used to investigate MSI involve different cognitive resources and situational demands. In this study, we investigated how different experimental paradigms of MSI are reflected in behavioral output and in the corresponding neural activity patterns. We did so by designing a recurrent neural network (RNN) with the biologically plausible feature of differentiating between excitatory and inhibitory units [2]. For each of the three multisensory processing tasks considered [3, 4], an RNN was optimized to perform the task with performance similar to that found in animals. Network models trained on different experimental paradigms showed significantly distinct selectivity and connectivity patterns. Selectivity for both modality and choice was found in network models trained on the paradigm that involved higher cognitive resources. Network models trained on paradigms that involve more bottom-up processing mostly exhibited choice selectivity. Increasing the level of network noise in network models that at first did not exhibit modality selectivity led to an increase in modality selectivity. We propose that a wider range of selectivity arises when a task is more demanding, either due to higher network noise (which makes the task harder for the animal) or a more difficult experimental paradigm. The wider range of selectivity is thought to improve the flexibility of the network model, which could be a necessity for the network models to achieve good performance, and the resulting neural heterogeneity could be used for more general information processing strategies [5, 6].
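
The sign constraint at the core of the framework in [2] is compact to state: recurrent weights are parameterized as relu(W_free) @ D, with D diagonal holding +1 (excitatory) and -1 (inhibitory) entries, so every unit obeys Dale's principle. The Python sketch below shows this construction; the 80/20 split, sizes and dynamics are conventional illustrative choices, not the networks trained in this study.

```python
import numpy as np

# Dale's-principle weight parameterisation for an E/I recurrent network.
rng = np.random.default_rng(0)
n, n_exc = 100, 80
D = np.diag(np.r_[np.ones(n_exc), -np.ones(n - n_exc)])
W_free = rng.standard_normal((n, n)) / np.sqrt(n)   # unconstrained parameters

W = np.maximum(W_free, 0.0) @ D                     # sign-constrained weights
assert (W[:, :n_exc] >= 0).all() and (W[:, n_exc:] <= 0).all()

# one Euler step of rate dynamics driven by two modalities (audiovisual)
W_in = np.abs(rng.standard_normal((n, 2)))
x = rng.standard_normal(n)
u = np.array([1.0, 0.5])                            # visual and auditory drive
r = np.maximum(x, 0.0)                              # ReLU firing rates
drive = W @ r + W_in @ u
x = x + 0.1 * (-x + drive)
print("mean recurrent+input drive:", round(float(drive.mean()), 3))
```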

Acknowledgements

This work was partially supported by the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 785907.

References

1) Chandrasekaran C. Computational principles and models of multisensory integration. Curr. Op. Neurobiol. 2017, 43, 25-34.

2) Song HF, Yang GR, Wang X-J. Training excitatory-inhibitory recurrent neural networks for cognitive tasks: a simple and flexible framework. PloS Comput. Biol. 2016, 12, e1004792.

3) Raposo D, Kaufman MT, Churchland AK. A category-free neural population supports evolving demands during decision-making. Nat Neurosci. 2014, 17, 1784.

4) Meijer GT, Pie JL, Dolman TL, Pennartz CMA, Lansink CS. Audiovisual integration enhances stimulus detection performance in mice. Front. Behav. Neurosci. 2018, 12, 231.

5) Mejias JF, Longtin A. Optimal heterogeneity for coding in spiking neural networks. Phys. Rev. Lett. 2012, 108, 228102.

6) Mejias JF, Longtin A. Differential effects of excitatory and inhibitory heterogeneity on the gain and asynchronous state of sparse cortical networks. Front. Comput. Neurosci. 2014, 8, 107.

Speakers
avatar for Jorge Mejias

Jorge Mejias

Swammerdam Institute for Life Sciences, University of Amsterdam


Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 07

7:00pm CEST

P137: General anesthesia reduces complexity and temporal asymmetry of the informational structures derived from neural recordings in Drosophila
Roberto Munoz, Angus Leung, Aidan Zecevik, Felix Pollock, Dror Cohen, Bruno van Swinderen, Naotsugu Tsuchiya, Kavan Modi

We apply techniques from the field of computational mechanics to evaluate the statistical complexity of neural recording data from fruit flies. First, we connect statistical complexity to the flies’ level of conscious arousal, which is manipulated by general anaesthesia (isoflurane). We show that the complexity of even single channel time series data decreases under anaesthesia. The observed difference in complexity between the two states of conscious arousal increases as higher orders of temporal correlations are taken into account. We then go on to show that, in addition to reducing complexity, anaesthesia also modulates the informational structure between the forward and reverse-time neural signals. Specifically, using three distinct notions of temporal asymmetry we show that anaesthesia reduces temporal asymmetry on information-theoretic and information-geometric grounds. In contrast to prior work, our results show that: (1) Complexity differences can emerge at very short time scales and across broad regions of the fly brain, thus heralding the macroscopic state of anaesthesia in a previously unforeseen manner, and (2) that general anaesthesia also modulates the temporal asymmetry of neural signals. Together, our results demonstrate that anaesthetised brains become both less structured and more reversible.
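
As a flavour of what a temporal-asymmetry measure can look like (not necessarily one of the three used in this study), the Python sketch below estimates the joint distribution of successive discretized values and compares it with its time reverse via a KL divergence, which vanishes for statistically reversible signals.

```python
import numpy as np

# A simple irreversibility index: KL divergence between the forward and
# time-reversed joint distributions of successive discretised values.
def irreversibility(x, n_bins=4):
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    s = np.digitize(x, edges)                 # symbols 0..n_bins-1
    P = np.zeros((n_bins, n_bins))
    for a, b in zip(s[:-1], s[1:]):
        P[a, b] += 1
    P /= P.sum()
    Pr = P.T                                  # joint of the reversed signal
    m = (P > 0) & (Pr > 0)
    return float(np.sum(P[m] * np.log(P[m] / Pr[m])))

rng = np.random.default_rng(0)
noise = rng.standard_normal(20_000)           # reversible white noise
saw = np.linspace(0, 400, 20_000) % 1.0 + 0.1 * rng.standard_normal(20_000)
print(f"white noise: {irreversibility(noise):.4f}")   # ~0
print(f"sawtooth:    {irreversibility(saw):.4f}")     # clearly > 0
```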

Read about our study in detail here: https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.2.023219
Zoom Link: https://monash.zoom.us/j/8216197893

Speakers
avatar for Roberto Munoz

Roberto Munoz

School of Physics and Astronomy, Monash University



Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 15

7:00pm CEST

P139: Stereotyped population dynamics in the medial entorhinal cortex

Join Zoom Meeting; https://NTNU.zoom.us/j/68463351705?pwd=UnN6N3lTVy9yRC9uMUhWcnlUSmREUT09
Meeting ID: 684 6335 1705
Passcode: 476100

Soledad Gonzalo Cogno, Flavio Donato, Horst A. Obenhaus, R. Irene Jacobsen, May-Britt Moser, Edvard I. Moser

The medial entorhinal cortex (MEC) supports the brain's representation of space with distinct cell types (grid, border, object-vector, head-direction and speed cells). Since no single sensory stimulus can faithfully predict the firing of these cells, attractor network models postulate that spatially-tuned firing emerges from specific connectivity motifs. To determine how those motifs constrain the self-organized activity in the MEC, we tested mice in a spontaneous locomotion task under sensory-deprived conditions, when activity is likely determined by the intrinsic structure of the network. Using 2-photon calcium imaging, we monitored the activity of large populations of MEC neurons in mice running on a wheel in darkness.

To reveal network dynamics we applied dimensionality reduction techniques to the spike matrix. This way we unveiled the presence of motifs that involve the sequential activation of neurons ("waves"). Waves lasted from tens of seconds to minutes, swept through the entire network of active cells and did not exhibit any anatomical organization. Waves were not found in spike-time-shuffled data. Furthermore, waves did not map the position of the mouse on the wheel and were not restricted to running epochs. Single neurons exhibited a wide range of degrees of locking to the waves, indicating that the observed dynamics is a population effect rather than a single-cell phenomenon. Overall, our results suggest that a large fraction of MEC-L2 neurons participates in common global dynamics that often take the form of stereotyped waves. These activity patterns might couple the activity of neurons with distinct tuning characteristics in MEC.
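
One simple way to expose such sequential structure, not necessarily the authors' pipeline: reduce the neuron-by-time matrix with PCA and sort neurons by their phase in the plane of the first two components. The Python sketch below recovers an embedded wave from synthetic spikes.

```python
import numpy as np

# PCA-based recovery of a sequential "wave" ordering from a spike matrix.
rng = np.random.default_rng(0)
n_neurons, n_t = 150, 3000
t = np.arange(n_t)
pref = rng.uniform(0, 2 * np.pi, n_neurons)       # each cell's slot in the wave
p = 0.05 * (1 + np.cos(2 * np.pi * t[None, :] / 1000 - pref[:, None]))
spikes = (rng.random((n_neurons, n_t)) < p).astype(float)

X = spikes - spikes.mean(1, keepdims=True)        # centre each neuron
U, S, Vt = np.linalg.svd(X, full_matrices=False)  # PCA via SVD
phase = np.arctan2(U[:, 1], U[:, 0])              # neuron phase, PC1-PC2 plane

# with a genuine wave, the recovered ordering matches the embedded one
# (up to a circular rotation and a possible reflection)
align = max(abs(np.exp(1j * (phase - pref)).mean()),
            abs(np.exp(1j * (phase + pref)).mean()))
print(f"circular alignment with embedded sequence: {align:.2f}")
```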

Speakers
avatar for Soledad Gonzalo Cogno

Soledad Gonzalo Cogno

Postdoctoral fellow, Kavli Institute for Systems Neuroscience and Centre for Neural Computation, NTNU



Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 14

7:00pm CEST

P13: Organization of connectivity between areas in the monkey frontoparietal network
Join via Google Meet at scheduled date/time: meet.google.com/qzb-siye-zki

preprint: https://www.biorxiv.org/content/10.1101/2020.06.30.178244v1

Author: Bryan Conklin

Anatomical projections between cortical areas are known to condition the set of observable functional activity in a neural network. The large-scale cortical monkey frontoparietal network (FPN) has been shown to support complex cognitive functions. However, the organization of anatomical connectivity between areas in the FPN supporting such behavior is unknown. To identify the connections in this network, over 40 tract-tracing studies were collated according to the Petrides & Pandya (2007) parcellation scheme, which provides a higher-resolution map of the areas making up the FPN than other schemes. To understand how this structural profile can give rise to cognitive functions, a graph theoretic investigation was conducted in which the FPN's degree distribution, structural motifs and small-worldness were analyzed. We present a new connectivity matrix detailing the anatomical connections between all frontal and parietal areas of the parcellation scheme. First, this matrix was found to have in- and out-degree distributions that did not follow a power law. Instead they were each best approximated by a Gaussian distribution, signifying that the connectivity of each area in the FPN is relatively similar and that the network does not rely on hubs. Second, the dynamical relay motif, M9, was found to be overrepresented in the FPN. This 3-node motif is the optimal arrangement for near-zero and non-zero phase synchrony to propagate through the network. Finally, the FPN was found to utilize a small-world architecture. This allows for simultaneous integration and specialization of function. Important aspects of cognition such as attention and working memory have been shown to require both integration and specialization in order to function properly using near-zero and non-zero phase synchrony. Further, they benefit from the reliability afforded by the FPN's homogeneous connectivity profile, which acts as a substrate resilient to targeted structural insult but vulnerable to a random attack. This suggests that diseases which impair cognitive function supported by the FPN may owe their effectiveness to a random attack strategy. These findings provide a candidate topological mechanism, in the M9 dynamical relay motif, for the synchrony observed during complex cognitive functions. The results also serve as a benchmark to be used in the network-level treatment of neurological disorders such as Alzheimer's or Parkinson's disease, where the types of cognition the FPN supports are impaired. Finally, they can inform future neuromorphic circuit designs which aim to perform certain aspects of cognition.
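
Two of the reported checks, degree statistics and small-worldness, are standard to reproduce with networkx; the Python sketch below runs them on a Watts-Strogatz surrogate standing in for the FPN matrix (which is available in the preprint). nx.sigma compares clustering and path length against randomized references, with sigma > 1 taken as small-world.

```python
import numpy as np
import networkx as nx

# Degree statistics and a small-world coefficient on a surrogate graph.
G = nx.watts_strogatz_graph(30, k=8, p=0.2, seed=0)

deg = np.array([d for _, d in G.degree()])
print(f"degree mean {deg.mean():.1f}, std {deg.std():.1f}")   # narrow, hub-free

sigma = nx.sigma(G, niter=20, nrand=5, seed=0)                # may take a while
print(f"small-world sigma: {sigma:.2f}")
```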

References

1. Petrides, M. & Pandya, D. N. Efferent Association Pathways from the Rostral Prefrontal Cortex in the Macaque Monkey. J. Neurosci. 27, 11573–11586 (2007).

Speakers
avatar for Bryan Conklin

Bryan Conklin

Ph.D. Candidate, Center for Complex Systems & Brain Science, Florida Atlantic University
I am a Ph.D. candidate in Steven L Bressler's Cognitive Neurodynamics Lab. My research focuses on characterizing large scale cognitive brain networks in the monkey using time-frequency and graph theoretic analyses.



Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 19

7:00pm CEST

P155: A model for unsupervised object categorization in infants
Zoom link

602 356 6034
https://us02web.zoom.us/j/6023566034?pwd=M08wb2xneXRoVUgvTWFQSmpiVG52dz09



Sunho Lee, Youngjin Park, Se-Bum Paik

Both the brain and recent deep neural networks (DNNs) can successfully perform visual object recognition at similar levels. However, to acquire this function, DNNs generally require extensive training with a huge number of labeled images, whereas the brain does not appear to need such artificially labeled images to learn. Moreover, human infants, who certainly never experienced such training, are still able to classify unfamiliar object categories [1]. The mechanism by which the immature brain can categorize visual objects without any supervisory feedback remains elusive. Here, we suggest a biologically plausible circuit model that can correctly categorize natural images without any supervision. Instead of the supervised signals that are believed to be essential to train such systems, we focused on the temporal continuity of the natural scene. Natural visual stimuli to which infants are repeatedly exposed have temporal continuity [2], unlike the datasets of images used to train artificial DNNs. In this regard, to detect discontinuity in a natural scene, which is potentially equivalent to the border between image clusters of the same object, we designed a "differential unit" (Fig. 1, DU). The DU estimates the difference between the current input and a delayed copy of the input from a few seconds earlier, and thereby can detect the temporal difference of the visual input in real time. In addition to the DU, to memorize the representation of visual objects, we also designed a "readout network" (Fig. 1, k-Winners-Take-All network and readout), which is linked to the filtered pool5 units of a randomized AlexNet. The randomized AlexNet corresponds to the early visual pathway of infants and functions as an image abstractor; its weights are randomly initialized and fixed. The connection weights between the readout and pool5 units can be updated by Hebbian plasticity, but because the DU continuously inhibits the readout, this plasticity is initially blocked. However, when the temporal difference of the response falls below a certain threshold (meaning that the same object was consistently detected), the DU stops the inhibition, and connections between the ensemble of pool5 units highly activated for that object and the readout are strengthened. During the test session, we can identify the category of a given test image by simply choosing the readout that shows the highest response. To validate the model performance, we made a sequence of images by sorting the CIFAR-10 dataset by category, mimicking the temporal continuity of the natural scene. The model was trained on the designed image sequence and tested on a separate validation set. As a result, we achieved 35% classification accuracy, significantly higher than the chance level of 10%. Based on the present findings, we suggest a biologically plausible mechanism of object categorization with no supervision, and we believe that our model can explain how visual function arises in the early stages of the brain without supervised learning.
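
The gating logic of the DU fits in a short Python sketch: while the temporal difference between current and delayed feature vectors is large, learning is inhibited; once it falls below threshold, the winning readout receives a Hebbian update. Feature vectors here stand in for filtered pool5 activity, and all sizes and constants are illustrative assumptions.

```python
import numpy as np

# Differential-unit (DU) gated Hebbian learning, toy version.
rng = np.random.default_rng(0)
n_feat, n_readout, delay, thresh, lr = 256, 10, 5, 3.0, 0.1
W = np.zeros((n_readout, n_feat))
history = []                                  # short feature history for the DU

def step(feat):
    history.append(feat)
    if len(history) <= delay:
        return False
    diff = np.linalg.norm(feat - history[-1 - delay])  # DU: temporal difference
    if diff >= thresh:
        return False                                   # DU inhibits the readout
    k = np.argmax(W @ feat + 1e-3 * rng.standard_normal(n_readout))
    W[k] += lr * feat                                  # Hebbian update of winner
    W[k] /= np.linalg.norm(W[k]) + 1e-9
    return True

# a "scene" showing one object for a while, then an abrupt switch
obj_a, obj_b = rng.standard_normal(n_feat), rng.standard_normal(n_feat)
enabled = 0
for frame in range(40):
    base = obj_a if frame < 20 else obj_b
    enabled += step(base + 0.05 * rng.standard_normal(n_feat))
print("frames with learning enabled:", enabled)   # blocked around the switch
```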

Acknowledgments

This work was supported by National Research Foundation of Korea (No. 2019M3E5D2A01058328, 2019R1A2C4069863)

References

[1] D. H. Rakison and Y. Yermolayeva, "Infant categorization," _Wiley Interdiscip. Rev. Cogn. Sci._, vol. 1, no. 6, pp. 894–905, 2010.
[2] L. Wiskott and T. J. Sejnowski, "Slow feature analysis: Unsupervised learning of invariances," _Neural Comput._, vol. 14, no. 4, pp. 715–770, 2002.

Speakers
YP

Youngjin Park

Department of Bio and Brain Engineering, KAIST



Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 09

7:00pm CEST

P158: Controllability for nonlinear dynamic brain systems under biological constraints

Zoom ID: 838 7243 1036
Password: cns2020
https://us02web.zoom.us/j/83872431036?pwd=K280U0RXZHIyY1dURUJQS2QyeVJZZz09

Jiyoung Kang, Hae-Jeong Park

The controllability of the brain system has been studied in the domain of network science. However, most studies on the controllability of the brain have not considered its nonlinear nature. In the present study, we suggest a computational framework to control the brain system while taking nonlinear brain dynamics into account. Our framework is based on the hypothesis that a brain with a disease has specific brain dynamics, different from those of a normal brain, which can be analyzed using an energy landscape analysis. For both normal and abnormal brain systems, multistable activation states (attractors) and transition rates were investigated by performing an energy landscape analysis based on a pairwise maximum entropy model. In the current virtual framework, we simulated how the dynamics of a diseased brain can be changed to those of the normal brain by external treatments under biological constraints. By doing this, we tried to find a strategy for optimal treatments that control the target brain to generate brain state dynamics similar to those of the healthy brain. We assumed that the target brain changes not only at a treated region or treated connectivity, but also in the neighbors with which the treated region interacts. By allowing changes in the neighborhood in response to the treatment of a target region, we showed an optimal controllability that takes into account the nonlinear responses of the brain after treatment. We expect that this computational framework for controllability will help treatment planning for the nonlinear brain system, after empirical evaluation and validation.
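
The core objects of such an analysis are compact to write down. The Python sketch below evaluates the pairwise maximum entropy (Ising-type) energy E(s) = -sum_i h_i s_i - sum_{i<j} J_ij s_i s_j over all binary patterns of a small system and lists its local minima (attractors); h and J are random placeholders, whereas in practice they are fitted to binarized regional activity.

```python
import numpy as np
from itertools import product

# Energy landscape of a pairwise maximum entropy model over s in {-1,+1}^n.
rng = np.random.default_rng(0)
n = 8                                    # regions (2^8 states, exhaustive)
h = 0.1 * rng.standard_normal(n)
J = 0.3 * rng.standard_normal((n, n)); J = (J + J.T) / 2
np.fill_diagonal(J, 0)

def energy(s):
    return -h @ s - 0.5 * s @ J @ s

states = np.array(list(product([-1, 1], repeat=n)))
E = np.array([energy(s) for s in states])

def is_local_min(idx):
    s = states[idx]
    for i in range(n):                   # compare against all one-flip neighbours
        t = s.copy(); t[i] *= -1
        if energy(t) < E[idx]:
            return False
    return True

minima = [i for i in range(len(states)) if is_local_min(i)]
print(f"{len(minima)} attractor states; lowest energy {E[minima].min():.2f}")
```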

Acknowledgments

This research was supported by Brain Research Program and the Korea Research Fellowship Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2017M3C7A1049051 and NRF-2017H1D3A1A01053094).

 

Speakers

Jiyoung Kang

Center for Systems and Translational Brain Sciences, Yonsei University



Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 04

7:00pm CEST

P163: Biophysics and dynamics shape the cross-correlation properties of monosynaptic connections

Rodrigo Pena will be leading the discussion of this poster. 

Join us at: 

https://zoom.us/j/4935100819?pwd=Y0lIZ3FZVzk3SGwyeVlmZ1NJTUZ3UT09

Meeting ID: 493 510 0819

Password: 771843


Abstract:
Finely-timed spike relationships provide knowledge of putative monosynaptic connections in populations of neurons. Recent experiments involving hippocampal in vivo recordings were able to demonstrate such relationships by means of the cross-correlation function (CCF) [1,2]. A sharp peak within a few milliseconds in the CCF indicates the presence of a connection. Yet, neurons that are not monosynaptically connected can emit spikes within some short temporal distance as a result of network co-modulation [3], usually in the form of background noise. In general, there is agreement that CCFs are shaped by the connectivity, the synaptic properties, and the background activity [4]. However, it remains unclear whether and how postsynaptic intrinsic neuronal properties, such as the ionic currents’ nonlinearities and time constants, shape the CCFs between pre- and postsynaptic neurons. The presence of presynaptic-dependent postsynaptic signatures may serve to differentiate between correlation and causation.

We address these issues by combining biophysical modeling, numerical simulations, and dynamical systems tools. We extend the framework developed in [5] to describe an ultra-precise monosynaptic connection by including ionic currents with representative dynamics. The model consists of two neurons receiving uncorrelated noise, where the presynaptic neuron sends a fixed number of synaptic events to the postsynaptic neuron. The CCF is computed as an average over a number of trials. We consider a number of scenarios corresponding to different levels of the ionic currents, their nonlinearities, and their effective time constants.
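A schematic Python sketch of the trial-averaged CCF computation (not the biophysical model itself; the surrogate Poisson trains, the 2 ms synaptic delay, and the 0.3 transmission probability are assumptions for illustration).

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T, n_trials = 0.001, 10.0, 50          # 1 ms bins, 10 s trials
nbins = int(T / dt)
lags = np.arange(-50, 51)                  # +/- 50 ms of lags

ccf = np.zeros(lags.size)
for _ in range(n_trials):
    pre = (rng.random(nbins) < 5 * dt).astype(float)    # ~5 Hz presynaptic train
    post = (rng.random(nbins) < 5 * dt).astype(float)   # background postsynaptic spikes
    # each presynaptic spike evokes a postsynaptic spike 2 ms later with p = 0.3
    evoked = np.roll(pre, 2) * (rng.random(nbins) < 0.3)
    post = np.clip(post + evoked, 0, 1)
    for k, lag in enumerate(lags):
        ccf[k] += np.sum(pre * np.roll(post, -lag))     # coincidence count at this lag
ccf /= n_trials

print("peak lag (ms):", lags[np.argmax(ccf)])           # expect a sharp peak near +2 ms
```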

Our results show the emergence of an additional, slower and wider temporal relationship after the sharp peak in the CCF. This relationship depends on the dynamic properties present in the postsynaptic neuron model (ionic currents) in the subthreshold regime. Upon a synaptic event, if the neuron is not on the verge of a spike, its voltage will rise with a time course that depends particularly on the effective time constant, and this is reflected in the CCF. This temporal relationship may not be clearly observed in experiments owing to the low signal-to-noise ratio, and it does not capture external modulation effects. We explain this effect using a phase-plane description in which we capture the spike-initiation nonlinearity in terms of nullclines and connect it to the CCF.

We expect that these results will help the identification of monosynaptic connections between different neuron types, in particular, those connections among neurons from different classes.

Funding Acknowledgment

This work was supported by the National Science Foundation grant DMS-1608077 (HGR). 

References

[1] English, D. F., McKenzie, S., Evans, T., Kim, K., Yoon, E., and Buzsáki, G. (2017). Pyramidal cell-interneuron circuit architecture and dynamics in hippocampal networks. Neuron 96, 505-520.

[2] Constantinidis, C., and Goldman-Rakic, P.S. (2002). Correlated discharges among putative pyramidal neurons and interneurons in the primate prefrontal cortex. J. Neurophysiol. 88, 3487–3497.

[3] Yu, J., and Ferster, D. (2013). Functional coupling from simple to complex cells in the visually driven cortical circuit. J. Neurosci., 33, 18855-18866.

[4] Ostojic, S., Brunel, N., and Hakim, V. (2009). How connectivity, background activity, and synaptic properties shape the cross-correlation between spike trains. J. Neurosci. 29, 10234-10253.

[5] Platkiewicz, J., Saccomano, Z., McKenzie, S., English, D., and Amarasingham, A. (2019). Monosynaptic inference via finely-timed spikes. arXiv preprint arXiv:1909.08553.



Speakers

Horacio Rotstein

Federated Department of Biological Sciences, NJIT / Rutgers University, New Jersey Institute of Technology



Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 21

7:00pm CEST

P164: Using entropy to compute phase transitions of large networks of neurons
Wei Qin, Andre Peterson
Zoom info: https://unimelb.zoom.us/j/8312917769?pwd=RzJuWUNGNUg3YUI4T3p4bXFHaVdudz09   
Password: 826387 
Or join by phone:     Dial (Australia): +61 3 7018 2005 or +61 2 8015 6011    Dial (US): +1 669 900 6833 or +1 646 876 9923    Dial (Hong Kong, China): +852 5808 6088 or +852 5803 3730    Dial (UK): +44 203 481 5240 or +44 131 460 1196   
Meeting ID: 831 291 7769    
International numbers available: https://unimelb.zoom.us/u/adDa1aiD6C 

Phase transitions are often used to describe pathological brain state transitions observed in neurological diseases such as epilepsy. Typically, the dynamics of nonlinearly coupled neurons with complex network structure are studied via large-scale numerical simulations, because such systems are mathematically intractable; otherwise, analysis is performed with the network structure averaged over and made spatially homogeneous. For a networked nonlinear dynamical system, phase transitions or bifurcations are computed via changes in the local stability around the fixed points. However, in such a system it is very difficult to compute the fixed points as the dimensionality becomes large, owing to nested nonlinearities. We know from numerical simulations that the system becomes 'chaotic' [1] as the order parameter (the variance of the connectivity matrix) is increased, and that microscopically this phase transition corresponds to an exponential increase in the number of fixed points [3]. This phase transition has also been computed for heterogeneous network structures, such as those obeying Dale's law [2]. However, it is very difficult to verify these results numerically. To quantify the change in network dynamics, we compute the entropy, a quantity that describes the number of states, or the information content, of a system. We show in this paper that the network entropy (NE), a quantity derived from the Shannon entropy, can be used as a numerical indicator of a change in the number of equilibria. Hence, it is also a numerical method for estimating the change in stability of a network. It is developed via a symbolic-dynamics approach based on probability distributions of the system state, which provides a measure of the number of states of the system.


In this paper, a first-order neural model with a single time constant and instantaneous synapses is networked. The network connectivity is described by a random matrix with given mean and variance; Dale's law can be incorporated into the model by modifying the connectivity matrix. We estimate the stability of a network by measuring the entropy of the network states in numerical simulations with different realisations of the connectivity matrix. The results demonstrate that the analytically predicted transition points for each case coincide with changes in the measured NE. This suggests that the NE can be used in numerical simulations to estimate changes in the number of fixed points, i.e., the phase transitions. This work provides a novel approach to estimating network states and phase transitions via numerical simulations. Future work is needed to uncover the mathematical relationship between the fixed points and the entropy. Furthermore, it would be interesting to use entropy to predict the dynamical behaviour of a system at an early stage; such a capability could be used to understand brain state transitions and to aid the early diagnosis of neurological diseases such as epilepsy.
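A minimal sketch of one way to implement such a network-entropy estimate (illustrative assumptions throughout: a rate model x' = -x + J tanh(x), a sign-pattern symbolization of 10 units, and simple Euler integration). Below the g = 1 transition the entropy is near zero; well above it, the symbol distribution broadens.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)

def network_entropy(g, n=50, T=2000, dt=0.1):
    """Shannon entropy of coarse-grained states of x' = -x + J tanh(x)."""
    J = rng.standard_normal((n, n)) * g / np.sqrt(n)   # connectivity variance g^2 / n
    x = rng.standard_normal(n) * 0.1
    symbols = []
    for _ in range(T):
        x = x + dt * (-x + J @ np.tanh(x))             # Euler step of the rate model
        symbols.append(tuple(x[:10] > 0))              # symbolize sign pattern of 10 units
    counts = Counter(symbols[T // 2:])                 # discard the transient
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return max(0.0, -float(np.sum(p * np.log2(p))))

for g in (0.5, 1.5, 3.0):       # below, near, and above the g = 1 transition
    print(f"g = {g}: NE ~ {network_entropy(g):.2f} bits")
```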

References

1. Stern M, Sompolinsky H, Abbott LF. Dynamics of random neural networks with bistable units. Phys. Rev. E. 2014;90(6):062710.

2. Ipsen JR, Peterson AD. Consequences of Dale's law on the stability-complexity relationship of random neural networks. 2019;arXiv:1907.07293.

3. Wainrib G, Touboul J. Topological and dynamical complexity of random neural networks. Phys. Rev. Lett. 2013;110(11):118101.

Speakers

Wei Qin

PhD candidate, Biomedical Engineering, The University of Melbourne
Third-year Ph.D. candidate at the University of Melbourne.


Poster pdf

Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 04

7:00pm CEST

P165: Using dynamical mean field theory to study synchronization in the brain.
Isabelle Harris, Anthony Burkitt, Hamish Meffin, Andre Peterson

Topic: P165: Using dynamical mean field theory to study synchronization in the brain.
Time: Jul 20, 2020 07:00 PM Amsterdam, Berlin, Rome, Stockholm, Vienna
Link: https://unimelb.zoom.us/j/94169410381?pwd=Z0FNQ0FYRHE5OWxmbGFEQTI3ZU1JQT09
Password: 162839

Using Dynamical Mean Field Theory (DMFT) to study state transitions in the brain

This work focuses on the dynamics of large networks of neurons, and particularly aims to study the effects of brain structures and functions on state transitions, such as those found in epilepsy.

Currently, the modelling framework in theoretical neuroscience focuses on dynamical systems analysis of neural field models and on numerical simulations to disentangle the influences of structure and function on brain dynamics. However, these models and methods use continuous spatial averages of network connectivity. In this work, we are particularly interested in the spatial structures and functions that induce a state transition from asynchronous, intrinsically fluctuating, highly complex activity (the non-seizure state) to intrinsic synchrony and hyperexcitability, a much simpler state (the seizure state). Using a first-order neural network model with a discrete spatial field given by a coupling matrix, a set of self-consistent equations that describe the activity of network structures with populations of neurons can be derived using dynamical mean field theory [Mastrogiuseppe and Ostojic, 2017]. This set of self-consistent equations can be solved semi-analytically, and we use these solutions to derive a measure of spiking variability, and hence excitability: the coefficient of variation (CV). The CV is a common measure of spiking variability in theoretical neuroscience [Meffin et al., 2004], and it has also recently been used as a measure of synchronisation in the analysis of animal model data [Fontenele et al., 2019]. We use the derived expression for the CV to show that under certain network structure and function conditions there exists a state of asynchronous, intrinsic fluctuations, which is believed to be analogous to a normal resting brain state. The seizure state, by contrast, can be defined clinically as a much less complex state: one of hyperexcitability and synchronisation of neural units. This work identifies possible key properties of the neural network that cause synchronous and hyperexcitable behaviour, using dynamical systems, Random Matrix Theory, and DMFT.
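As a small illustration of the CV measure referenced above (not the DMFT derivation itself; the surrogate spike trains below are assumptions), the coefficient of variation of interspike intervals distinguishes irregular from clock-like spiking.

```python
import numpy as np

def cv_of_isis(spike_times):
    """Coefficient of variation of interspike intervals: CV = std(ISI) / mean(ISI)."""
    isis = np.diff(np.sort(spike_times))
    return isis.std() / isis.mean()

rng = np.random.default_rng(4)
poisson_train = np.cumsum(rng.exponential(0.1, 1000))   # irregular spiking: CV ~ 1
regular_train = np.arange(0, 100, 0.1)                  # clock-like spiking: CV ~ 0
print(f"Poisson CV = {cv_of_isis(poisson_train):.2f}")
print(f"Regular CV = {cv_of_isis(regular_train):.2f}")
```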

Understanding how these network structures affect the nature of the neural dynamics is essential to advancing our current mathematical understanding of epileptic transitions, and here we have found a relationship between structure and dynamics. In particular, this work identifies network properties that underpin normal brain function and properties that may cause seizure transitions.

References

[Fontenele et al., 2019] Fontenele, A. J., de Vasconcelos, N. A., Feliciano, T., Aguiar, L. A., Soares- Cunha, C., Coimbra, B., Dalla Porta, L., Ribeiro, S., Rodrigues, A. J., Sousa, N., et al. (2019). Criticality between cortical states. Physical review letters, 122(20):208101.

[Mastrogiuseppe and Ostojic, 2017] Mastrogiuseppe, F. and Ostojic, S. (2017). Intrinsically-generated fluctuating activity in excitatory-inhibitory networks. PLoS computational biology, 13(4):e1005498.

[Meffin et al., 2004] Meffin, H., Burkitt, A. N., and Grayden, D. B. (2004). An analytical model for the ‘large, fluctuating synaptic conductance state’ typical of neocortical neurons in vivo. Journal of computational neuroscience, 16(2):159–175.

Speakers

Isabelle Harris

Student, Department of Medicine, The University of Melbourne



Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 07

7:00pm CEST

P171: Neural routing: Determination of the fastest flows and fastest routes in brain networks
NEW!!: YouTube video with explanation: https://youtu.be/oLIrCDUWSV4            
Play at a 2x playback speed. It's hilarious and I promise you can still understand what I'm saying. At 1x speed is toooo slow :).

Zoom Meeting

Paula Sanz-Leon, Pierpaolo Sorrentino, Fabio Baselice, Rosaria Rucco, Leonardo L Gollo, James A Roberts

Background. Large-scale brain networks (Bullmore and Sporns, 2009) are characterized by global and local functional and structural metrics (Rubinov and Sporns, 2010; Zalesky et al., 2010) that have furthered our understanding of brain function (Fornito et al., 2015). These metrics are based on the idea that information in a network flows along the shortest paths, either topological (Fornito et al., 2013) or geometrical (Seguin et al., 2018). In this work, we propose two functional network connectivity measures based on the physical concept of flow (Townsend and Gong, 2018; Sanz-Leon et al., 2020), encompassing both geometrical and temporal aspects of neural activity. We term the first measure modal fastest flows (MFF): a time-averaged representation of the (fastest) flow lines, revealing the portions of physical space along which a particle (e.g., a wave packet, information, a spike) would travel at the maximal possible speed. The second measure, fastest neural routes, is a dense matrix whose weights are the average transit times a packet of information would take to travel from region ‘j’ to region ‘i’.

Methods: generation of fastest flow lines

We use our neural-flows toolbox (Roberts et al., 2019; Sanz-Leon et al., 2020) to derive flow fields from source-reconstructed MEG data. Fastest flow lines are then generated in 3 steps. First, we estimate flow vectors halfway between pairs of regions, transforming flow vectors into an edge property rather than a nodal property. Second, we trace a flow line starting from region j, following the fastest flow to one of its nearest neighbours within a small spherical region; this process is iterated until region i is reached, and repeated for every possible pairwise combination of regions. Flow lines are thus sequences of maximal instantaneous speeds. Third, we average the values along each flow line to produce a matrix of fastest flows between pairs of regions.

Results: modal fastest flows and fastest neural routes.

We time-averaged the modal fastest flows (MFF) into a single matrix of conduction speeds. A comparison between the functional connectivity derived from the MEG time series and our MFF indicates high similarity, quantified with the correlation matrix distance (cmd) (Herdin et al., 2005), which is 0 if the matrices are equal and 1 if they are completely different; here cmd = 0.18. Paths highlighted by flow lines are not necessarily the shortest in physical distance. Thus, we combine the MFF with two pairwise distance metrics, the Euclidean distance between pairs of regions and the flow-line lengths, to derive the fastest neural routes of information flow. Distributions of transit times are presented in Fig. 1. Our MFF matrix, combined with the fibre lengths of the structural connectome, can be used as a first approximation of heterogeneous time delays (in seconds) in brain networks.
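For reference, a minimal implementation of the correlation matrix distance of Herdin et al. (2005) used above, cmd(R1, R2) = 1 - tr(R1 R2) / (||R1||_F ||R2||_F), applied here to toy matrices (the stand-in FC and MFF matrices are assumptions).

```python
import numpy as np

def cmd(r1, r2):
    """Correlation matrix distance: 0 = equal matrices, 1 = maximally different."""
    return 1 - np.trace(r1 @ r2) / (np.linalg.norm(r1, 'fro') * np.linalg.norm(r2, 'fro'))

rng = np.random.default_rng(5)
x = rng.standard_normal((200, 20))                   # 200 samples, 20 regions
fc = np.corrcoef(x, rowvar=False)                    # stand-in functional connectivity
mff = 0.9 * fc + 0.1 * np.corrcoef(rng.standard_normal((200, 20)), rowvar=False)
print(f"cmd = {cmd(fc, fc):.2f} (identical), {cmd(fc, mff):.2f} (similar)")
```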

References

Bullmore and Sporns, 2009 Nat. Rev. Neurosci. 10(3), 186-198.
Fornito et al., 2015 Nat. Rev. Neurosci. 16(3), 159-172.
Fornito et al., 2013 Neuroimage 80, 426-444.
Zalesky et al., 2010 Neuroimage 53(4), 1197-1207.
Roberts et al., 2019 Nat. Commun. 10(1):1056.
Rubinov and Sporns, 2010 Neuroimage 52(3), 1059-1069.
Townsend and Gong, 2018 PLoS Comput. Biol. 14(12):e1006643.
Sanz-Leon et al., 2020 Neuroimage, toolbox paper in preparation.
Seguin et al., 2018 PNAS 115(24), 6297-6302.

Speakers

Paula Sanz-Leon

Senior Research Officer, QIMR Berghofer




Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 01

7:00pm CEST

P172: Spatiotemporal brain waves on resting-state MEG data
James Pang, Paula Sanz-Leon, Jonathan Hadida, Leonardo Gollo, Mark Woolrich, James A Roberts

TO GET THE GIST OF OUR WORK IN 2 MINUTES:
Please click the slideshow file below (CNS2020_slideshow.m4v)

DURING THE POSTER SESSION OR ANY OTHER TIME:
Please submit your questions/comments at bit.ly/CNS2020_P172_form. I will respond to them asap.

SUMMARY:
Human brain function relies on the integration and coordination of neuronal activity on multiple scales. Several works have revealed that this is possible through spontaneous or evoked synchronization of activities of neural circuits in the brain, allowing spatially correlated patterns that propagate in time to emerge, known as brain waves [1]. These brain waves have been observed in empirical macroscopic and mesoscopic measurements [2,3] and computational brain network models [4], and have been shown to support various brain functions such as visual perception [5]. However, brain waves are rarely investigated in resting-state experimental settings (i.e., without performing an explicit task).

Here, we investigate large-scale spatiotemporal brain waves in resting-state human magnetoencephalography (MEG), which is becoming a popular imaging modality due to its high spatial and temporal resolution, enabling more accurate analysis of macroscopic brain waves. We use source-reconstructed single-subject MEG data projected onto the cortical surface and then decompose the signal into the typical frequency bands from delta to gamma. We find that organized patterns of waves traveling in space and time exist in the resting-state data at the different frequency bands; an example is shown in the time snapshots of the alpha-filtered MEG signal in Fig. 1A and the corresponding phase maps in Fig. 1B. Using the methods in [4] for estimating instantaneous phase speeds, we find that, in general, waves with higher temporal frequencies tend to propagate more rapidly (Fig. 1C). In addition, the speeds match those reported in the literature for other modalities (e.g., electrocorticography in [2]), suggesting the reliability of our analyses. In summary, our work shows that macroscopic brain waves can be observed in resting-state MEG data even for a single subject, enabling the use of MEG alongside computational models in future investigations of how brain waves affect and relate to large-scale brain networks and the emergence of cognition and behavior.
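A sketch of the generic phase-based analysis (a toy 1-D version with a synthetic travelling wave; the band edges, geometry, and true speed are assumptions, and the actual study used the methods of [4] on cortical surface data): band-pass filter, take instantaneous phase via the Hilbert transform, and estimate phase speed from the temporal and spatial phase gradients.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs, T, nx = 250.0, 10.0, 64                 # sampling rate (Hz), duration (s), "cortical" sites
t = np.arange(0, T, 1 / fs)
x = np.linspace(0, 0.3, nx)                 # site positions in metres (toy 1-D cortex)

rng = np.random.default_rng(6)
v_true = 10.0                               # true wave speed (m/s)
sig = np.sin(2 * np.pi * 10 * (t[None, :] - x[:, None] / v_true))   # 10 Hz travelling wave
sig += 0.2 * rng.standard_normal(sig.shape)

b, a = butter(4, [8 / (fs / 2), 13 / (fs / 2)], btype="band")       # alpha band-pass
phase = np.angle(hilbert(filtfilt(b, a, sig, axis=1), axis=1))      # instantaneous phase

# phase speed = |dphi/dt| / |dphi/dx|, robust medians over space and time
dphi_dt = np.gradient(np.unwrap(phase, axis=1), 1 / fs, axis=1)
dphi_dx = np.gradient(np.unwrap(phase, axis=0), x[1] - x[0], axis=0)
speed = np.median(np.abs(dphi_dt)) / np.median(np.abs(dphi_dx))
print(f"estimated phase speed ~ {speed:.1f} m/s (true 10 m/s)")
```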

[1] Muller et al. Cortical travelling waves: Mechanisms and computational principles. Nature Reviews Neuroscience 19(5):255-268, 2018.
[2] Zhang et al. Theta and alpha oscillations are traveling waves in the human neocortex. Neuron 98:1269-1281, 2018.
[3] Rubino et al. Propagating waves mediate information transfer in the motor cortex. Nature Neuroscience 9:1549-1557, 2006.
[4] Roberts et al. Metastable brain waves. Nature Communications 10:1056, 2019.
[5] Zanos et al. A sensorimotor role for traveling waves in primate visual cortex. Neuron 85:615-627, 2015.


Speakers

James Pang

QIMR Berghofer Medical Research Institute
Postdoc working on neuroimaging, networks, and computational models



Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 08

7:00pm CEST

P173: How energy constraints shape brain dynamics during hypoxia and epileptic seizures
Shrey Dutta, James A Roberts

Google Meet link: https://meet.google.com/xxc-ifyr-fai
The brain consumes 20% of the body's energy, 10 times more than predicted by its mass, which makes it highly sensitive to metabolic disturbances [1]. Asphyxia and epileptic seizures disrupt energy and oxygen availability in the brain, leading to pathological activity in the electroencephalogram (EEG) [2-4]. Modelling the bidirectional relationship between brain activity and energy resources is crucial for understanding brain disorders in which metabolic disturbances are implicated. Most models of brain activity do not explicitly include metabolic variables and so cannot address dynamical constraints on energy resources. Here, we explore the roles of energy demand and energy supply in Hodgkin-Huxley neurons augmented with the energy resource dynamics of Na+/K+ pumps [4]. Using a small-scale network of excitatory and inhibitory neurons, we show that during high energy demand and low energy supply (extreme hypoxia) the model simulates scale-free burst suppression with asymmetric, longer-duration bursts (Figure 1), similar to empirical EEG from infants recovering from hypoxia. During normal energy demand and low-to-moderate energy supply the model generates several types of epileptic seizures (Figure 1). We also show multiple mechanisms of seizure termination depending on the magnitude of hypoxia: during low energy supply, seizures terminate through depletion of local energy resources, while during moderate energy supply, ion (Na+ and K+) imbalances terminate the seizure. This suggests that seizure termination due to lack of energy is a potential mechanism for postictal generalised EEG suppression. Our results unify burst suppression during hypoxia and epileptic seizures, and our modelling provides a general platform for studying brain pathologies linked with metabolic disturbances.
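A deliberately simplified caricature (not the authors' Hodgkin-Huxley/pump model; every parameter and update rule below is an assumption) of the bidirectional coupling at the heart of the abstract: spiking consumes an energy pool, the pump-driven restoring force weakens as energy is depleted, and low supply fragments activity into bursts separated by suppression.

```python
import numpy as np

# Schematic leaky neuron coupled to an energy pool A (a stand-in for ATP available
# to Na+/K+ pumps); parameters are illustrative, not fitted.
dt, T = 0.1, 5000.0                      # ms
v, A = -65.0, 1.0                        # membrane potential (mV), normalised energy
supply, cost = 0.0005, 0.05              # energy supply rate and per-spike demand
v_rest, v_thr, v_reset = -65.0, -50.0, -70.0

rng = np.random.default_rng(7)
spikes = []
for i in range(int(T / dt)):
    I_noise = 3.0 * rng.standard_normal()
    # pump-driven restoration towards rest weakens as energy A is depleted
    v += dt * (A * (v_rest - v) / 10.0 + 1.8 + I_noise)
    A += dt * supply * (1.0 - A)         # resupply towards A = 1
    if v >= v_thr:
        if A > 0.1:                      # spiking requires an energy reserve
            spikes.append(i * dt)
            v = v_reset
            A -= cost                    # each spike consumes energy
        else:
            v = v_thr                    # energy-starved: activity is suppressed

print(f"{len(spikes)} spikes; final energy A = {A:.2f}")
```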

References

[1] Raichle, M. E. (2006). The brain's dark energy. Science, 314(5803):1249–1250

[2] Roberts, J. A., Iyer, K. K., Finnigan, S., Vanhatalo, S., and Breakspear, M. (2014). Scale-free bursting in human cortex following hypoxia at birth. The Journal of Neuroscience, 34(19):6557–6572

[3] Jirsa, V. K., Stacey, W. C., Quilichini, P. P., Ivanov, A. I., and Bernard, C. (2014). On the nature of seizure dynamics. Brain, 137(8):2210–2230

[4] Wei, Y., Ullah, G., Ingram, J., and Schiff, S. J. (2014). Oxygen and seizure dynamics: II. Computational modeling. Journal of Neurophysiology, 112(2):213–223

Speakers

Shrey Dutta

Student, Faculty of Medicine, QIMR Berghofer Medical Research Institute, University of Queensland



Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 08

7:00pm CEST

P17: The Impacts of the Connectome on Coupled Networks of Wilson-Cowan Models with Homeostatic Plasticity
Wilten Nicola, Sue Ann Campbell

We study large networks of Wilson-Cowan neural field systems with homeostatic plasticity. Such systems are known to display rich dynamical states even when consisting of only a single recurrently coupled node or two cross-coupled nodes [1]; these dynamics include chaos, mixed-mode oscillations, and synchronized chaos, even under these simple connectivity profiles. Here, we consider networks with connectomes that display so-called L1 normalization but are otherwise arbitrary, in the large-network limit. We find that for the majority of the classical connectomes considered (random, small-world), the network displays large-scale chaotic synchronization to the attractor states and bifurcation sequence of a single recurrently coupled node, as in [1]. However, connectomes that display sufficiently large pairs of eigenvalues can trigger multiple Hopf bifurcations, which can potentially collide in torus bifurcations that destabilize the synchronized, single-node attractor solutions. Our analysis demonstrates that for Wilson-Cowan systems with homeostatic plasticity, the dominant determinant of network activity is not the connectome directly, but rather the connectome's ability to generate large eigenvalue pairs that can induce multiple nearby Hopf bifurcations. If the connectome cannot generate these large pairs of eigenvalues, the dynamics of the network are limited to those of a single recurrently coupled node.
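A small sketch of the kind of spectral check this result suggests (hypothetical: a random digraph and a Watts-Strogatz small-world graph stand in for the connectomes studied, and L1 normalisation is applied row-wise).

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(8)

def l1_normalised_eigs(W):
    """Row-normalise a connectome (L1) and return eigenvalues sorted by modulus."""
    W = W / np.maximum(np.abs(W).sum(axis=1, keepdims=True), 1e-12)
    ev = np.linalg.eigvals(W)
    return ev[np.argsort(-np.abs(ev))]

n = 200
random_net = (rng.random((n, n)) < 0.1).astype(float)
small_world = nx.to_numpy_array(nx.watts_strogatz_graph(n, k=10, p=0.1, seed=8))

for name, W in [("random", random_net), ("small-world", small_world)]:
    ev = l1_normalised_eigs(W)
    # large subdominant eigenvalue pairs flag the possibility of multiple Hopf bifurcations
    print(f"{name}: top |eigenvalues| = {np.abs(ev[:4]).round(3)}")
```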

[1] Nicola, W., Hellyer, P. J., Campbell, S. A., & Clopath, C. (2018). Chaos in homeostatically regulated neural systems. Chaos: An Interdisciplinary Journal of Nonlinear Science, 28(8), 083104.

Speakers

Wilten Nicola

Professor, University of Calgary


Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 20

7:00pm CEST

P189: Computationally Going Where Experiments Cannot: A Dynamical Assessment of Dendritic Currents in the Behaving Animal [1]
Alexandre Guet McCreight, Frances Skinner

Google Meet Link: https://meet.google.com/ofz-cbmq-nwh

Though the electrophysiology techniques that we use to probe neuronal function have advanced greatly, neuronal function remains shrouded in mystery. Little is known about the current contributions that govern cell excitability across different neuronal subtypes and their dendritic compartments in vivo. The picture that we do have is largely based on somatic recordings performed in vitro.

Uncovering dendritic current contributions in neuron subtypes that represent a minority of the neuronal population is not currently feasible using purely experimental means. Thus, we employ morphologically detailed multi-compartment models; specifically, we use two models of one type of inhibitory interneuron, the oriens lacunosum moleculare (OLM) cell. The OLM cell is a well-studied cell type in CA1 hippocampus that is important in gating sensory and contextual information.

We use these models to assess the current contribution profiles across the somatic and dendritic compartments in the presence of the levels of synaptic bombardment that occur in vivo, and compare them to corresponding in vitro scenarios with somatic current injections that generate the same spike rates. Using this approach, we identify changes in dendritic excitability, current contributions, and current co-activation patterns.

We find that during in vivo-like scenarios the relative timing between the activation patterns of the different channel currents and the voltage is preserved. On the other hand, when compared across morphological compartments, current and voltage signals are more decorrelated during in vivo-like scenarios, suggesting decreased signal propagation. We also observe that changes do occur during in vivo-like scenarios at the level of relative current contribution profiles. More specifically, in addition to shifts in the relative balances of the currents that are most active during spikes, we report robust enhancements in dendritic hyperpolarization-activated cyclic nucleotide-gated channel (HCN, or h-current) activation during in vivo-like contexts. This suggests that dendritically located h-channels are functionally important in altering signal propagation in the behaving animal.

[1] Guet-McCreight A and Skinner FK. [version 2; peer review: 2 approved]. F1000Research 2020, 9:180 (https://doi.org/10.12688/f1000research.22584.2)

Speakers

Alexandre Guet McCreight

Postdoctoral Research Fellow, Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health



Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 13

7:00pm CEST

P1: Homeostatic recognition circuits emulating network-wide bursting and surprise
Meeting link:
https://us04web.zoom.us/j/2606501888?pwd=cU13ZlEwWE1JaGtZVGUrV0hJcFE0Zz09

Tsvi Achler

Understanding the circuits of recognition is essential to building a deeper understanding of virtually all of the brain's behaviors and circuits.

The goal of this work is to capture simultaneous findings on both the neural and the behavioral level, namely network-wide bursting (NWB) dynamics together with surprise (responses to unexpected inputs), using a hypothesized recognition circuit based on the idea of homeostatic flow.

If a real neural brain at a resting state is presented with an unexpected or new stimulus, the network shows a fast, network-wide increase in activation (an NWB of many neurons) followed by a slower inhibition, until the network settles back to a resting state. Bursting phenomena during recognition are found ubiquitously in virtually every type of organism, within isolated brain dissections, and even in neural tissue grown in a dish (Fig. 1); their source and function remain poorly understood. A behavioral manifestation of surprise can be observed if the input is highly unexpected, and may involve multiple brain regions.

The homeostatic flow model posits that activation from inputs is balanced by top-down pre-synaptic regulatory feedback from output neurons. Information is projected from inputs to outputs through forward connections, then back to the inputs through backwards homeostatic connections that inhibit the inputs. This effectively balances the inputs and outputs (homeostasis) and generates an internally computed, error-dependent input. This homeostatic input is projected again to the outputs, and back again, until the output values convey the recognition result. This occurs during recognition, and no weights are learned.
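One possible reading of this loop as code (a hypothetical sketch; the multiplicative update rule and the toy patterns are assumptions, not the author's exact equations): outputs are re-estimated while the same weights feed back to scale the inputs, and no weights change during recognition.

```python
import numpy as np

def homeostatic_recognition(x, W, n_iter=50, eps=1e-9):
    """Iterative recognition with pre-synaptic feedback inhibition.
    W[i, j]: forward weight from input j to output i; the same weights feed back."""
    y = np.ones(W.shape[0]) / W.shape[0]           # neutral initial output activity
    for _ in range(n_iter):
        x_hat = W.T @ y                            # top-down feedback onto the inputs
        e = x / (x_hat + eps)                      # inputs rescaled by feedback (homeostasis)
        y = y * (W @ e) / (W.sum(axis=1) + eps)    # outputs re-estimated from balanced input
    return y

# two stored patterns, then recognition of a noisy version of pattern 0;
# the weights W are fixed throughout recognition
W = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
x = np.array([1.0, 0.9, 0.1, 0.0])
print(homeostatic_recognition(x, W))               # output 0 should dominate
```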

When a surprise or unexpected input stimulus is presented, NWB occurs because the homeostatic balance is disturbed with the new stimulus. The system subsequently calms down as it settles back to a new homeostasis.

Compared to existing models, this circuit differs from Adaptive Resonance Theory because: 1) no lateral connections are required (inhibitory or otherwise); 2) all neurons feed backwards pre-synaptically at the same time; 3) there is no vigilance parameter. It differs from Hopfield networks because the top-down feedback is negative (inhibitory and homeostatic) rather than positive. This changes the functions and dynamics of the model, making it stable: its dynamics eventually converge to a steady state as long as the inputs do not change.

The homeostatic feedback should not be confused with the error signal of learning algorithms, since: 1) it is computed during recognition; 2) it does not adjust any weights at any time; 3) it is not generated using training data. It differs from generative and predictive coding models because: 1) it is used primarily during recognition, not learning; 2) the generative and recognition components are inseparable and contained within a single integrated homeostatic circuit.

The network is connectionist but approximates a Bayesian network, in that: 1) the homeostatic weights are roughly equivalent to Bayesian likelihood values; 2) the output values can behave as Bayesian priors if they are maintained externally or if the inputs suddenly change. Maintaining priors changes the circuit's recognition behavior and dynamics without changing weights.

Learning can be achieved with simple Hebbian learning, yielding weights that are similar to Bayesian likelihoods; both directions of the homeostatic process learn the same weights. Single-layer learning is demonstrated on the standard MNIST digits while capturing the neural findings of NWB.

https://youtu.be/9gTJorBeLi8

Speakers

Tsvi Achler

Chief Science Officer, Optimizing Mind


Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 20

7:00pm CEST

P201: Frequency-separated Principal Components Analysis of Cortical Population Activity
Jean-Philippe Thivierge

Google Meet link: meet.google.com/naa-bidm-gue 

Neocortical activity is characterized by the presence of low-dimensional fluctuations in firing rate that are coordinated across neurons [1]. Despite a wealth of experiments and models, the role of these low-dimensional fluctuations remains unclear, in part due to limited data analysis techniques. While several approaches exist for dimensionality reduction [2], there is a lack of methods designed to extract frequency-specific, low-dimensional fluctuations from neural signals. This is true even of methods aimed at finding rotational structure in PCA [3], as these approaches lack a frequency-specific separation of components.

Here, we describe a technique termed frequency-separated principal components analysis (FS-PCA) that addresses this issue. This talk is organized as a tutorial where we first show toy examples that apply FS-PCA to artificial signals. Then, we provide an application of FS-PCA to both spontaneous and evoked cortical activity. Finally, we discuss the interpretation, limitations, and possible extensions of this technique to problems in systems neuroscience.

FS-PCA is based on recent theoretical advances on the eigenspectrum of Hankel matrices [4]. As a first example, we consider a sine wave with added zero-mean Gaussian noise (Fig. 1a). We show that this signal can be converted to a Hankel matrix (Fig. 1b) whose eigenspectrum contains 2f+1 largest eigenvalues, where f is the number of characteristic frequencies of the original signal. The reconstructed signal obtained from FS-PCA closely matches the amplitude, phase, and frequency of the original signal (Fig. 1c).
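A sketch of this toy example (illustrative parameter choices; for the non-square Hankel matrix below, the singular values play the role of the eigenspectrum).

```python
import numpy as np
from scipy.linalg import hankel

rng = np.random.default_rng(10)
n = 400
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 50) + 0.3 * rng.standard_normal(n)  # one frequency, f = 1

# Hankel (trajectory) matrix of the signal: constant anti-diagonals
H = hankel(signal[: n // 2], signal[n // 2 - 1 :])

sv = np.linalg.svd(H, compute_uv=False)
print("top 5 singular values:", sv[:5].round(1))  # sharp drop after ~2f + 1 components

# keeping only the leading components gives a denoised, frequency-specific part of H
U, S, Vt = np.linalg.svd(H, full_matrices=False)
H_low = (U[:, :3] * S[:3]) @ Vt[:3]               # rank-3 approximation of H
```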

Next, we apply FS-PCA to population recordings from macaque V1 cortex. We show that the first dimension of the reconstructed signal captures the slow, low-frequency fluctuations in mean population activity observed over time (Fig. 1d, red line). Adding further dimensions markedly improves the reconstruction of population activity (Fig. 1d, blue line). Overall, the ranked eigenvalues obtained from FS-PCA followed an approximate power law whereby the highest-ranked dimensions captured a large proportion of the data (Fig. 1e). In turn, the highest-ranked dimensions had a lower characteristic frequency than lower-ranked dimensions (Fig. 1e, inset). In sum, these results suggest that while a broad spectrum of frequencies contributed to population activity, fluctuations in spontaneous activity were dominated by low-frequency components.

Speakers

Jean-Philippe Thivierge

Department of Psychology, University of Ottawa



Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 14

7:00pm CEST

P211: Neural Networks Architectures That Detect Visual Motion like Biological Brains
Hamish Pratt, Bernard Evans, Thomas Rowntree, Ian Reid, Steven Wiederman

Convolutional neural networks (CNNs) have become the state of the art for image classification and object detection tasks, owing to their ability to combine appearance features in a scene. CNNs used for detection and classification tasks primarily process single static images to combine these features. In a manner similar to biological brains, some neural networks also utilise motion as complementary information to aid object detection; however, unlike the brain, these networks rarely classify ‘moving objects’ in a scene. Our research analyses a neural network’s ability to detect unique motion cues in scenes without any appearance cues, to understand the limits of neural networks’ capacity to process motion information. We generated variant CNN models to explore different architectures that can process motion information, and built a recurrent CNN with skip layers for our experiments. Comparing our network’s detection rates against psychophysical stimuli used in human experiments, we found that the neural network and humans both struggled to correctly detect unique motion under similar conditions. When trained to detect higher orders of motion, stimuli observable even by small insects, the network responded strongly to the motion order on which it was trained and was largely unresponsive to the other motion orders. To further test motion detection in neural networks, we trained a network to detect repeating spatio-temporal signals inside a scene of random noise. The results from our experiments show that, alongside convolutional neural networks’ success in detecting appearance features for object classification, they are able to detect motion without appearance. Understanding both the similarities to biological brains and the limits within which these networks perform fundamental vision tasks like motion detection will give a better picture of a network’s suitability for real-world applications.
Presentation Link: meet.google.com/mob-nahz-rpa

Speakers

Hamish Pratt

School of Computer Science, The University of Adelaide



Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 05

7:00pm CEST

P214: Bayesian Model for Multisensory Integration and Segregation
Ma Xiangyu, He Wang, Min Yan, Wenhao Zhang, K. Y. Michael Wong

Bayesian Model for Multisensory Integration and Segregation (Link to Google Meeting)

Xiangyu Ma (1), He Wang (1), Min Yan (1), Wen-Hao Zhang (2), and K. Y. Michael Wong (1)

(1) Hong Kong University of Science and Technology

(2) University of Pittsburgh

The brain processes information from different sensory modalities in our daily routine, and the neural system should have the ability to distinguish whether different signals originate from the same source. Experimental data suggest that the brain can integrate visual and vestibular cues to infer heading direction according to Bayesian prediction. In the dorsal medial superior temporal (MSTd) area and the ventral intraparietal (VIP) area, there exist two types of neurons: congruent and opposite neurons. By focusing on a prior distribution of stimuli that is fully correlated, a recent work by Zhang et al. (Zhang, 2019) suggested that these two distinct types of neurons have complementary roles in multisensory integration and segregation. In the proposed distributed network architecture, cues of different modalities are processed by different modules, but the modules are reciprocally connected. Congruent neurons of given preferred stimuli in one module are connected to the congruent neurons in the other module with similar preferred stimuli. In contrast, opposite neurons of given preferred stimuli in the two modules are connected to their counterparts with opposite preferred stimuli. This enables the congruent neurons to yield Bayesian posterior estimates of multisensory integration over a broad range of parameters, and the opposite neurons to provide signals dependent on cue disparity, enabling the segregation of cues in subsequent processing. However, in the previous model there are parameter ranges in which the inference is only approximately Bayesian. Hence, in this work we approach the dynamics analytically and propose improvements for achieving more accurate Bayesian inference.
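For background, a worked example (with hypothetical numbers) of the Bayesian prediction referred to above: for a common source, two independent Gaussian cues combine into a reliability-weighted posterior.

```python
import numpy as np

def integrate_cues(mu1, sigma1, mu2, sigma2):
    """Posterior mean and s.d. for two independent Gaussian cues about one heading."""
    w1, w2 = 1 / sigma1**2, 1 / sigma2**2     # reliabilities (inverse variances)
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)    # reliability-weighted average
    return mu, np.sqrt(1 / (w1 + w2))

# visual cue: 10 deg +/- 2 deg; vestibular cue: 20 deg +/- 4 deg
mu, sd = integrate_cues(10, 2, 20, 4)
print(f"integrated heading = {mu:.1f} deg +/- {sd:.1f}")  # pulled towards the reliable cue
```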

Furthermore, the Bayes-optimality in the previous work was based on a prior distribution of stimuli that is fully correlated, whereas in practice there are many other scenarios, described by priors with more than one component. For example, studies of causal inference consider prior distributions with a correlated and an independent component. In the second part of our work, we propose a neural circuit with additional modules to tackle these cases. In addition, we illustrate that the network structure encodes the correlations contained in the prior information. Finally, we discuss how the Bayes factor reveals the potential of our network model as a decision-making neural circuit for causal inference.

Reference

Zhang, W.-H., et al. (2019). Complementary congruent and opposite neurons achieve concurrent multisensory integration and segregation. eLife.


Speakers

Ma Xiangyu

Institute for Physics, Hong Kong University of Science and Technology


CNS pdf

Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 11

7:00pm CEST

P215: Loss aversion and outcome-value encoding: A negative association between posterior insula activity and loss aversion coefficient
Link to Google Meeting: https://meet.google.com/ona-iwxv-ugf

Update 19Jul2020: A different version of poster is uploaded for better readability. 

Ka Chun Wu, Isaac Ip, Fiona Ching, Heytou Chiu, Rosa Chan, Savio Wong

In prospect theory, loss aversion is an important parameter that modulates one's decisions involving risk. Previous studies have found that amygdala activity is related to the degree of loss aversion during the action-selection process. However, few if any studies have looked into the reaction to the outcome, which could affect the reinforcement learning process. In this study, we examine the brain response associated with the decision outcome and how it varies across subjects with different degrees of loss aversion. We expect that people with high loss aversion experience a stronger emotional impact when receiving a negative outcome after taking a risk. We hypothesize that a person's degree of loss aversion is reflected in the BOLD contrast across decision outcomes.

To test this hypothesis, we recorded and analysed fMRI data from twenty-three participants (10 males and 13 females; mean age = 17.78 ± 0.52) during the Loss Aversion Task (LAT) [1]. The LAT was implemented with a rapid event-related design in which participants were given two options: a NoGamble option with a guaranteed outcome, and a Gamble with a 50% chance of a better-than-NoGamble outcome and a 50% chance of a worse-than-NoGamble outcome (Fig. 1A). The utilities of the two options varied so that one option had higher or equal utility with respect to the other. Participants were then presented with feedback indicating the outcome. The loss aversion coefficient (lambda; λ = −β_loss / β_gain) is estimated by fitting the behavioural responses with a logistic function. A higher lambda value indicates stronger loss aversion, with λ = 1 meaning equal weight for gain and loss.
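A sketch of this estimation step on simulated choices (the data, decision rule, and true λ are assumptions; a weakly regularised LogisticRegression stands in for a plain logistic fit).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n = 500
gains = rng.uniform(10, 40, n)                   # potential gain of each 50/50 gamble
losses = rng.uniform(5, 25, n)                   # potential loss
true_lambda = 2.0
p_accept = 1 / (1 + np.exp(-(0.2 * gains - 0.2 * true_lambda * losses)))
accept = rng.random(n) < p_accept                # simulated gamble/no-gamble choices

X = np.column_stack([gains, losses])
fit = LogisticRegression(C=1e6, max_iter=1000).fit(X, accept)
beta_gain, beta_loss = fit.coef_[0]
print(f"lambda = {-beta_loss / beta_gain:.2f}")  # ~2: losses weighed twice as much as gains
```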

We find significantly stronger activation in the nucleus accumbens (NAcc) for gambling outcomes (win or loss) than for guaranteed outcomes, but no significant difference between the valences of feedback (gain versus loss). Condition contrasts reveal that the activity of the left posterior insular cortex [2], an area related to emotional salience and memory, during gambling outcomes relative to guaranteed outcomes is negatively correlated with participants' lambda (Fig. 1B). Although the causal relationship is unclear, we propose that gambling outcomes are more salient in subjects with weaker loss aversion, reflecting a higher chance of these outcomes influencing their future decisions [3]. In contrast, people with strong loss aversion differentiate less between gambling and guaranteed outcomes in terms of salience, which leads to relatively higher utility for the safe option in the long run. In conclusion, individual differences in loss aversion can be captured by condition contrasts in a LAT, giving insight into models of outcome-value encoding.

References
1. Tom SM, Fox CR, Trepel C, Poldrack RA. The neural basis of loss aversion in decision-making under risk. Science. 2007; 315(5811): 515-518.
2. Menon V, Uddin LQ. Saliency, switching, attention and control: a network model of insula function. Brain Structure and Function. 2010; 214: 655-667.
3. Canessa N, et al. Neural markers of loss aversion in resting-state brain activity. NeuroImage. 2017; 146: 257-265.


Speakers

Ka Chun Wu

MPhil student, Department of Educational Psychology, The Chinese University of Hong Kong



Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 02

7:00pm CEST

P216: Multiplicity and correlations of unidirectional and reciprocal connections in the nervous system of the Caenorhabditis elegans
Link to online meeting: meet.google.com/ucc-hrmd-qwz
Edgar Wright, Alexander Goltsev 

Reciprocally connected pairs (RCPs) of neurons are the simplest structural motif in neuronal networks. More complex structural motifs are composed of three or more neurons. RCPs are formed by reciprocal synapses, and represent local microcircuits that can act as feedback loops. Evidence of the ubiquitous presence of RCPs in the central nervous system of different animals is well-established. Statistical analysis of connections between principal cortical cells has shown that RCPs are overrepresented in the somatosensory cortex, neocortex, and olfactory bulb. RCPs are also overrepresented in the neuronal network of the nematode Caenorhabditis elegans (C. elegans) [1].

In this work we analysed the statistics of reciprocal and unidirectional chemical connections between pairs of neurons in the neuronal connectomes of the male and hermaphrodite C. elegans, using data recently published in [2]. First, our analysis shows that even if all unidirectional connections are removed, i.e., approximately 63% of all connections, approximately 83% of the neurons with chemical synapses in the male (87% in the hermaphrodite) remain in the strongly connected cluster, where they are reachable from each other through sequences of reciprocal connections. This shows that reciprocal connections provide communication between most neurons with chemical synapses in the C. elegans. Second, the average multiplicity was found to be larger among reciprocal connections than among unidirectional connections, both for afferent and for efferent connections, and the probability that a connection has large multiplicity (over 10 synapses per connection) is larger among reciprocal connections. Third, most neurons with an above-average number of presynaptic neighbors receive afferent connections whose multiplicity is on average larger than the average connectome multiplicity; moreover, the larger the in-degree of a neuron, the larger the multiplicity of its afferent connections (Figure 1). The multiplicity of efferent connections, however, was found to be largely independent of the number of postsynaptic neurons. Fourth, the number of afferent synapses and the number of presynaptic neurons are strongly correlated, such that neurons with more presynaptic neighbors receive disproportionately more synapses.
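A sketch of the first analysis on a surrogate graph (using networkx; the random digraph is a stand-in for the published connectome, so the percentages will not match those above): drop every unidirectional edge and measure how many nodes remain mutually reachable through reciprocal connections.

```python
import networkx as nx

G = nx.gnp_random_graph(300, 0.1, seed=12, directed=True)   # stand-in connectome

# keep only reciprocal edges: (u, v) survives iff (v, u) is also present
recip = [(u, v) for u, v in G.edges if G.has_edge(v, u)]
R = nx.DiGraph(recip)

scc = max(nx.strongly_connected_components(R), key=len)
frac_removed = 1 - len(recip) / G.number_of_edges()
print(f"removed {frac_removed:.0%} of edges (unidirectional)")
print(f"{len(scc) / G.number_of_nodes():.0%} of nodes remain in the "
      "largest strongly connected cluster")
```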

Given the known functional roles of some RCPs, it is possible that the enhanced multiplicity among RCPs results from their function. For example, RCPs have been implicated in memory formation. Since the formation of long-term memory results in an increase in the number of dendritic spines on neurons that are part of a memory engram, a similar mechanism may underlie the enhanced multiplicity of reciprocal connections in the C. elegans. The enhanced multiplicity may in part result from Hebbian structural plasticity: as neurons with a larger number of presynaptic neighbors are more likely to be activated, they are also more likely to experience prolonged periods of high activity, which in turn can induce the formation of more synapses. Conversely, the multiplicity of neurons with fewer presynaptic neighbors should decrease as a result of extended periods of low neuronal activity.

1. L. R. Varshney et al. Structural properties of the C. elegans neuronal network. PLoS Comput. Biol. 7:e1001066, 2011.

2. S. J. Cook et al. Whole-animal connectomes of both Caenorhabditis elegans sexes. Nature 571, 63–71, 2019.


Speakers

Edgar Wright

PhD Student, Department of Physics, University of Aveiro


Poster pdf

Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 16

7:00pm CEST

P219: Interpretable modeling of neurons in cortical area V4 via compressed convolutional neural networks
Zoom Link: https://ucsf.zoom.us/j/98471222466?pwd=UllvdG5USmYwdkE2VUJXcnBkTGl0dz09

Reza Abbasi-Asl, Bin Yu

Characterizing the functions of neurons in visual cortex is a central problem in visual sensory processing. Along the ventral visual pathway, the functions of neurons in cortical area V4 are less well understood than those in the early visual areas V1 and V2, primarily because of V4 neurons' highly nonlinear response properties. As a consequence, building predictive models for these neurons has been one of the challenging tasks in computational neuroscience. Recently, models based on convolutional neural networks (CNNs) have shown promise in predicting the activity of V4 neurons. More importantly, interpreting CNN-based models has offered tools for understanding V4 neurons' functional properties by visualizing their pattern selectivity. These interpretations, however, are based on models with hundreds of convolutional filters, so it is challenging to present a sparse set of filter bases to model each V4 neuron. To address this limitation, we propose two algorithms that remove redundant filters from the CNN-based models of V4 neurons. First, CAR compression, which prunes filters from the CNN based on each filter's contribution to image classification accuracy; CAR is a greedy compression scheme that yields smaller and more interpretable CNNs while achieving close to the original accuracy. Second, RAR compression, which prunes filters based on their contribution to neural response prediction accuracy. Both CAR and RAR provide a new set of simpler, accurate models for V4 neurons. These models achieve almost si