CNS*2020 Online has ended
Welcome to the Sched instance for CNS*2020 Online! Please read the instruction document for detailed information on CNS*2020.
Workshop
Tuesday, July 21
 

9:30am CEST

W2 S01: Elucidating the role of cell architecture and its remodeling in maintaining a healthy heartbeat using high-resolution 3D computational models.
When an aspiring athlete builds cardiac endurance, when a mom-to-be is pregnant, and as we all grow and age from tiny embryos into adults, the heart has to generate enough force to pump the blood that the body needs to supply oxygen and nutrients to sustain life. The heart’s ability to pump blood to the rest of the body is determined by the coordinated contraction of millions of its constituent cardiac cells. Soon after embryonic development is complete and the heart has formed, cardiac cells lose their ability to divide and multiply. Therefore, for most of life, the only way the heart can increase its force of contraction and its capacity to meet long-term increases in demand for blood supply is by enlarging each cardiac cell and re-arranging its internal components to accommodate more force-generating proteins. These architectural changes at the cellular level are referred to as cell remodeling, and little is known about the biophysical mechanisms that drive this important process of life or about how changes to cell architecture affect cardiac cell signaling and the heartbeat. In this talk, I will present our recent findings on the role that the spatial organization of cardiac cell signaling proteins and organelles plays in maintaining a healthy heartbeat. I will demonstrate how we can gain quantitative insights into the contribution of sub-cellular organization to cell function using 3D computational models of the cell’s subcellular architecture and signaling processes derived from microscopy data. I will also briefly outline our vision for making spatially detailed models useful for drug-discovery applications.

Speakers

Tuesday July 21, 2020 9:30am - 10:00am CEST
Crowdcast (W02)

10:00am CEST

W2 S02: Inositol triphosphate receptors can increase calcium spark activity in cardiomyocyte dyads without altering signal shape
Calcium signals perform integral roles in cardiac cells, coordinating each heartbeat, and regulating the biochemical reactions that control growth. Inositol 1,4,5-triphosphate receptors (IP3Rs) are intracellular calcium channels that are known to influence these processes during cellular hypertrophy. Recent protein localization experiments suggest IP3Rs may exist in close proximity to ryanodine receptors (RyRs), the channels primarily responsible for the flood of calcium from intracellular stores (sparks) during calcium-induced calcium release. In this study, we seek to untangle the contribution of IP3Rs to spark formation. We develop mathematical models incorporating the stochastic behavior of opening receptors that allow for the parametric tuning of the system to reveal the impact of IP3Rs on spark activation. By testing multiple spark initiation mechanisms, we find that consistently opening (“leaky”) IP3Rs can result in spark initiation more reliably than intermittently opening IP3Rs. We also find that while increasing numbers of IP3Rs increase the probability of formation of a spark, they have little impact on its resultant amplitude, duration, or overall shape.
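As a rough illustration of the kind of stochastic gating model described above (this is not the authors' implementation; channel counts, rates, and flux values below are arbitrary placeholders), a dyad can be simulated as a set of two-state channels in which the RyR opening rate grows with local calcium while a "leaky" IP3R opens at a constant rate, so a leaky channel can occasionally seed a regenerative spark:

```python
# Minimal sketch (not the authors' model): stochastic two-state channels in a dyad.
# "Leaky" IP3Rs open at a constant rate; RyRs open at a rate that grows with local
# calcium (calcium-induced calcium release), so one leaky opening can trigger a spark.
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-4, 0.05                       # s
n_ryr, n_ip3r = 20, 5                    # hypothetical channel counts
k_open_ip3r = 20.0                       # 1/s, constant ("leaky") opening rate
k_close = 100.0                          # 1/s, closing rate for both channel types
j_release, ca_rest, tau_ca = 200.0, 0.1, 5e-3   # flux per open channel (uM/s), rest Ca (uM), decay (s)

ryr_open = np.zeros(n_ryr, bool)
ip3r_open = np.zeros(n_ip3r, bool)
ca, trace = ca_rest, []
for _ in range(int(T / dt)):
    k_open_ryr = 2.0 * ca**2             # CICR: RyR opening rate rises with local Ca
    # draw state transitions independently for each channel
    ryr_open = np.where(ryr_open, rng.random(n_ryr) > k_close * dt,
                        rng.random(n_ryr) < k_open_ryr * dt)
    ip3r_open = np.where(ip3r_open, rng.random(n_ip3r) > k_close * dt,
                         rng.random(n_ip3r) < k_open_ip3r * dt)
    flux = j_release * (ryr_open.sum() + ip3r_open.sum())
    ca += dt * (flux - (ca - ca_rest) / tau_ca)
    trace.append(ca)
print("peak dyadic Ca (uM):", round(max(trace), 2))
```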

Speakers

Tuesday July 21, 2020 10:00am - 10:30am CEST
Crowdcast (W02)


11:00am CEST

W2 S04: Patterns of astrocytic Ca2+ activity: from imaging to modeling and back.
Intracellular calcium is a convenient, measurable indicator of astroglial signaling and of astrocytes’ active involvement in information processing and regulatory pathways in the CNS. Many laboratories are rapidly accumulating astrocyte calcium imaging data in different modalities, with a global trend towards experiments in behaving animals. This creates a demand for astrocyte-oriented data processing and analysis frameworks. The current best-performing algorithms for the analysis of calcium imaging data are neuron-oriented and rely on a prior of stationary, stable, separable spatial sources. Astrocytes are less predictable with regard to the spatial and temporal characteristics of their calcium activity, displaying patterns ranging from spatially confined microdomains to spreading events and large-scale waves. I will present our in-the-making approach to the denoising and description of astrocytic calcium imaging data, addressing event-oriented, continuous, and network-level features in ex vivo and in vivo settings. Further insights into the physical principles and molecular mechanisms underlying astroglial calcium dynamics may come from mathematical modeling. I will describe our spatially extended modeling framework, which can be employed to this end, and will present patterns of calcium dynamics simulated with this model set.

Speakers
avatar for Alexey Brazhe

Alexey Brazhe

senior researcher, Lomonosov Moscow State University


Tuesday July 21, 2020 11:00am - 11:30am CEST
Crowdcast (W02)

12:00pm CEST

W2 S05: Exploring the origin of spatiotemporal patterns of glial calcium by a compartmental model of astrocytic physiology
NOTE: This talk has moved to July 22nd, 4:30pm and will be delivered by Dr. Evan Cresswell-Clay (NIH/BCAM)

Speakers
avatar for Maurizio De Pitta

Maurizio De Pitta

Research Fellow, Basque Center for Applied Mathematics
I am part of the Group in Mathematical, Computational, and Experimental Neuroscience at the Basque Center for Applied Mathematics in Bilbao (Spain). My expertise is in the study of neuron-glial interactions in the healthy and diseased brain. I use multi-disciplinary approaches at the cross-roads of Physics and Computer Science, and collaborate with biologists, engineers, and medical doctors, to harness the... Read More →


Tuesday July 21, 2020 12:00pm - 12:30pm CEST
Crowdcast (W02)

12:30pm CEST

W2 S06: Cloud-based parallel computing of cellular dynamics in a realistic model of astroglia
Modeling the dynamics of cellular processes in astrocytes with their realistic, highly complex geometry has been a challenge. We have developed a NEURON-Python-based modeling platform, ASTRO, that enables parallel processing of cellular signaling within a recreated astrocyte shape using the Amazon Web Service. ASTRO offers a versatile exploration of cell membrane physiology and spatiotemporal calcium dynamics, including multicomponent diffusion-reaction, against multi-disciplinary experimental data. The platform enables users to parallelize the calculation of intracellular calcium dynamics depending on the availability of computing nodes, virtual machine memory, and cloud service elasticity. Our preliminary results indicate that built-in tuning options are likely to be the most efficient way to tackle the configuration of virtual machines aiming at the fastest and most budget-efficient computations.
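The parallelization pattern described here can be illustrated with a generic sketch (this is not ASTRO's API; simulate_calcium, its parameters, and the worker count are hypothetical): independent simulations are farmed out to worker processes, the same pattern one would scale out across cloud virtual machines.

```python
# Illustrative sketch only (not ASTRO's actual API): distributing independent
# calcium-dynamics simulations across worker processes.
import numpy as np
from multiprocessing import Pool

def simulate_calcium(pump_rate, dt=1e-3, steps=5000):
    """Toy single-compartment Ca2+ model: constant influx, first-order extrusion."""
    ca, influx = 0.1, 0.5          # uM, uM/s (hypothetical values)
    trace = np.empty(steps)
    for i in range(steps):
        ca += dt * (influx - pump_rate * ca)
        trace[i] = ca
    return pump_rate, trace.max()

if __name__ == "__main__":
    pump_rates = np.linspace(0.5, 5.0, 16)     # one parameter set per worker task
    with Pool(processes=4) as pool:            # processes ~ available vCPUs on the node
        results = pool.map(simulate_calcium, pump_rates)
    for rate, peak in results:
        print(f"pump_rate={rate:.2f} /s -> peak Ca {peak:.3f} uM")
```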

Speakers
avatar for Leonid Savtchenko

Leonid Savtchenko

Senior Research Scientist, University College London


Tuesday July 21, 2020 12:30pm - 1:00pm CEST
Crowdcast (W02)

2:00pm CEST

W05 S00: Opening Remarks On Spatiotemporal Dynamics in Neuroimaging: Models and Analysis
About this workshop
Human neuroimaging with functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and magnetoencephalography (MEG) reveals large-scale patterns of activity reflecting the brain’s functional complexity. However, the bulk of work seeking to analyse their dynamics and structure has traditionally focused either on temporal aspects, or on spatial aspects, but rarely the two combined. There is growing awareness in the neuroimaging community that computational neuroscience offers unifying approaches to understand large-scale brain dynamics. Conversely, neuroimaging data is a rich source of inspiration for the modeling community.

This workshop aims to showcase recent advances at the intersection of computational neuroscience and human neuroimaging, with a focus on large-scale spatiotemporal dynamics. Two themes we will cover are:
1. Recent work modeling the neurobiology underpinning neuroimaging data; and,
2. Applications and data, highlighting challenges where computational neuroscience has much to offer.

Schedule (in UTC - check attached international timetable to see local times):

  • S01 (UTC 12:00): Paula Sanz-Leon (https://cns2020online.sched.com/event/czSm)
  • S02 (UTC 12:30): Changsong Zhou (https://cns2020online.sched.com/event/czTE)
  • S03 (UTC 13:00): Yujiang Wang (https://cns2020online.sched.com/event/czTP)
  • S04 (UTC 13:30): Patricio Orio (https://cns2020online.sched.com/event/czTc)
  • S05 (UTC 14:00): Katharina Glomb (https://cns2020online.sched.com/event/czTu)
  • S06 (UTC 14:30): John Griffiths (https://cns2020online.sched.com/event/czU0)
  • S07 (UTC 15:00): Johan van der Meer (https://cns2020online.sched.com/event/czTx)
  • S08 (UTC 15:30): Extended Q & A (https://cns2020online.sched.com/event/czUj)

Moderators: James Roberts, Paula Sanz-Leon

Have a question? Add it to our Q & A Space:
https://neurostars.org/t/workshop-spatiotemporal-dynamics-in-neuroimaging-models-and-analysis-q-a/7608?u=psanzleon



Moderators
JR

James Roberts

Team Head, QIMR Berghofer
avatar for Paula Sanz-Leon

Paula Sanz-Leon

Senior Research Officer, QIMR Berghofer

Speakers
avatar for Changsong Zhou

Changsong Zhou

Professor, Physics, Hong Kong Baptist University
Dr. Changsong Zhou, Professor, Department of Physics, Director of Centre for Nonlinear Studies, Hong Kong Baptist University (HKBU). Dr. Zhou’s research interest is dynamical processes on complex systems. His current emphasis is on analysis and modeling connectivity and activity... Read More →
avatar for Yujiang Wang

Yujiang Wang

Principal Investigator, Newcastle University
avatar for Patricio Orio

Patricio Orio

Full Professor, Universidad de Valparaíso
avatar for Katharina Glomb

Katharina Glomb

Department of Radiology, Centre Hospitalier Universitaire Vaudois
JG

John Griffiths

Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health
JV

Johan van der Meer

Research Officer, QIMR Berghofer




Tuesday July 21, 2020 2:00pm - 2:10pm CEST
Crowdcast (W05)

2:10pm CEST

W05 S01: Neural Flows: a toolbox for estimation of velocities, and sparse representation of whole-brain spatiotemporal wave dynamics.
Neural activity organizes into constantly evolving spatiotemporal patterns, also known as brain waves (Roberts et al., 2019). Indeed, wave-like patterns have been observed across multiple neuroimaging modalities and across multiple spatiotemporal scales (Muller et al., 2016; Contreras et al. 1997; Destexhe et al. 1999). However, due to experimental constraints, most attention has thus far been given to localised wave dynamics in the range of micrometers to a few centimeters, rather than at the global or large scale that would encompass the whole brain. Existing toolboxes (Muller et al., 2016; Townsend et al., 2018) are geared particularly towards 2D spatial domains (e.g., LFPs or VSDs on structured rectangular grids). No tool existed to study spatiotemporal waves naturally unfolding in 3D+t as recorded with different non-invasive neuroimaging techniques (e.g., EEG, MEG, and fMRI). In this talk, I will introduce our new toolbox, neural-flows, which allows for the estimation of flows (i.e., velocities) and the identification of 3D singularities, among other features. I will present a general overview of the theoretical background, the general architecture of the toolbox and, most importantly, applications to real data.
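As a toy illustration of what "estimation of flows" means (this is not the neural-flows implementation; the synthetic plane wave, grid sizes, and regularizer below are arbitrary), local velocities can be read off the optical-flow constraint by keeping only the component along the spatial gradient (the "normal flow"):

```python
# Toy sketch (not the neural-flows implementation): a gradient-based "normal flow"
# estimate of local wave velocity from a 3D+t activity array, using the constraint
# dI/dt + v . grad(I) = 0 and keeping only the component along grad(I).
import numpy as np

# synthetic plane wave travelling along x through a small 3D grid
x, y, z, t = np.meshgrid(np.arange(20), np.arange(20), np.arange(20),
                         np.arange(30), indexing="ij")
activity = np.sin(0.5 * x - 0.3 * t)             # I(x, y, z, t)

It = np.gradient(activity, axis=3)               # temporal derivative
Ix, Iy, Iz = np.gradient(activity, axis=(0, 1, 2))
grad_sq = Ix**2 + Iy**2 + Iz**2 + 1e-9           # small regularizer avoids division by zero

# normal-flow components: v_n = -It * grad(I) / |grad(I)|^2
vx = -It * Ix / grad_sq
vy = -It * Iy / grad_sq
vz = -It * Iz / grad_sq
print("median vx:", np.median(vx))               # ~0.6 grid units/frame for this wave
```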

Send your questions to our Q & A Space: https://neurostars.org/t/workshop-spatiotemporal-dynamics-in-neuroimaging-models-and-analysis-q-a/7608?u=psanzleon

Speakers
avatar for Paula Sanz-Leon

Paula Sanz-Leon

Senior Research Officer, QIMR Berghofer


Tuesday July 21, 2020 2:10pm - 2:30pm CEST
Crowdcast (W05)

2:30pm CEST

W05 S02: Hierarchical Connectome Modes and Critical State Jointly Maximize Human Brain Functional Diversity
The brain requires diverse segregated and integrated processing to perform normal functions in terms of anatomical structure and self-organized dynamics with critical features, but the fundamental relationships between the complex structural connectome, critical state and functional diversity remain unknown. Herein, we extend eigenmode analysis to investigate the joint contribution of hierarchical modular structural organization and critical state to brain functional diversity. We show that the structural modes inherent to the hierarchical modular structural connectome allow a nested functional segregation and integration across multiple spatiotemporal scales. The real brain hierarchical modular organization provides large structural capacity for diverse functional interactions, which are generated by sequentially activating and recruiting the hierarchical connectome modes, and the critical state can best explore the capacity to maximize the functional diversity. Our results reveal structural and dynamical mechanisms that jointly support a balanced segregated and integrated brain processing with diverse functional interactions, and they also shed light on dysfunctional segregation and integration in neurodegenerative diseases and neuropsychiatric disorders.
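A minimal sketch of the eigenmode decomposition underlying this approach (illustrative only; a random symmetric matrix stands in for the real hierarchical connectome): structural modes are eigenvectors of the graph Laplacian of the connectivity matrix, and any nodal activity pattern can be written as a weighted sum of those modes.

```python
# Minimal sketch of connectome eigenmode analysis (illustrative; random surrogate
# connectome). Structural modes = eigenvectors of the graph Laplacian; activity
# snapshots decompose into weighted sums of modes.
import numpy as np

rng = np.random.default_rng(1)
n = 64
W = rng.random((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)   # symmetric "connectome"

L = np.diag(W.sum(axis=1)) - W                 # graph Laplacian
eigvals, modes = np.linalg.eigh(L)             # columns = structural eigenmodes,
                                               # ordered from global (smooth) to local

activity = rng.standard_normal(n)              # one snapshot of nodal activity
coeffs = modes.T @ activity                    # mode coefficients
reconstructed = modes @ coeffs                 # exact reconstruction from all modes
print("reconstruction error:", np.linalg.norm(activity - reconstructed))
```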

Send your questions to our Q & A Space: https://neurostars.org/t/workshop-spatiotemporal-dynamics-in-neuroimaging-models-and-analysis-q-a/7608?u=psanzleon

Speakers
avatar for Changsong Zhou

Changsong Zhou

Professor, Physics, Hong Kong Baptist University
Dr. Changsong Zhou, Professor, Department of Physics, Director of Centre for Nonlinear Studies, Hong Kong Baptist University (HKBU). Dr. Zhou’s research interest is dynamical processes on complex systems. His current emphasis is on analysis and modeling connectivity and activity... Read More →


Tuesday July 21, 2020 2:30pm - 3:00pm CEST
Crowdcast (W05)

3:00pm CEST

W1 S0: Methods of Information Theory in Computational Neuroscience - Opening Remarks
Workshop on Methods of Information Theory in Computational Neuroscience
Methods originally developed in Information Theory have found wide applicability in computational neuroscience. Beyond these original methods there is a need to develop novel tools and approaches that are driven by problems arising in neuroscience.
A number of researchers in computational/systems neuroscience and in information/communication theory are investigating problems of information representation and processing. While the goals are often the same, these researchers bring different perspectives and points of view to a common set of neuroscience problems. Often they participate in different fora and their interaction is limited.
The goal of the workshop is to bring some of these researchers together to discuss challenges posed by neuroscience and to exchange ideas and present their latest work.
The workshop is targeted towards computational and systems neuroscientists with interest in methods of information theory as well as information/communication theorists with interest in neuroscience.

Speakers
avatar for Joseph Lizier

Joseph Lizier

Associate Professor, Centre for Complex Systems, The University of Sydney
My research focusses on studying the dynamics of information processing in biological and bio-inspired complex systems and networks, using tools from information theory such as transfer entropy to reveal when and where in a complex system information is being stored, transferred and... Read More →


Tuesday July 21, 2020 3:00pm - 3:05pm CEST
Crowdcast (W01)

3:00pm CEST

W05 S03: Spatio-temporal dynamics in epilepsy: Models & Analysis
Epilepsy is a brain disorder characterised by sudden, unexpected seizures which disrupt normal brain function. Patients often show abnormalities on EEG and MRI.
I will begin my talk with some observations on the spatio-temporal dynamics of epileptic seizures from intracranial EEG and present a simple dynamical (neural population) model that explains these observations.

I will continue to show that these observations must be refined to be clinically useful on an individual patient basis. To refine the observations, I will show some recent analyses of spatio-temporal dynamics during epileptic seizures, but also in the interictal periods between seizures in individual patients.

Our analyses demonstrate that spatio-temporal brain dynamics contain useful information to support clinical treatment of individual patients. However, to make this important translational step, I will argue that it will be crucial to consider patient-specific modulations that fluctuate on different time-scales.

References:
Schroeder, Gabrielle M., et al. "Seizure pathways change on circadian and slower timescales in individual patients with focal epilepsy." Proceedings of the National Academy of Sciences (2020): 11048-11058.

Wang, Yujiang, et al. "Interictal intracranial electroencephalography for predicting surgical success: The importance of space and time." Epilepsia.

Wang, Yujiang, et al. "Mechanisms underlying different onset patterns of focal seizures." PLoS computational biology (2017): e1005475.

Send your questions to our Q & A Space: https://neurostars.org/t/workshop-spatiotemporal-dynamics-in-neuroimaging-models-and-analysis-q-a/7608?u=psanzleon

Speakers
avatar for Yujiang Wang

Yujiang Wang

Principal Investigator, Newcastle University


Tuesday July 21, 2020 3:00pm - 3:30pm CEST
Crowdcast (W05)

3:00pm CEST

W2 S07: Online methods for real-time analysis of calcium imaging data
Calcium imaging methods enable researchers to measure the activity of genetically-targeted large-scale neuronal populations. Whereas earlier methods required the specimen to be stable, e.g. anesthetized or head-fixed, new brain imaging techniques using microendoscopic lenses and miniaturized microscopes have enabled deep brain imaging in freely moving mice. Previously, a constrained matrix factorization approach (CNMF) was suggested to extract the activity of the imaged neuronal sources. It has been extended further to handle the very large background fluctuations in microendoscopic data (CNMF-E). However, both approaches rely on offline batch processing of the entire video and are demanding in terms of both computing and memory requirements, in particular CNMF-E. Moreover, in some scenarios we want to perform experiments in real time and closed loop, analyzing data on the fly to guide the next experimental steps or to control feedback, and this calls for new methods for accurate real-time processing. Here we address both issues by introducing an online framework for the analysis of streaming calcium imaging data, including i) motion artifact correction, ii) neuronal source extraction, and iii) activity denoising and deconvolution. Extending previous work on online dictionary learning and calcium imaging data analysis, we first present online adaptations of the CNMF and CNMF-E algorithms, which dramatically reduce memory and computation requirements. Secondly, we propose an algorithm that uses a convolution-based background model for microendoscopic data and enables even faster (real-time) processing on GPU hardware. We apply our algorithms to a variety of experimental datasets acquired with 2-photon, light-sheet, and microendoscopic imaging techniques, and show that they yield high-quality results similar to the popular offline approaches, but outperform them with regard to computing time and memory requirements. Our algorithms enable faster and scalable analysis and open the door to new closed-loop experiments.
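A toy sketch of the online idea (this is not the CNMF/CNMF-E implementation described above; footprints and traces below are random placeholders): with spatial footprints held fixed, each incoming frame is processed on arrival using small precomputed statistics, so the full movie never needs to be stored.

```python
# Toy sketch of online, frame-by-frame source extraction (illustrative only).
# With spatial footprints A fixed, each arriving frame y_t yields a trace estimate
# from precomputed statistics; a real pipeline adds nonnegativity, deconvolution,
# background modeling, and updates of A itself.
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_sources, n_frames = 400, 5, 200
A = np.abs(rng.standard_normal((n_pixels, n_sources)))        # spatial footprints
true_C = np.abs(rng.standard_normal((n_sources, n_frames)))   # ground-truth traces

AtA = A.T @ A                                   # small (n_sources x n_sources) statistic
traces = np.zeros((n_sources, n_frames))
for t in range(n_frames):                       # frames arrive one at a time
    y_t = A @ true_C[:, t] + 0.01 * rng.standard_normal(n_pixels)
    traces[:, t] = np.linalg.solve(AtA, A.T @ y_t)   # least-squares trace for this frame

print("trace correlation:", round(np.corrcoef(traces.ravel(), true_C.ravel())[0, 1], 3))
```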



Speakers
avatar for Johannes Friedrich

Johannes Friedrich

Associate Research Scientist, Flatiron Institute


Tuesday July 21, 2020 3:00pm - 3:30pm CEST
Crowdcast (W02)

3:05pm CEST

W1 S1: Inferring what to do
Workshop on Methods of Information Theory in Computational Neuroscience

Thomas Parr
University College London

In recent years, the "planning as inference" paradigm has become central to the study of behaviour. The advance offered by this is the formalisation of motivation as a prior belief about "how I am going to act". In this talk, I will overview some of the factors that contribute to this prior - through the lens of active inference. These are rooted in optimal experimental design, information theory, and statistical decision making. The first part of the talk summarises the principles that underwrite active inference and motivates the question of how we formulate prior beliefs about how to act. The second part unpacks this in terms of exploitative and explorative behaviours. Finally, we consider the implementation of behavioural policies in terms of movement, the neuronal message passing that underwrites this, and the computational pathologies that result from aberrant priors.

Speakers
TP

Thomas Parr

University College London


Tuesday July 21, 2020 3:05pm - 3:50pm CEST
Crowdcast (W01)

3:30pm CEST

W3 S0: Information-Theoretic Models in Psychology and Neuroscience (Opening remarks)
Models of information theory describe the behavior and neural dynamics in intelligent agents. They have arisen through fruitful interactions between mathematical psychology, cognitive neuroscience and other fields. However, opportunities for such interactions seem to be few at the moment. This workshop aims to fill this gap by bringing together researchers with different backgrounds but a common goal: to understand information processing in the human and animal brain.
The workshop will discuss information sampling, encoding and decoding during sensory processing, time perception and higher cognitive functions. It will review state of the art techniques based on deep neural networks, probabilistic inference and dynamical systems. It will also provide updates about recent results using these techniques to understand the biology and behavior of intelligent information processing.
The workshop will be of interest to members of the CNS community who are keen on model-driven explanations of sensory perception and higher cognition.



Speakers
DP

Dimitrios Pinotsis

Associate Professor & Research Affiliate, University of London - City & MIT
avatar for Randy Gallistel

Randy Gallistel

Professor Emeritus, Rutgers University
The application of information theory to associative learning and to the neurobiology of memory
EK

Earl K. Miller

Picower Professor of Neuroscience, MIT


Tuesday July 21, 2020 3:30pm - 3:35pm CEST
Crowdcast (W03)

3:30pm CEST

W05 S04: Dynamical richness emerging from fixed connectomes.
The brain exhibits highly fluctuating dynamics, with time scales much shorter than those of changes in connectivity. Many factors are responsible for the emergence of this behavior, favoring metastable or critical dynamics and not allowing the system to settle into a single attractor.

Using numerical simulations of different models of large-scale brain activity, we are disentangling the precise factors that contribute to the particular behavior of human brain-inspired networks. We have shown that the human connectome contains a critical core of highly interconnected nodes (s-core) that have the possibility of high and low activity states at a low value of global connectivity weight G. As G is further increased, the recruitment of new nodes able to sustain high activity is fairly gradual, a sign of dynamical richness because the network can have multiple ‘ignited’ states. This gradual recruitment is lost when the s-core is disrupted by randomizing the network, even if some other topological features are conserved such as degree distribution or small-worldness. We are also studying the high-order interactions in the activity of the brain, that can be measured in rs-fMRI recordings and become more redundant (less synergistic) with aging. This result is also reproduced in mean-field models connected by the topology of the human connectome, and our simulations help us to understand the emergence of synergistic relationships from the structural connectivity and how this is related to the multistable dynamics.
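A toy sketch of the "ignition" experiment described above (illustrative only, not the authors' large-scale models; the sparse random matrix is a stand-in for the connectome, and all parameters are arbitrary): bistable nodes are coupled through a network, the global coupling G is swept, and the number of nodes that settle into the high-activity state is counted.

```python
# Toy sketch (not the authors' models): bistable (double-well) nodes coupled through
# a weighted network; sweep the global coupling G and count "ignited" nodes.
import numpy as np

rng = np.random.default_rng(3)
n = 60
W = rng.random((n, n)) * (rng.random((n, n)) < 0.1)   # sparse surrogate connectome
np.fill_diagonal(W, 0)

def ignited_count(G, dt=0.01, steps=4000, noise=0.3):
    x = -np.ones(n)                                    # start everyone in the low state
    for _ in range(steps):
        drive = x - x**3 + G * W @ x                   # double-well dynamics + network input
        x += dt * drive + np.sqrt(dt) * noise * rng.standard_normal(n)
    return int((x > 0).sum())

for G in [0.0, 0.05, 0.1, 0.2, 0.4]:
    print(f"G={G:.2f}: {ignited_count(G)} / {n} nodes in the high state")
```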

Acknowledgments: Fondecyt 1181076, ANID-Basal FB0008, Instituto Milenio ICM-ANID P09-022-F

Send your questions to our Q & A Space: https://neurostars.org/t/workshop-spatiotemporal-dynamics-in-neuroimaging-models-and-analysis-q-a/7608?u=psanzleon

Speakers
avatar for Patricio Orio

Patricio Orio

Full Professor, Universidad de Valparaíso


Tuesday July 21, 2020 3:30pm - 4:00pm CEST
Crowdcast (W05)

3:30pm CEST

W2 S08: Multiphoton imaging of calcium signals in populations of hippocampal neurons during behaviour in mouse models of neurodegenerative disorders
The hippocampus plays an important role in learning, memory, and spatial navigation, impairments of which are typically among the first symptoms of Alzheimer’s Disease. Structural abnormalities, amyloid plaques, and aberrant neuronal excitability appear in disease-affected hippocampi, resulting in abnormal activity visible through multiphoton calcium imaging of the hippocampus. We imaged calcium signals in populations of CA1 hippocampal neurons in 5xFAD transgenic mice and wild-type littermates following viral transduction of the hippocampus with hSyn1-GCaMP6s-mRuby. Mice were head-fixed and trained to run along a circular linear track lined with visuotactile cues, floating on an air table (Neurotar Ltd). The three-dimensional distribution of amyloid plaques was mapped following i.p. injection of Methoxy-X04 by acquiring depth stacks at 720 nm excitation. On each imaging session on subsequent days, calcium signals were monitored in several hundred CA1 neurons within a 500 x 500 µm field of view. We took advantage of the activity-independent red channel (mRuby) information for motion correction using a non-rigid deformation algorithm. Imaging files were collected in 4 min sections, for ~20 min periods in which the mouse ran in a single environment; several environments were imaged per session. Regions of interest corresponding to individual neurons were cross-registered across files, environments, sessions and days, with cells in some cases being tracked across imaging sessions for up to 14 days. We were able to observe features of CA1 activity reminiscent of electrophysiological recordings in freely moving animals, including well-defined place fields, phase-precession of place fields, and place field remapping. In contrast, place fields in a two-dimensional version of the task were impoverished, presumably due to the lack of vestibular input in the head-fixed preparation. In ongoing work, we are using this preparation to study the circuit basis of impairments in learning and memory in the 5xFAD model, as well as to test therapeutic strategies.

Speakers

Tuesday July 21, 2020 3:30pm - 4:00pm CEST
Crowdcast (W02)

3:35pm CEST

W3 S1: Connecting Perceptual Bias and Discriminability with Power-Law Efficient Codes
Recent work from Wei & Stocker (2017) proposed a new "perceptual law" relating perceptual bias and discrimination threshold. This law was shown to arise under an information-theoretically optimal allocation of Fisher Information in a neural population. In this talk, I will discuss recent work with Mike Morais that generalizes and extends these results. Specifically, we show that the same law arises under a much larger family of optimal neural codes, which we call "power-law efficient codes". This family includes neural codes that are optimal for minimizing L_p error for any p, indicating that the lawful relationship observed in human psychophysical data does not require information-theoretically optimal neural codes. Moreover, our framework provides new insights into “anti-Bayesian” perceptual biases, in which percepts are biased away from the center of mass of the prior. Power-law efficient codes provide a unifying framework for understanding the relationship between perceptual bias, discriminability, and the allocation of neural resources.
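For reference, the lawful relationship referred to above can be written schematically in the form reported by Wei & Stocker (2017); the generalization to power-law efficient codes discussed in the talk modifies the constant of proportionality rather than this basic form.

```latex
% Schematic form of the bias--discriminability law (Wei & Stocker, 2017):
% perceptual bias b(\theta) is proportional to the derivative of the
% squared discrimination threshold D(\theta).
b(\theta) \;\propto\; \frac{d}{d\theta}\, D^{2}(\theta)
```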

Speakers
avatar for Jonathan Pillow

Jonathan Pillow

Professor, Princeton University


Tuesday July 21, 2020 3:35pm - 4:05pm CEST
Crowdcast (W03)

3:50pm CEST

W1 S2: A differentiable measure of pointwise shared information
Workshop on Methods of Information Theory in Computational Neuroscience

Abdullah Makkeh
University of Goettingen

Partial information decomposition (PID) of the multivariate mutual information describes the distinct ways in which a set of source variables contains information about a target variable. The groundbreaking work of Williams and Beer has shown that this decomposition can not be determined from classic information theory without making additional assumptions, and several candidate measures have been proposed, often drawing on principles from related fields such as decision theory. None of these measures is differentiable with respect to the underlying probability mass function. We here present a novel measure that draws only on the principle linking the local mutual information to exclusion of probability mass. This principle is foundational to the original definition of the mutual information by Fano. We reuse this principle to define a measure of shared information based on the shared exclusion of probability mass by the realizations of source variables. Our measure is differentiable and well-defined for individual realizations of the random variables. Thus, it lends itself for example to local learning in artificial neural networks. We show that the measure can be interpreted as local mutual information with the help of an auxiliary variable. We also show that it has a meaningful Moebius inversion on a redundancy lattice and obeys a target chain rule. We give an operational interpretation of the measure based on the decisions that an agent should take if given only the shared information. (Makkeh et al. arXiv/2002.03356)
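For reference, the local (pointwise) mutual information on which the exclusion argument builds has the standard form below; the shared-information measure itself is defined in the cited preprint (Makkeh et al., arXiv/2002.03356).

```latex
% Local (pointwise) mutual information for a single realization (s, t),
% read as the log-ratio by which observing s excludes probability mass
% inconsistent with t.
i(s;t) \;=\; \log\frac{p(t \mid s)}{p(t)} \;=\; \log\frac{p(s,t)}{p(s)\,p(t)}
```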

Speakers
avatar for Abdullah Makkeh

Abdullah Makkeh

PostDoc, University of Goettingen


Tuesday July 21, 2020 3:50pm - 4:35pm CEST
Crowdcast (W01)

4:00pm CEST

W05 S05: Tracking fast spatiotemporal dynamics in EEG with network harmonics
Understanding the brain as a network of interconnected nodes is the concept at the heart of connectomics. There have been many fascinating insights into the structure of this network and how it supports brain function in terms of the signals that researchers are able to measure, such as the BOLD signal in fMRI or electromagnetic activity in M/EEG. In the latter case, one can potentially gain insight into the time scale (milliseconds) relevant for behavior and neural events, especially when signals recorded on the scalp are successfully projected into the gray matter. However, the low spatial resolution and signal-to-noise ratio make it difficult to apply connectomics approaches to this kind of data. In our recent work, we integrate white matter connectivity data with high-density EEG recordings using a well-suited analysis framework called graph signal processing (GSP). GSP allows us to extract basis functions of the white matter connectivity - called "network harmonics" - and use them as building blocks for EEG activity patterns. In my talk, I will introduce this method including some of its links to harmonic modes in other areas of science. I will show how it yields a sparse representation of the EEG signal which allows us to track fast spatio-temporal dynamics over the course of a simple visual task.
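A minimal sketch of the graph-signal-processing step described above (illustrative only; a random symmetric matrix stands in for the white-matter connectome and random signals for source-projected EEG): network harmonics are eigenvectors of the structural graph Laplacian, and projecting activity onto them gives a graph Fourier representation that can be truncated for a sparse description.

```python
# Minimal GSP sketch (illustrative; not the authors' pipeline): network harmonics
# are Laplacian eigenvectors of the structural connectome, and the graph Fourier
# transform expresses region-wise EEG activity in that basis.
import numpy as np

rng = np.random.default_rng(4)
n_regions, n_samples = 68, 1000
SC = rng.random((n_regions, n_regions)); SC = (SC + SC.T) / 2; np.fill_diagonal(SC, 0)

L = np.diag(SC.sum(axis=1)) - SC
_, harmonics = np.linalg.eigh(L)              # columns: network harmonics (smooth -> rough)

eeg_sources = rng.standard_normal((n_regions, n_samples))   # region x time activity
gft = harmonics.T @ eeg_sources               # graph Fourier coefficients over time

k = 10                                        # sparse representation: keep k smoothest harmonics
approx = harmonics[:, :k] @ gft[:k]
print("variance captured by first 10 harmonics:",
      round(1 - np.var(eeg_sources - approx) / np.var(eeg_sources), 3))
```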

Send your questions to our Q & A Space: https://neurostars.org/t/workshop-spatiotemporal-dynamics-in-neuroimaging-models-and-analysis-q-a/7608?u=psanzleon



Speakers
avatar for Katharina Glomb

Katharina Glomb

Department of Radiology, Centre Hospitalier Universitaire Vaudois


Tuesday July 21, 2020 4:00pm - 4:30pm CEST
Crowdcast (W05)

4:00pm CEST

W2 S09: Synaptic plasticity rules with physiological calcium levels
Like many forms of long-term synaptic plasticity, spike-timing-dependent plasticity (STDP) depends on intracellular Ca2+ signaling for its induction. Yet, all in vitro studies devoted to STDP have used abnormally high external Ca2+ concentrations. Using a combination of experimental (patch-clamp recording and Ca2+ imaging at CA3-CA1 synapses) and theoretical (Ca2+-based plasticity model) approaches, we show here that the classic STDP rule, in which pairs of single pre- and post-synaptic action potentials induce synaptic modifications, is not valid in the physiological Ca2+ range. Rather, we found that these pairs of single stimuli are unable to induce any synaptic modification in physiological conditions. Plasticity can only be triggered when bursts of postsynaptic spikes are used, or when neurons fire at sufficiently high frequency. In conclusion, the STDP rule is profoundly altered in physiological Ca2+, but specific activity regimes restore a classical STDP profile.

Speakers
JA

Johnatan Aljadeff

Assistant Professor, University of California, San Diego


Tuesday July 21, 2020 4:00pm - 4:30pm CEST
Crowdcast (W02)

4:00pm CEST

W4 S1: MOOSE: Multiscale Object-Oriented Simulation Environment

Speakers
avatar for Upinder Bhalla

Upinder Bhalla

Professor, NCBS/TIFR
Multiscale modelling of neurons especially in synaptic plasticity: including chemical and electrical signaling, traffic and mechanical changes. Tool development for all of these, including GENESIS, MOOSE, FindSim and more.


Tuesday July 21, 2020 4:00pm - 4:30pm CEST
Crowdcast (W04)

4:10pm CEST

W3 S2: Complexity, information, and statistical inference by humans
All animals face the challenge of making inferences about current and future states of the world from uncertain sensory information. One might think that animals would perform better in such tasks by using more complex algorithms and models to extract and process pertinent information. But, in fact, theory predicts circumstances where simpler models of the world are more effective than complex ones, even if the latter more closely approximates the truth.  Using information theory, we demonstrate this point in two ways.  First, we show that when data is sparse or noisy, less complex inferred models give better predictions for the future. In this form of Occam's razor, a model family is more complex if it has more parameters, describes a greater number of distinguishable models, or is more sensitive in its parameter dependence.  Second, even in situations where complex models give better predictions, cognitive and computational costs typically grow with complexity, subjecting the models to a law of diminishing returns.  To conclude, we present experimental results showing that human inference behavior matches our theoretical predictions.
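One standard way to make the complexity penalty sketched above explicit is the asymptotic "razor" of minimum description length and Bayesian model selection, quoted here schematically; the talk's precise formulation may differ.

```latex
% Asymptotic complexity penalty for N samples and a d-parameter family with
% Fisher information I(\theta) (MDL / Bayesian model selection; schematic).
\mathrm{complexity} \;\approx\; \frac{d}{2}\,\ln\frac{N}{2\pi}
  \;+\; \ln \int \! d\theta\, \sqrt{\det I(\theta)}
% The first term counts parameters, the second measures the volume of
% distinguishable models; higher-order terms capture parameter sensitivity.
```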

Speakers
VB

Vijay Balasubramanian

Professor of Physics, University of Pennsylvania


Tuesday July 21, 2020 4:10pm - 4:40pm CEST
Crowdcast (W03)

4:30pm CEST

W05 S06: Modulation of corticothalamic rhythmogenic circuits in depressed patients by rTMS neurostimulation therapy
This session is part of W05: Spatiotemporal Dynamics in Neuroimaging: Models and Analysis

Repetitive transcranial magnetic stimulation (rTMS) is routinely used in the clinic as an alternative therapy for patients with treatment-resistant depression. Very little is known, however, about the physiological basis of rTMS effects, and how these relate to alleviation of symptoms. In this talk I summarize our recent work examining the influence of rTMS on spatiotemporal brain dynamics using an established model of EEG rhythm generation in the corticothalamic system. Comparison of parameter estimates from models fitted to patient EEG data before and after rTMS therapy yields putative physiological changes induced by the intervention. In particular, we find statistically significant reductions in excitatory corticothalamic gains in models fitted to post- as compared to pre- stimulation therapy resting EEG data. Interestingly, these modulations extend well beyond the primary stimulation site in the frontal lobe, indicating a key role for large-scale networks in the transmission of modulatory rTMS effects. Projection to the model’s reduced 3-dimensional parameter space allows interpretation of these rTMS-induced changes in terms of the principal instabilities, and associated spectral signatures, of corticothalamic activity.

Send your questions to our Q & A Space: https://neurostars.org/t/workshop-spatiotemporal-dynamics-in-neuroimaging-models-and-analysis-q-a/7608?u=psanzleon

Speakers
JG

John Griffiths

Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health


Tuesday July 21, 2020 4:30pm - 5:00pm CEST
Crowdcast (W05)

4:30pm CEST

W2 S10: Calcium as trigger of synaptic plasticity
Multiple stimulation protocols have been found to be effective in changing synaptic efficacy by inducing long-term potentiation or depression. In many of those protocols, increases in postsynaptic calcium concentration have been shown to play a crucial role. Here, we discuss a calcium-based model of a synapse in which potentiation and depression are activated whenever calcium crosses distinct thresholds. We show that this model gives rise to a large diversity of spike-timing-dependent plasticity curves, most of which have been observed experimentally in different systems. Moreover, we use the model to investigate synaptic changes elicited by in vivo-like firing, where cells fire irregularly and the timing between pre- and postsynaptic spikes varies. We show that the influence of spike timing on plasticity is weaker than expected from regular stimulation protocols. The model provides a mechanistic understanding of how various stimulation protocols provoke specific synaptic changes through the dynamics of the calcium concentration and thresholds that implement, in simplified fashion, the protein signaling cascades leading to long-term potentiation and long-term depression.
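A minimal sketch of a calcium-threshold rule of the kind described (parameters are illustrative placeholders, not the published model's values): pre- and postsynaptic spikes each add a calcium transient, and the weight drifts up while calcium exceeds the potentiation threshold and down while it exceeds the depression threshold.

```python
# Minimal sketch of a calcium-threshold plasticity rule (illustrative parameters):
# spikes add calcium transients; the weight is pushed toward 1 while Ca > theta_p
# and toward 0 while Ca > theta_d.
import numpy as np

dt, T = 1e-4, 1.0                         # s
theta_d, theta_p = 1.0, 1.3               # depression / potentiation thresholds (a.u.)
c_pre, c_post, tau_ca = 0.6, 1.0, 0.02    # transient amplitudes and Ca decay (s)
gamma_d, gamma_p = 50.0, 100.0            # drift rates toward 0 / 1

pre_spikes = np.arange(0.1, T, 0.1)                 # 10 Hz presynaptic train
post_spikes = pre_spikes + 0.01                     # post fires 10 ms after each pre spike

ca, w = 0.0, 0.5
for step in range(int(T / dt)):
    t = step * dt
    ca += c_pre * np.any(np.abs(pre_spikes - t) < dt / 2)    # presynaptic transient
    ca += c_post * np.any(np.abs(post_spikes - t) < dt / 2)  # postsynaptic transient
    ca -= dt * ca / tau_ca                                   # calcium decay
    w += dt * (gamma_p * (1 - w) * (ca > theta_p) - gamma_d * w * (ca > theta_d))
print("final weight:", round(w, 3))
```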

Speakers
avatar for Michael Graupner

Michael Graupner

Research Scientist, CNRS, Université de Paris


Tuesday July 21, 2020 4:30pm - 5:00pm CEST
Crowdcast (W02)


4:45pm CEST

W3 S3: Neural Circuits Underlying Bayesian Inference in Time Perception
Animals possess the ability to effortlessly and precisely time their actions even though information received from the world is often ambiguous, corrupted by noise, and inadvertently transformed as it traverses neural circuitry. With such uncertainty pervading our nervous systems, we could expect that much of human and animal behavior relies on inference that incorporates an important additional source of information: prior knowledge of the environment. These concepts have long been studied under the framework of Bayesian inference, with substantial corroboration over the last decade that human time perception is consistent with such models. However, we know little about the neural mechanisms that enable Bayesian signatures to emerge in temporal perception. I will present our work on three facets of this problem: how Bayesian estimates are encoded in neural populations, how these estimates are used to generate time intervals, and how prior knowledge for these tasks is acquired and optimized by neural circuits. We trained monkeys to perform an interval reproduction task and found their behavior to be consistent with Bayesian inference. Using insights from electrophysiology and in silico models, we propose a mechanism by which cortical populations encode Bayesian estimates and utilize them to generate time intervals. In the second part of my talk, I will present a circuit model for how temporal priors can be acquired by cerebellar machinery, leading to estimates consistent with Bayesian theory. Based on electrophysiology and anatomy experiments in rodents, I will provide some support for this model. Overall, these findings attempt to bridge insights from normative frameworks of Bayesian inference with potential neural implementations for the acquisition, estimation and production of timing behaviours.
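For reference, the Bayesian computation for interval reproduction can be written in its standard form (this is the generic formulation, not the speaker's specific notation): the noisy measurement of a sample interval is combined with the prior over intervals, and the posterior mean is reported.

```latex
% Standard Bayesian estimate for interval reproduction: t_s is the sample
% interval, t_m its noisy measurement; report the posterior mean
% (the Bayes least-squares estimate).
p(t_s \mid t_m) \;\propto\; p(t_m \mid t_s)\, p(t_s),
\qquad
\hat{t}_s \;=\; \int t_s \, p(t_s \mid t_m)\, dt_s
```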

Speakers
DN

Devika Narain

Assistant Professor, Erasmus MC


Tuesday July 21, 2020 4:45pm - 5:15pm CEST
Crowdcast (W03)

5:00pm CEST

W05 S07: Functional reorganisation of brain dynamics during naturalistic movie watching assessed with Hidden Markov Modeling
This session is part of W05: Spatiotemporal Dynamics in Neuroimaging: Models and Analysis
Description TBA



Speakers
JV

Johan van der Meer

Research Officer, QIMR Berghofer


Tuesday July 21, 2020 5:00pm - 5:30pm CEST
Crowdcast (W05)


5:00pm CEST

W4 S3: Arbor Simulator

Speakers
avatar for Ben Cumming

Ben Cumming

Group Lead, Scientific Software and Libraries, CSCS
I am a developer of Arbor, a simulation tool for networks of multi-compartment cells, designed for efficient simulation on HPC systems. Talk to me about simulation, HPC and model description formats.


Tuesday July 21, 2020 5:00pm - 5:30pm CEST
Crowdcast (W04)

5:00pm CEST

W1 S3: Using lossy representations to find the neural code?
Workshop on Methods of Information Theory in Computational Neuroscience

Sarah Marzen
Claremont Colleges

One of the major questions in neuroscience centers around definition and extraction of the neural code. I will talk about this problem in the abstract, drawing on rate-distortion theory (a less-used branch of information theory) to define the neural code, and will then describe a new method that may allow for practical extraction of the neural code from data.
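For reference, the central object of the rate-distortion framing mentioned above takes the standard form below, where X is the neural or sensory variable, X-hat its lossy representation, and d a distortion measure.

```latex
% The rate--distortion function: the minimum rate (bits per symbol) needed to
% represent X by a lossy code \hat{X} while keeping expected distortion
% below a budget D.
R(D) \;=\; \min_{\,p(\hat{x}\mid x)\;:\;\mathbb{E}[d(X,\hat{X})]\,\le\, D}\; I(X;\hat{X})
```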

Speakers
SM

Sarah Marzen

Assistant Professor of Physics, Pitzer, Scripps, and Claremont McKenna College
information theory and dynamical systems, with an emphasis on prediction


Tuesday July 21, 2020 5:00pm - 5:45pm CEST
Crowdcast (W01)

5:00pm CEST

W10: Dissecting the role of interneurons in mnemonic functions using computational modelling approaches
GABAergic interneurons comprise one of the main types of cells in the mammalian nervous system. They play a critical role in learning and memory processes via inhibition and disinhibition pathways. Interneurons exhibit a variety of structural, molecular, electrophysiological and connectivity features. This high degree of variability makes it quite challenging to delineate their role in mnemonic functions through current experimental approaches. Computational modeling approaches, on the other hand, are a prominent tool used to predict their contribution to acquisition, storage and retrieval of information. The aim of this workshop is to present the latest computational work that highlights the function of interneurons in learning and memory processes. Additionally, we will actively discuss the next steps on how modeling approaches, from single cell to network models level, would benefit future research on interneurons as pertaining to mnemonic functions.
Organizers:
Alexandra Tzilivaki (Chair) Contact: aletzil10@gmail.com / alexandra.tzilivaki@charite.de Twitter: @ATzilivaki
Dr. Spiros Chavlis

Speakers:
Prof. Dr. Roger D. Traub
Prof. Dr. Frances Skinner
Prof. Dr. Wilten Nicola
Dr. Everton Agnes (Vogels lab)
Dr. Jiannis Taxidis



Speakers
avatar for Wilten Nicola

Wilten Nicola

Professor, University of Calgary
avatar for Roger D. Traub

Roger D. Traub

Professor, IBM Thomas J. Watson Research Center, AI Foundations USA.
Roger Traub has been “Einstein Visiting Fellow” since December 2010 and works within the cluster of excellence “Neurocure” at the Charité. He studied mathematics at Princeton University, New Jersey, USA, and then completed his medical studies at the University of Pennsylvania. Roger... Read More →
avatar for Frances Skinner

Frances Skinner

Senior Scientist and Professor, Krembil Research Institute, University Health Network, and University of Toronto
TITLE: Getting the most out of computational models of hippocampal interneurons ABSTRACT: The large variety and diversity of inhibitory cells in our brains makes it challenging to understand brain dynamics and function in general. In particular, using mathematical, computational models... Read More →
avatar for Jiannis Taxidis

Jiannis Taxidis

Postdoctoral Fellow, UCLA
Title: Excitation-inhibition interactions govern the dynamics of spiking sequences and their replays: Combining modeling, electrophysiology and calcium imaging. Abstract: During wakefulness, hippocampal networks generate spiking sequences that encode sensory cues and internally tile... Read More →
EA

Everton Agnes (Vogels lab)

Senior Research Associate, Oxford University


Tuesday July 21, 2020 5:00pm - 9:15pm CEST
Crowdcast (W10)

5:15pm CEST

W10-1. Getting the most out of computational models of hippocampal interneurons
ABSTRACT: The large variety and diversity of inhibitory cells in our brains make it challenging to understand brain dynamics and function in general. In particular, using mathematical, computational models to gain insight is challenged by the high-dimensional and nonlinear character of the models. To tackle this, over the years, we have used a strategy of carefully and tightly interfacing model development, goals, and rationale with experiment.
In this workshop talk, I will describe our work involving the development and use of models of two types of hippocampal interneurons in the context of theta rhythms - the oriens-lacunosum/moleculare (OLM) cell and the interneuron-specific 3 (IS3) cell.


Speakers
avatar for Frances Skinner

Frances Skinner

Senior Scientist and Professor, Krembil Research Institute, University Health Network, and University of Toronto
TITLE: Getting the most out of computational models of hippocampal interneurons ABSTRACT: The large variety and diversity of inhibitory cells in our brains makes it challenging to understand brain dynamics and function in general. In particular, using mathematical, computational models... Read More →


Tuesday July 21, 2020 5:15pm - 6:00pm CEST
Crowdcast (W10)

5:30pm CEST

W05 S08: extended Q & A time
We are scheduling this session to address questions that may have not been answered during the individual talks.

Have a question? Send it to our Q & A Space:
https://neurostars.org/t/workshop-spatiotemporal-dynamics-in-neuroimaging-models-and-analysis-q-a/7608?u=psanzleon


Moderators
JR

James Roberts

Team Head, QIMR Berghofer
avatar for Paula Sanz-Leon

Paula Sanz-Leon

Senior Research Officer, QIMR Berghofer

Tuesday July 21, 2020 5:30pm - 6:00pm CEST
Crowdcast (W05)


5:45pm CEST

W1 S4: Motifs for processes on networks
Workshop on Methods of Information Theory in Computational Neuroscience

Alice Schwarze
University of Washington

The study of motifs in networks can help researchers uncover links between structure and function of networks in biology, ecology, neuroscience, and many other fields. To connect the study of motifs in networks (which is common, e.g., in biology and the social sciences) with the study of motifs in dynamical processes (which is common in neuroscience), we propose to distinguish between "structure motifs" (i.e., graphlets) in networks and "process motifs" (i.e., structured sets of walks) on networks. Using as examples the covariances and correlations in a multivariate Ornstein--Uhlenbeck process on a network, we demonstrate that the distinction between structure motifs and process motifs makes it possible to gain new, quantitative insights into mechanisms that contribute to important functions of dynamical systems on networks.
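For reference, the example process named above has a standard closed form that makes the decomposition concrete: for a multivariate Ornstein-Uhlenbeck process on a network, the stationary covariance that process motifs decompose solves a Lyapunov equation.

```latex
% Multivariate Ornstein--Uhlenbeck process on a network, dX = -A X\,dt + B\,dW:
% the stationary covariance \Sigma (whose entries process motifs decompose)
% satisfies the Lyapunov equation
A\,\Sigma + \Sigma\,A^{\mathsf{T}} \;=\; B\,B^{\mathsf{T}}
```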

Speakers
avatar for Alice Schwarze

Alice Schwarze

University of Washington


Tuesday July 21, 2020 5:45pm - 6:15pm CEST
Crowdcast (W01)

6:00pm CEST

W06 S3: Identifying the molecular signatures of CA1 projection classes
The CA1 region of the hippocampus has been implicated in learning, memory, and spatial representations. CA1 cells are also the main output of the hippocampal region. CA1 reconstructions suggest different projection types. In this workshop I will present how we investigate CA1 by combining previous CA1 reconstructions, retrograde injections and Allen Brain data to understand the different CA1 projection types.

Speakers
avatar for Gabriela Michel

Gabriela Michel

Postdoctoral Research Associate, Janelia Research Campus


Tuesday July 21, 2020 6:00pm - 6:30pm CEST
Crowdcast (W06)


6:00pm CEST

W10-2 Cellular mechanisms of the visual EEG alpha rhythm.
Abstract: Using a combination of in vitro slice and computational methods, we describe a population rhythm at about 10 Hz in primary visual cortex that is not present in neighboring cortical areas. This rhythm appears after a period of activation by kainate (corresponding to visual stimulation), followed by blockade of AMPA receptors and h-current (corresponding to removal of stimulation). The rhythm is generated by layer 4 pyramidal neurons (not by spiny stellate cells), interacting via NMDA receptors with rapid kinetics. Synaptic inhibition is present but does not determine the alpha period. Layer 4 alpha propagates to other cortical layers and acts functionally to disconnect spiny stellates from both deep and superficial pyramidal neurons.

Link to published paper: https://www.nature.com/articles/s42003-020-0947-8



Speakers
avatar for Roger D. Traub

Roger D. Traub

Professor, IBM Thomas J. Watson Research Center, AI Foundations USA.
Roger Traub has been “Einstein Visiting Fellow” since December 2010 and works within the cluster of excellence “Neurocure” at the Charité. He studied mathematics at Princeton University, New Jersey, USA, and then completed his medical studies at the University of Pennsylvania. Roger... Read More →


Tuesday July 21, 2020 6:00pm - 6:45pm CEST
Crowdcast (W10)

6:15pm CEST

W1 S5: Inference of topology and the nature of synapses in neuronal networks
Workshop on Methods of Information Theory in Computational Neuroscience

Fernando da Silva Borges
Federal University of ABC

The characterization of neuronal connectivity is one of the most important matters in neuroscience. In this work, we show that a recently proposed informational quantity, the causal mutual information, employed with an appropriate methodology, can be used not only to correctly infer the direction of the underlying physical synapses, but also to identify their excitatory or inhibitory nature, considering bivariate time series that are easy to handle and measure. The success of our approach relies on a surprising property found in neuronal networks by which non-adjacent neurons do "understand" each other (positive mutual information); however, this exchange of information is not capable of causing an effect (zero transfer entropy). Remarkably, inhibitory connections, responsible for enhancing synchronization, transfer more information than excitatory connections, known to enhance entropy in the network. We also demonstrate that our methodology can be used to correctly infer the directionality of synapses even in the presence of dynamic and observational Gaussian noise, and is also successful in providing the effective directionality of intermodular connectivity when only mean fields can be measured.
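For reference, the two standard information-theoretic quantities contrasted above are written below; the causal mutual information itself is a specific quantity defined in the authors' work and is not reproduced here.

```latex
% Standard forms of mutual information and transfer entropy between
% time series X and Y.
I(X;Y) \;=\; \sum_{x,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)},
\qquad
TE_{X\to Y} \;=\; \sum p(y_{t+1}, y_t, x_t)\,\log\frac{p(y_{t+1}\mid y_t, x_t)}{p(y_{t+1}\mid y_t)}
```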

Speakers
avatar for Fernando S Borges

Fernando S Borges

Postdoctoral research, Department of Physiology & Pharmacology, SUNY Downstate Health Sciences
Research in computational neuroscience with 28 publications in peer reviewed journals, and PI/co-PI in 7 research grants. Lectured undergraduate courses, and organized Courses on Computational Modeling. Investigates neural network models with research mainly focused on neuronal synchronization... Read More →


Tuesday July 21, 2020 6:15pm - 6:45pm CEST
Crowdcast (W01)

6:15pm CEST

W3 S4: Working Memory 2.0
Working memory is the sketchpad of consciousness, the fundamental mechanism the brain uses to gain volitional control over its thoughts and actions. For the past 50 years, working memory has been thought to rely on cortical neurons that fire continuous impulses that keep thoughts “online”. However, new work from our lab has revealed more complex dynamics. The impulses fire sparsely and interact with brain rhythms of different frequencies. Higher frequency gamma (> 35 Hz) rhythms help carry the contents of working memory while lower frequency alpha/beta (~8-30 Hz) rhythms act as control signals that gate access to and clear out working memory. In other words, a rhythmic dance between brain rhythms may underlie your ability to control your own thoughts.

Speakers
EK

Earl K. Miller

Picower Professor of Neuroscience, MIT


Tuesday July 21, 2020 6:15pm - 6:45pm CEST
Crowdcast (W03)

6:30pm CEST

W4 Discussion 1: Where should we be in 5 years and what tools/resources are missing?
General discussion open to everyone. Suggested topics:

      • Where will/should neuronal simulation technology be in 5 years?
      • What tools/resources are currently missing in the field?    

Moderators
KD

Kael Dai

Mindscope Program at the Allen Institute, Seattle, USA
avatar for Salvador Dura-Bernal

Salvador Dura-Bernal

Assistant Professor, State University of New York (SUNY) Downstate
avatar for Padraig Gleeson

Padraig Gleeson

University College London, UK

Tuesday July 21, 2020 6:30pm - 7:00pm CEST
Crowdcast (W04)


6:45pm CEST

W10-3 Excitation-inhibition interactions govern the dynamics of spiking sequences and their replays: Combining modeling, electrophysiology and calcium imaging.
During wakefulness, hippocampal networks generate spiking sequences that encode sensory cues and internally tile the gaps in space or time between them, forming memory maps of temporally related experiences. During sleep, the same networks generate sharp wave ripples (SWRs), i.e. synchronous high frequency oscillations that harbor fast-scale replays of those sequences, involved in memory consolidation. What population dynamics and excitation-inhibition interactions govern the generation of awake sequences or their SWR replays? I will first present a network model of hippocampal areas CA3-CA1 that accurately captures SWR features and reveals that strong, fast-decaying, recurrent inhibition is crucial for generating these oscillations. This network coupled with a model of cortical slow oscillations (i.e. <1 Hz transitions between activated UP and silent DOWN states during sleep) demonstrates that UP states regulate SWR generation, through the excitation-inhibition balance they induce via multiple synaptic pathways. Can these reactivations be detected extracellularly? Extending the CA3-CA1 network into a biophysically realistic model of local field potentials (LFP) during SWRs, combined with silicon probe recordings in rats, demonstrates that the spiking patterns of CA1 ensembles are reflected in spatiotemporal patterns of multi-site LFP. Finally, through recent in vivo two-photon calcium imaging in CA1 of head-fixed mice, I will show that CA1 networks combine stable and flexible representations with different learning-related dynamics, which may be a crucial feature for the hippocampus to construct maps of fixed external cues as well as their changing temporal relationships.

Speakers
avatar for Jiannis Taxidis

Jiannis Taxidis

Postdoctoral Fellow, UCLA
Title: Excitation-inhibition interactions govern the dynamics of spiking sequences and their replays: Combining modeling, electrophysiology and calcium imaging. Abstract: During wakefulness, hippocampal networks generate spiking sequences that encode sensory cues and internally tile... Read More →


Tuesday July 21, 2020 6:45pm - 7:30pm CEST
Crowdcast (W10)

6:50pm CEST

W3 S5: Reconciling the Spatial and Mnemonic Views of the Hippocampus
For decades, our understanding of the hippocampus has been framed by two landmark discoveries: the discovery by Scoville and Milner that hippocampal damage causes profound and persistent amnesia and the discovery by O’Keefe and Dostrovsky of hippocampal place cells in rodents. However, it has been unclear to what extent spatial representations are present in the primate brain and how to reconcile these representations with the known mnemonic function of this region. I will discuss a series of experiments that have examined neural activity in the hippocampus and adjacent entorhinal cortex in monkeys performing behavioral tasks including spatial memory tasks in a virtual environment. These data demonstrate that spatial representations can be identified in the primate hippocampus, and that behavioral task structure has a significant influence on hippocampal activity, with neurons responding to all salient events within the task. Together, these data are consistent with the idea that activity in the hippocampus tracks ongoing experience in support of memory formation.

Speakers
EA

Elizabeth A. Buffalo

Professor, University of Washington


Tuesday July 21, 2020 6:50pm - 7:20pm CEST
Crowdcast (W03)

7:25pm CEST

W3 S6: Characterising Sensory and Abstract Representations in Neural Ensembles
Many recent advances in artificial intelligence (AI) are rooted in visual neuroscience. However, ideas from more complex tasks, like decision-making, are used less. At the same time, decision-making tasks that are hard for AI are easy for humans. Thus, understanding human brain dynamics during these tasks could improve AI performance.
Here we modelled some of these dynamics. We investigated how they flexibly represented and distinguished between sensory processing and categorization in two sensory domains: motion direction and color. We used two different approaches for understanding neural representations. We compared brain responses to 1) the geometry of a sensory or category domain (domain selectivity) and 2) predictions from deep neural networks (computation selectivity). Both approaches gave us similar results. Using the first approach, we found that neural representations changed depending on context. We then trained deep recurrent neural networks to perform the same tasks as the animals. Using the second approach, we found that computations in different brain areas also changed flexibly depending on context. Both approaches yielded the same conclusion: decision making in the color domain appeared to rely more on sensory processing, while in the motion domain it relied more on abstract representations. Finally, using biophysical modeling and data from a spatial delayed response task, we characterized cortical connectivity in neural ensembles and explained a well-known behavioral effect in psychophysics known as the oblique effect.
Overall, this talk will introduce an approach for studying the computations and neural representations taking place in neural ensembles by exploiting a combination of machine learning, biophysics and brain imaging.

Speakers

Dimitrios Pinotsis

Associate Professor & Research Affiliate, University of London - City & MIT


Tuesday July 21, 2020 7:25pm - 7:55pm CEST
Crowdcast (W03)

7:30pm CEST

W4 S6: The NEURON Simulator

Speakers

Lia Eggleston

Yale University

Robert McDougal

Assistant Professor, Yale University, USA
I'm an Assistant Professor in the Health Informatics division of Biostatistics, and a developer for NEURON and ModelDB. Computationally and mathematically, I'm interested in dynamical systems modeling and applications of machine learning and NLP to gain insights into the nervous system... Read More →


Tuesday July 21, 2020 7:30pm - 8:00pm CEST
Crowdcast (W04)

7:30pm CEST

W10-4 Learning complementary inhibitory weight profiles to allow flexible switching.
Multiple types of inhibitory interneurons are found in cortical areas, with stereotypical connectivity motifs that may follow specific plasticity rules. Yet, the combined effect of this diversity on postsynaptic dynamics has been largely unexplored. In this talk, I will present a simple circuit model with a single postsynaptic model neuron receiving tuned excitatory connections alongside inhibition from two plastic populations. In this circuit, synapses from each inhibitory population change according to distinct plasticity rules. I'll present results with different combinations of three rules: Hebbian, anti-Hebbian and homeostatic scaling. Depending on the inhibitory plasticity rule, synapses become unspecific (flat), anti-correlated to, or correlated with excitatory synapses. Crucially, the neuron's receptive field, i.e., its response to presynaptic stimuli, depends on the modulatory state of inhibition. When both inhibitory populations are active, inhibition balances excitation, resulting in uncorrelated postsynaptic responses regardless of the inhibitory tuning profiles, with only transient responses weakly revealing preferred inputs. Modulating the activity of a given inhibitory population produces strong correlations to either preferred or non-preferred inputs, in line with recent experimental findings that show dramatic context-dependent changes of neurons' receptive fields. These results confirm that a neuron's receptive field doesn't follow directly from the weight profiles of its presynaptic afferents, and illustrate how plasticity rules in various cell types can interact to shape cortical circuit motifs and their dynamics.
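For readers unfamiliar with the rule families named above, here is a minimal sketch of generic textbook forms of Hebbian, anti-Hebbian and homeostatic scaling updates for inhibitory weights. The learning rates, the target rate and the particular combination applied to the two populations are illustrative assumptions, not the specific rules or parameters used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs = 50
eta, eps, r_target = 1e-3, 1e-4, 5.0            # illustrative learning rates and target rate (Hz)

def hebbian(w, pre, post):
    """Strengthen inhibitory synapses in proportion to pre/post co-activity."""
    return w + eta * pre * post

def anti_hebbian(w, pre, post):
    """Weaken inhibitory synapses under pre/post co-activity (sign-flipped Hebbian)."""
    return np.clip(w - eta * pre * post, 0.0, None)

def homeostatic_scaling(w, post):
    """Scale all inhibitory weights multiplicatively to push the postsynaptic rate toward a set point."""
    return w * (1.0 + eps * (post - r_target))

# Two plastic inhibitory populations, each following its own rule (one possible combination).
w_pop1 = rng.uniform(0.1, 1.0, n_inputs)
w_pop2 = rng.uniform(0.1, 1.0, n_inputs)
pre_rates = rng.poisson(5.0, n_inputs).astype(float)   # presynaptic rates (Hz)
post_rate = 8.0                                        # postsynaptic rate (Hz)
w_pop1 = hebbian(w_pop1, pre_rates, post_rate)
w_pop2 = homeostatic_scaling(anti_hebbian(w_pop2, pre_rates, post_rate), post_rate)
```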


Speakers
EA

Everton Agnes

Senior Research Associate, Oxford University (Vogels lab)


Tuesday July 21, 2020 7:30pm - 8:15pm CEST
Crowdcast (W10)

8:15pm CEST

W10-5 Fast, Compressible Learning in the Hippocampus using Interneuron Sequences
The hippocampus is able to rapidly learn incoming information, even if that information is only observed once. Furthermore, this information can be replayed in a compressed format in either forward or reverse mode during sharp-wave ripples (SPWRs). We leveraged state-of-the-art techniques in training recurrent spiking networks to demonstrate how primarily interneuron networks can achieve the following: (1) generate internal theta sequences to bind externally elicited spikes in the presence of inhibition from the medial septum; (2) compress learned spike sequences in the form of a SPWR when septal inhibition is removed; (3) generate and refine high-frequency assemblies during SPWR-mediated compression; and (4) regulate the inter-SPW interval timing between SPWRs in ripple clusters. From the fast timescale of neurons to the slow timescale of behaviors, interneuron networks serve as the scaffolding for one-shot learning by replaying, reversing, refining, and regulating spike sequences.



Speakers

Wilten Nicola

Professor, University of Calgary


Tuesday July 21, 2020 8:15pm - 9:00pm CEST
Crowdcast (W10)

 
Wednesday, July 22
 

9:00am CEST

W1 S6: Structure of information to understand the physical basis of consciousness
Workshop on Methods of Information Theory in Computational Neuroscience

Naotsugu Tsuchiya
Monash University

Structure of information to understand the physical basis of consciousness

One of the biggest mysteries in science is the origin of subjective conscious experience. In modern investigations of consciousness, researchers distinguish the level and the contents of consciousness. The former concerns the global state of conscious creatures, which ranges from very low in coma, vegetative states, deep dreamless sleep, and deep general anesthesia to high in the fully wakeful state. The latter concerns the contents that one experiences at a given moment at a high level of consciousness, sometimes called qualia, covering all sensory and any other experiences.
In both senses, consciousness has been difficult to relate to electrochemical physical interactions in the brain. Meanwhile, informational structure, which is derived from neural activity and connectivity, is more promising as a candidate that is isomorphic to consciousness.
In this talk, I will explain three approaches that try to characterize 1) structures of information, 2) structures of consciousness, and 3) relationship between these two structures, primarily drawing on the approach with Integrated Information Theory [Tononi 2004 BMC, Tononi 2016 Nat Rev Neuro, Oizumi 2016 PNAS, Haun 2018, Leung 2020 bioRxiv] and Category Theory [Spivak 2011, Tsuchiya 2016 Neurosci Res, Tsuchiya 2020 OSF].


Speakers

Naotsugu Tsuchiya

Monash University


Wednesday July 22, 2020 9:00am - 9:45am CEST
Crowdcast (W01)

9:45am CEST

W1 S7: Exact Inference of Linear Dependence Between Multiple Autocorrelated Time Series
Workshop on Methods of Information Theory in Computational Neuroscience

Oliver Cliff
The University of Sydney

Exact Inference of Linear Dependence Between Multiple Autocorrelated Time Series

Inferring linear dependence between time series is central to the study of dynamics, and has significant consequences for our understanding of natural and artificial systems. Unfortunately, traditional hypothesis tests often yield spurious associations (type I errors) or omit causal relationships (type II errors) when used to infer directed or multivariate dependencies in time-series data. Here we show that this problem is due to autocorrelation in the analysed time series -- a property that is ubiquitous across a diverse range of applications, from brain dynamics to climate change, and can be exacerbated by digital filtering. This insight enabled us to derive the first exact hypothesis tests for a large family of multivariate linear-dependence measures, including Granger causality and mutual information. Using numerical simulations and fMRI brain recordings, we show that our tests maintain the expected false-positive rate with minimally-sufficient samples, while demonstrating that asymptotic likelihood-ratio tests can induce unbounded statistical errors. Our findings suggest that many time-series dependencies in the scientific literature may have been, and may continue to be, spuriously reported or missed if our testing procedure is not widely adopted. (Cliff et al., arXiv:2003.03887)
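As a toy illustration of the failure mode described above (not the exact tests derived in this work; all numbers are arbitrary), two independent AR(1) processes tested with a naive Pearson correlation test, which assumes independent samples, yield far more false positives than the nominal 5% level:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def ar1(n, phi, rng):
    """Generate an AR(1) time series x_t = phi * x_{t-1} + noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

n_trials, n_samples, phi = 1000, 200, 0.9
false_positives = 0
for _ in range(n_trials):
    x, y = ar1(n_samples, phi, rng), ar1(n_samples, phi, rng)  # independent by construction
    _, p = stats.pearsonr(x, y)        # naive test assuming i.i.d. samples
    false_positives += p < 0.05

print(f"Empirical false-positive rate: {false_positives / n_trials:.2f} (nominal 0.05)")
```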

Speakers

Oliver Cliff

School of Physics, The University of Sydney


Wednesday July 22, 2020 9:45am - 10:30am CEST
Crowdcast (W01)

10:00am CEST

W2 S11: Modeling calcium signaling in live animals
The vast majority of previous experimental and theoretical work on calcium signalling has been in cell lines, cultured cells, or, more recently, in whole organs. The underlying assumption of these studies is that the mechanisms that control calcium signalling in a live animal are essentially similar, and one can extrapolate from one to the other. Although this assumption is, to a large extent, valid and useful, recent measurements of cytosolic calcium oscillations in salivary acinar cells from a live mouse have necessitated a major rethink of the mechanisms underlying whole-cell calcium responses and water transport in salivary cells. We shall present these new experimental data, and show how previous models have needed to be significantly modified in order to understand and explain these new results.

Speakers

James Sneyd

Professor, University of Auckland
Calcium dynamics, mathematical modelling of physiological processes, cell signalling, dynamical systems, oscillations and travelling waves in physiology and neuroscience.


Wednesday July 22, 2020 10:00am - 10:30am CEST
Crowdcast (W02)

10:30am CEST

W2 S12: Calcium signalling in cellular space
Space – the not-so-final frontier. Cellular systems, ranging from atrial tissue to gastro-intestinal cellular walls and salivary glands, all exploit spatial distributions of signaling mechanisms. I will present investigations of the arrangements of calcium (Ca2+) transporters within varied geometric configurations for several cell types, illustrating the importance of the cellular spatial frontier. For instance, arrangements of Ca2+ release sites (ryanodine receptors in cardiac tissue, inositol trisphosphate receptors in other tissues) lead to either waves of Ca2+ disrupting coupled electrical control, essential depletions of endoplasmic reticulum Ca2+ reservoirs triggering pacemaking contractions, or cell-wide spatially organised induction of ion and fluid transport. These studies were all performed computationally, exploiting the precision available to mathematical and computing methods to complement experimental techniques in teasing out the subtle and important influence of space.

Speakers

Wednesday July 22, 2020 10:30am - 11:00am CEST
Crowdcast (W02)

11:00am CEST

W1 S8: Developing and testing the concept of intersection information using PID
Workshop on Methods of Information Theory in Computational Neuroscience

Marco Celotto
Istituto Italiano di Tecnologia (IIT)

Developing and testing the concept of intersection information using PID

To crack the neural code used during a perceptual decision-making process, it is fundamental to determine not only how information about sensory stimuli (encoding stage) is encoded in neural activity, but also how this information is read out to inform the behavioral decision [1].
In previous work, our group used the concept of redundancy, as defined within the mathematical framework of Partial Information Decomposition (PID), to develop an information-theoretic measure capable of quantifying the part of the information that lies at the intersection between the mutual information of the stimulus S and the neural response R, and the mutual information of R and the consequent behavioral choice C [2]. We called this measure "information-theoretic intersection information", or II(S;R;C).
In this talk, we present our latest progress on how to use II(S;R;C) to study neural coding. We examine in detail its conceptual properties, and we show the results it provides both on simulated and on real neural data (the latter to test the role of spike timing in perceptual decision making). Furthermore, we discuss how to test the significance of the measure through a proper statistical null hypothesis.
[1] Panzeri et al. 2017 Neuron, [2] Pica et al. 2017 NIPS


Speakers

Marco Celotto

PhD student, Istituto Italiano di Tecnologia (IIT)
I am a first-year PhD student in the Neural Computation Lab of Istituto Italiano di Tecnologia (IIT). I am interested in every aspect of theoretical and computational neuroscience. Up to this moment, my research mainly focused on defining and testing new information-theoretic measures... Read More →


Wednesday July 22, 2020 11:00am - 11:30am CEST
Crowdcast (W01)

11:30am CEST

W1 S9: Exploring relevant spatiotemporal scales for analyses of brain dynamics
Workshop on Methods of Information Theory in Computational Neuroscience

Xenia Kobeleva
University Hospital Bonn

Exploring relevant spatiotemporal scales for analyses of brain dynamics

Introduction: The brain switches between cognitive states at high speed by rearranging interactions between distant brain regions. Using analyses of brain dynamics, neuroimaging researchers have been able to further describe this dynamic brain–behavior relationship. However, the diversity of methodological choices in analyses of brain dynamics impedes comparisons between studies, reducing their reproducibility and generalizability. A key choice is the spatiotemporal scale of the analysis, which includes both the number of regions (spatial scale) and the sampling rate (temporal scale). Choosing a suboptimal scale might lead either to loss of information or to inefficient analyses with increased noise. Therefore, the aim of this study was to assess the effect of different spatiotemporal scales on analyses of brain dynamics and to determine which spatiotemporal scale retrieves the most relevant information on dynamic spatiotemporal patterns of brain regions.
Methods: We compared the effect of different spatiotemporal scales on the information content of the evolution of spatiotemporal patterns using empirical as well as simulated timeseries. Empirical timeseries were extracted from the Human Connectome Project [Van Essen et al., 2013]. We then created a whole-brain mean-field model of neural activity [Deco et al., 2013] resembling the key properties of the empirical data by fitting the global synchronization level and measures of dynamical functional connectivity. This resulted in different spatiotemporal scales, with spatial scales from 100 to 900 regions and temporal scales varying from milliseconds to seconds. With a variation of an eigenvalue analysis [Deco et al., 2019], we estimated the number of spatiotemporal patterns over time and then extracted these patterns with an independent component analysis. The evolution of these patterns was then compared between scales with regard to the richness of switching activity (corrected for the total number of patterns) using the measure of entropy. Given the probability of the occurrence of a pattern over time, we defined the entropy as a function of the probability of patterns.
Results: Using the entropy measure, we were able to specify both optimal spatial and temporal scales for the evolution of spatiotemporal patterns. The entropy followed an inverted U-shaped function with the highest value at an intermediate parcellation of n = 300. The entropy was highest at a temporal scale of around 200 ms.
Conclusions and discussion: We investigated which spatiotemporal scale contains the highest information content for analyses of brain dynamics. By combining whole-brain computational modelling with an estimation of the number of resulting patterns, we were able to analyze whole-brain dynamics at different spatial and temporal scales. From a probabilistic perspective, we explored the entropy of the probability of the resulting brain patterns, which was highest at a parcellation of n = 300. Our results indicate that although more spatiotemporal patterns with increased heterogeneity are found with higher parcellations, the most relevant information on brain dynamics is captured when using a spatial scale of n = 200 and a temporal scale of 200 ms. Our results therefore provide guidance for researchers on choosing the optimal spatiotemporal scale in studies of brain dynamics.
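For concreteness, the entropy measure used above can be read as the Shannon entropy of the pattern-occurrence probabilities (notation mine):

\[
H = -\sum_{k=1}^{K} p_k \log p_k, \qquad p_k = \frac{\#\{\text{time points assigned to pattern } k\}}{\#\{\text{time points}\}},
\]

where \(K\) is the total number of extracted patterns; as noted in the Methods, \(H\) is compared across scales after correcting for \(K\).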

Speakers

Xenia Kobeleva

Assistant Professor, Ruhr-University Bochum
Expert in clinical computational neuroscience - bringing models to the clinic. Happy to talk!


Wednesday July 22, 2020 11:30am - 12:00pm CEST
Crowdcast (W01)

11:30am CEST

W2 S14: Antipodes of calcium signalling: subcellular microdomains and whole cell calcium spikes
There is now compelling evidence that subcellular signalling microdomains are crucial for encoding and relaying calcium (Ca2+) signals. The STIM-Orai system constitutes a critical illustration of this concept. Upon depletion of the endoplasmic reticulum (ER), both proteins move to so-called ER-PM junctions, where the plasma membrane (PM) closely apposes the ER membrane with a separation of approximately 15nm. I will present the first three-dimensional model of such ER-PM junctions and will show how the spatial organisation of Orai channels in the PM and Ca2+ pumps in the ER membrane shape the local Ca2+ signature in a non-trivial manner, which has direct consequences for downstream Ca2+ signalling. The coordination of Ca2+ increases in such microdomains can lead to whole cell Ca2+ oscillations. Given their intrinsic stochasticity, we have pursued a statistical analysis of Ca2+ spikes using concepts from stochastic point processes and Bayesian inference. I will show how we can quantify Ca2+ spiking at the single cell level in the presence of dynamic stimulation. This will not only move us closer to characterizing the role of cellular heterogeneity in Ca2+ signalling, but will also highlight how we can use whole-cell Ca2+ signals to infer properties of the subcellular Ca2+ signalling toolkit.

Speakers

Wednesday July 22, 2020 11:30am - 12:00pm CEST
Crowdcast (W02)

12:00pm CEST

W1 S10: Information flow under visual cortical magnification: Gaussianization estimates and theoretical results
Workshop on Methods of Information Theory in Computational Neuroscience

Jesus Malo
Universitat de Valencia

Information flow under visual cortical magnification: Gaussianization estimates and theoretical results

Computations done by individual neural layers along the visual pathway (e.g. opponency at chromatic channels and their saturation, spatial filtering and the nonlinearities of the texture sensors at visual cortex) have been suggested to be organized for optimal information transmission. However, the efficiency of these layers has not been measured when they operate together on colorimetrically calibrated natural images and using multivariate information-theoretic units over the joint array of spatio-chromatic responses.
In this work we present a statistical tool to address this question in an appropriate (multivariate) way. Specifically, we propose an empirical estimate of the information transmitted through a network based on a recent Gaussianization technique. Our Gaussianization reduces the challenging multivariate density estimation problem to a set of simpler univariate estimations. Here we extend our previous results [Gomez et al. J.Neurophysiol.2020, arxiv:1907.13046] and [J.Malo arxiv:1910.01559] to address the problem posed by cortical magnification. Cortical magnification implies an expansion of the dimensionality of the signal, and here the proposed total correlation estimator is compared to theoretical predictions that work in scenarios that do not preserve the dimensionality.
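For reference, the multivariate quantity at stake here is the total correlation (multi-information) of the joint response vector \(\mathbf{x} = (x_1, \dots, x_d)\); this is the standard definition, not necessarily the exact form of the proposed estimator:

\[
T(\mathbf{x}) = \sum_{i=1}^{d} H(x_i) \;-\; H(x_1, \dots, x_d).
\]

The Gaussianization step is what makes the joint term tractable, reducing the estimate to a set of simpler univariate operations, as described above.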
In psychophysically tuned networks with Poisson noise, and assuming sensors of equivalent signal/noise quality at different neural layers, results on transmitted information show that: (1) progressively deeper representations are better in terms of the amount of information captured about the input, (2) the transmitted information up to the cortical representation follows the PDF of natural scenes over the chromatic and achromatic dimensions of the stimulus space, (3) the contribution of spatial transforms to capture visual information is substantially bigger than the contribution of chromatic transforms, and (4) nonlinearities of the responses contribute substantially to the transmitted information but less than the linear transforms.
A. Gomez-Villa, M. Bertalmio and J. Malo (2020). Visual information flow in Wilson–Cowan networks. J. Neurophysiology.
J. Malo (2020). Spatio-Chromatic Information available from different Neural Layers via Gaussianization.


Speakers

Jesus Malo

Universitat de Valencia


Wednesday July 22, 2020 12:00pm - 12:30pm CEST
Crowdcast (W01)

12:30pm CEST

W2 S15: Modelling concepts for IP3-induced Ca2+ signaling
The task of theory in cell biology is to predict behavior from system parameters. The general features of IP3-induced Ca2+ spiking we find experimentally are: (1) interspike intervals are random; (2) cell-to-cell variability is very large; (3) the agonist concentration-response relation of the average interspike interval is exponential; (4) the moment relation between the average interspike interval and its standard deviation is linear; (5) the moment relation and the agonist sensitivity are cell type and pathway-specific and not subject to cell variability. 
Identification of the mathematical structure to which a system corresponds is the first and most important step in the development of a theory. The mathematical (some say dynamical) structure corresponding to IP3-induced Ca2+ signaling is an array of noisy excitable (maybe bistable) elements coupled by a diffusion process. There is strong coupling within clusters and weak coupling between clusters. Global feedbacks and processes set long time scales. Starting from there, we can develop a simple theory showing the moment relation and its robustness properties. We find that on a single-cell level in several cases, long time scales may arise from small spike probabilities and not from slow processes, but slow processes are required to obtain a small coefficient of variation for the interspike interval.
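Features (3) and (4) above can be restated symbolically (the symbols and the sign of the exponential dependence are my own shorthand):

\[
\langle T_{\mathrm{ISI}} \rangle \sim e^{-\gamma\,[\mathrm{agonist}]}, \qquad \sigma_{\mathrm{ISI}} = \alpha\,\langle T_{\mathrm{ISI}} \rangle + \beta,
\]

with the slope \(\alpha\) and the sensitivity \(\gamma\) being cell-type- and pathway-specific but robust to cell-to-cell variability (feature 5).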

Speakers

Wednesday July 22, 2020 12:30pm - 1:00pm CEST
Crowdcast (W02)

1:00pm CEST

W2 S16: Statistical analysis and data-driven modeling of type 1 and type 2 IP3R
Patch clamp recordings enable us to watch a single inositol trisphosphate receptor (IP3R) in action. At first glance we only see that the IP3R opens and closes stochastically, but a closer look reveals that the channel alternates between two different levels of activity – a highly active mode where the IP3R opens and closes frequently and a nearly inactive mode in which the channel is mostly closed. Applying statistical change point analysis to the most comprehensive single channel data set currently available highlights the importance of this observation: we find that the dynamics of the IP3R is entirely regulated by switching between these two modes. In order to build a mathematical model based on this underlying principle, the hierarchical Markov model is developed and then fitted to type 1 and type 2 IP3R data for a wide range of concentrations of IP3, Ca2+ and ATP. I will present this model and will especially emphasize the insights into the biophysics of the IP3R that were gained along the way.
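A minimal caricature of the modal-gating picture described above: a two-level (hierarchical) Markov chain in which a slow process switches between a highly active and a nearly inactive mode, while a fast process gates the channel within the current mode. All rate parameters below are assumed for illustration; this is not the fitted model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Slow layer: stochastic switching between modes (assumed per-step probabilities).
p_switch = {"active": 0.002, "inactive": 0.001}
# Fast layer: open/close transition probabilities, conditional on the current mode (assumed).
p_open  = {"active": 0.30, "inactive": 0.005}
p_close = {"active": 0.20, "inactive": 0.50}

mode, is_open, open_samples = "active", False, 0
n_steps = 200_000
for _ in range(n_steps):
    if rng.random() < p_switch[mode]:                  # slow mode switching
        mode = "inactive" if mode == "active" else "active"
    if is_open:                                        # fast gating within the current mode
        is_open = rng.random() >= p_close[mode]
    else:
        is_open = rng.random() < p_open[mode]
    open_samples += is_open

print("Overall open probability:", open_samples / n_steps)
```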

Speakers

Ivo Siekmann

Senior Lecturer, Liverpool John Moores University


Wednesday July 22, 2020 1:00pm - 1:30pm CEST
Crowdcast (W02)

1:30pm CEST

W2 S17: Modelling inositol trisphosphate receptor-mediated calcium oscillations
Oscillations in cytoplasmic calcium concentration ([Ca2+]), mediated by repetitive openings and closings of inositol trisphosphate receptor (IP3R) channels situated on the membrane of the endo/sarcoplasmic reticulum, have been found to be important for modulating many physiological processes. Decades of effort have gone into elucidating the mechanisms of such calcium oscillations, focusing on understanding (1) the dynamical properties of single IP3Rs in response to the binding of various ligands, such as IP3, Ca2+ and ATP; and (2) how IP3R dynamics contribute to the formation of calcium oscillations. In this talk, I will present how we used mathematical modelling and simulations to advance our understanding of these problems. The talk will focus on three aspects: (1) the introduction of a mathematical model of the IP3R which we established by fitting to single channel measurements; (2) stochastic simulation of localized calcium puffs using our IP3R model; and (3) deterministic approximation of stochastic modeling in the application of predicting calcium oscillations in airway smooth muscle cells. Finally, I will pose some existing challenges for modeling calcium oscillations, which I hope will inspire constructive discussions at the workshop.

Speakers

Wednesday July 22, 2020 1:30pm - 2:00pm CEST
Crowdcast (W02)

2:45pm CEST

W7: Dynamics of Rhythm Generation: Role of Ionic Pumps, Exchangers, and Ion Homeostasis
This workshop has moved to Zoom (all W7 sessions below take place in this meeting room):
Topic: W7 workshop Meeting room
Time: Jul 22, 2020, 08:00 AM Eastern Time (US and Canada)

Join Zoom Meeting
https://zoom.us/j/99358614878?pwd=UzBwWXhkMWJGQnNPMFZ4Y2FaNmdydz09

Meeting ID: 993 5861 4878
Passcode: 871087



Speakers

Gareth Miles

Professor of Neuroscience, Head of School, School of Psychology and Neuroscience, University of St Andrews

Patrick Whelan

Professor, University of Calgary, Hotchkiss Brain Institute, Calgary, Alberta, Canada
Spinal Cord Networks, Descending control of locomotion, Neuromodulation

Ronald Calabrese

Department of Biology, Emory University

Gennady Cymbalyuk

Professor, Neuroscience Institute, Georgia State University

Ghanim Ullah

Associate Professor of Physics, University of South Florida


Wednesday July 22, 2020 2:45pm - 7:30pm CEST
Crowdcast (W07)

3:00pm CEST

W2 S18: Analyzing and modeling the kinetics and evolution of Ca2+-permeable β amyloid pores associated with Alzheimer’s disease
Extensive evidence implicates cation-permeable plasma membrane pores formed by oligomeric forms of β amyloid (Aβ) in cytotoxicity during Alzheimer's disease (AD) [Ullah et al., PLoS One, 2015, 10(9); Demuro et al., J. Cell. Biol. 2011, 195(2):515-524]. We use total internal reflection fluorescence microscopy (TIRFM) to monitor the Ca2+ flux through these pores, revealing detailed information about their gating kinetics and time evolution. This massively parallel imaging technique provides simultaneous and independent recordings from thousands of pores in a patch of membrane of living cells for extended periods. Manual analysis of these data, consisting of tens of thousands of image frames, is very challenging. We therefore developed a pipeline of computational tools to retrieve, analyze, and predict the behavior of these pores at extended timescales to shed light on their toxicity and better understand disease progression [Shah et al., Biophys. J. 2018, 115(1):9-21; Biophys. J. 2018, 114(3):291a]. Analyzing the imaging data includes detecting pores, generating their location maps, tracking their movement, retrieving time series data for each pore, separating signal from noisy and drifting background, and extracting the key statistics of their gating kinetics and evolution. The information extracted from tens of thousands of pores is used to develop Markov chain models in order to understand and correlate their kinetics and long-term behavior with cytotoxicity. This talk will give an overview of the tools mentioned above, our current understanding of Aβ pores, and their implications for intracellular Ca2+ signaling.

Speakers

Ghanim Ullah

Associate Professor of Physics, University of South Florida


Wednesday July 22, 2020 3:00pm - 3:30pm CEST
Crowdcast (W02)

3:00pm CEST

W3 S7: Deep Inference and Information Gain
In the cognitive neurosciences and machine learning, we have formal ways of understanding and characterising perception and decision-making; however, the approaches appear very different: current formulations of perceptual synthesis call on theories like predictive coding and the Bayesian brain hypothesis. Conversely, formulations of decision-making and choice behaviour often appeal to reinforcement learning and the Bellman optimality principle. On the one hand, the brain seems to be in the game of optimising beliefs about how its sensations are caused; on the other hand, our choices and decisions appear to be governed by value functions and reward. Are these formulations irreconcilable, or is there some underlying imperative that renders perceptual inference and decision-making two sides of the same coin?

Speakers

Karl Friston

Professor, University College London



Wednesday July 22, 2020 3:00pm - 3:30pm CEST
Crowdcast (W03)

3:00pm CEST

W1 S11: Synergistic information in a dynamical model implemented on the human structural connectome reveals spatially distinct associations with age
Workshop on Methods of Information Theory in Computational Neuroscience

Daniele Marinazzo
University of Ghent

Synergistic information in a dynamical model implemented on the human structural connectome reveals spatially distinct associations with age

In a previous study implementing the Ising model on a 2D lattice, we showed that the joint synergistic information that two variables share about a target variable peaks before the transition to an ordered state (the critical point). Here we implemented the same model on individual structural connectomes to answer the following questions:
  • Does the synergy still peak before the critical point in a nonuniform network?
  • Are the hubs of structural connectivity also hubs of synergy?
  • Is there association with age?
We found that synergy still peaks before the critical temperature and that the hubs of structural connectivity are not among the nodes towards which synergy is highest. Furthermore, using robust measures of association, we found both positive and negative associations of synergy with age.
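For orientation, here is a minimal sketch of the kind of simulation involved, i.e. Metropolis dynamics of an Ising model whose pairwise couplings are taken from a structural connectivity matrix. The random "connectome", temperature and sizes below are placeholders, and the PID/synergy analysis on the resulting spin time series is not shown:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
# Placeholder "connectome": symmetric, non-negative coupling matrix with zero diagonal.
J = np.abs(rng.standard_normal((n, n)))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)

def metropolis_sweep(s, J, T, rng):
    """One Metropolis sweep of the Ising model with couplings J at temperature T."""
    for i in rng.permutation(len(s)):
        dE = 2.0 * s[i] * (J[i] @ s)      # energy cost of flipping spin i
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]
    return s

T = 2.0                                    # placeholder temperature
s = rng.choice([-1, 1], size=n)
for _ in range(500):
    s = metropolis_sweep(s, J, T, rng)
print("Mean magnetization:", s.mean())
```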

Speakers

Daniele Marinazzo

University of Ghent
I am a statistical physicist working mainly in the neurosciences. The research activity of my group focuses on methodological and computational aspects of neuroscience research, and on the dynamical networks subserving function. We develop new techniques for inferring connectivity... Read More →



Wednesday July 22, 2020 3:00pm - 3:45pm CEST
Crowdcast (W01)

3:00pm CEST

W7 S1: Modeling and experiments indicate a role for the Na/K pump in generation and regulation of bursting activity


Speakers

Ronald Calabrese

Department of Biology, Emory University


Wednesday July 22, 2020 3:00pm - 3:45pm CEST
Crowdcast (W07)

3:30pm CEST

W2 S19: Enabling real-time brain-machine interfaces via tensor-based computing
While optical methods, genetically encoded fluorescence indicators, and optogenetics already enable fast readout and control of large neuronal populations using light, the lack of corresponding advances in computational algorithms has slowed progress. The fundamental challenge is to reliably extract spikes (neural activity) from fluorescence imaging frames at speeds surpassing the indicator dynamics. To meet this challenge, we devised a set of new algorithms that exploit tensor-based computing on accelerated hardware. We provide optimized motion correction, source extraction and spike detection operations, which for the first time operate at speeds comparable with the brain's internal communication. We evaluate these algorithms on ground truth data and large datasets, demonstrating reliable and scalable performance. This provides the computational substrate required to interface large neuronal populations and machines precisely and in real time, enabling new applications in neuroprosthetics, brain-machine interfaces, and experimental neuroscience.

Speakers

Andrea Giovannucci

Assistant Professor, UNC Chapel Hill


Wednesday July 22, 2020 3:30pm - 4:00pm CEST
Crowdcast (W02)

3:35pm CEST

W3 S8: The Magic of Neocortex: Pyramidal Cells that are Context-Sensitive Two-Point Processors as Seen by Three-Way Mutual Information Decomposition
Assuming life to be organised complexity, the theory of coherent infomax specifies the objective of hierarchical abstraction in neocortex as maximising the transmission of coherent information and minimising the transmission of irrelevant information. This is shown to be possible in networks of local processors with receptive fields (RFs) that convey the information to be processed and contextual fields (CFs) that specify the context within which it is to be processed. A two-point activation function uses CF input to modulate the transmission of information about integrated RF input. Learning rules for the RF and CF synapses derived analytically from that objective are a refined version of the BCM rule (Bull. Math. Biol. 2011, 73, 344-372). Many neocortical pyramidal cells can operate as two-point processors in which apical input functions as a contextual modulator. Contextual modulation can be quantified using three-way mutual information decomposition. It is distinct from all four elementary arithmetic operators and has two key distinctive properties: asymmetry between the effects of RFs and CFs, with RFs being dominant; and increasing then decreasing amounts of synergy with increases in RF strength. Decompositions of the output of a multicompartmental model of a layer 5b pyramidal cell confirm the identification of contextual modulation with apical input (Symmetry 2020, doi:10.3390/sym12050815). These findings have far-reaching implications for mental life (Neurosci. Consc. 2016, doi.org/10.1093/nc/niw015; Brain and Cognition 2017, 112, 39–53).

Speakers

Bill Phillips

Emeritus Professor, University of Stirling


Wednesday July 22, 2020 3:35pm - 4:05pm CEST
Crowdcast (W03)

3:45pm CEST

W1 S12: Multi-target information decomposition and applications to integrated information theory
Workshop on Methods of Information Theory in Computational Neuroscience

Pedro Mediano
University of Cambridge

Multi-target information decomposition and applications to integrated information theory

The Partial Information Decomposition (PID) framework allows us to decompose the information that multiple source variables have about a single target variable. In its 10 years of existence, PID has spawned numerous theoretical and practical tools to help us understand and analyse information processing in complex systems. However, the asymmetric role of sources and target in PID hinders its application in certain contexts, like studying information sharing in multiple processes evolving jointly over time. In this talk we present a novel extension of the PID framework to the multi-target setting, which lends itself more naturally to the analysis of multivariate dynamical systems. This new decomposition is tightly linked with Integrated Information Theory, and gives us new analysis tools as well as a richer understanding of information processing in multivariate dynamical systems.

Speakers

Pedro Mediano

Post-doctoral researcher, Department of Psychology, University of Cambridge


Wednesday July 22, 2020 3:45pm - 4:30pm CEST
Crowdcast (W01)

3:45pm CEST

W7 S2: Neuromodulation within the spinal cord - a tale of two rhythms


Moderators

Patrick Whelan

Professor, University of Calgary, Hotchkiss Brain Institute, Calgary, Alberta, Canada
Spinal Cord Networks, Descending control of locomotion, Neuromodulation

Wednesday July 22, 2020 3:45pm - 4:30pm CEST
Crowdcast (W07)

4:00pm CEST

W2 S20: Calcium release through IP3 receptors equips cells with a fast way to reprogram intracellular calcium signals
Many intracellular calcium signals involve calcium release into the cytosol through inositol 1,4,5-trisphosphate (IP3) receptors (IP3Rs)/calcium channels. IP3Rs need to bind calcium and IP3 to open. Thus, IP3R-mediated calcium signals can spread through calcium-induced calcium release (CICR), in which the calcium released through an open channel induces the opening of neighboring ones. IP3Rs, however, are also inhibited by high calcium concentrations. This implies that IP3Rs act as "coincidence" detectors, where the timing between the relative increases of IP3 and calcium in their vicinity determines whether signals propagate or remain spatially localized. The nature of the resulting signal has implications for the subsequent end response. Thus, IP3R-mediated calcium release equips cells with a fast way to reprogram their responses. In this talk I will show modeling results that highlight the role of this mechanism in synaptic plasticity. In particular, I will show how the co-existence of different types of changes in homo- and hetero-synapses induced by different protocols can be explained in terms of the differential way in which the IP3 and calcium concentrations increase, and how this determines whether the resulting intracellular calcium signal propagates or not.

Speakers

Silvina Ponce Dawson

Professor
I am a physicist doing research in biological physics. I am particularly interested in cell signaling and information processing in biological systems.


Wednesday July 22, 2020 4:00pm - 4:30pm CEST
Crowdcast (W02)

4:00pm CEST

W4 S9: PyNN modeling tool

Speakers

Andrew Davison

Neuroinformatics Group, Paris-Saclay Institute of Neuroscience, CNRS, Université Paris-Saclay, France


Wednesday July 22, 2020 4:00pm - 4:30pm CEST
Crowdcast (W04)

4:10pm CEST

W3 S9: Information, Anticipation and Dopamine
Adaptive behavior requires performing the right response in the right place at the right time. Stimuli in the environment provide information about when important events (e.g. rewards) are available as well as information about when they are not. These stimuli are informative to the extent that they signal a change in the rate at which events will occur. Information-theoretic approaches show that a stimulus will generate anticipatory responding to the extent that it reduces uncertainty about when rewards will occur. Inhibition of anticipation at other times is also regulated by this information. Inhibition and excitation are thus two sides of the same coin, and the currency is temporal information. We show that this information modulates behavior in both Pavlovian conditioning and operant discrimination learning. Furthermore, this learning is extremely rapid, and dopamine activity tracks this temporal information about reward availability.

Speakers

Peter Balsam

Professor of Psychology, Barnard College, Columbia University
Learning and Motivation


Wednesday July 22, 2020 4:10pm - 4:40pm CEST
Crowdcast (W03)

4:30pm CEST

W9 S1 - Machine learning and mechanistic modeling for understanding brain in health and disease
Breakthrough technology developments in semi-automated, high-throughput data collection have enabled experimental neuroscientists to acquire more multiscale neural data than ever before. However, the neural origins of the patterns observed in these multiscale, multimodal datasets are often difficult to decipher. There is therefore a critical need for time- and cost-efficient approaches to analyze and interpret the massive datasets, to advance understanding of the cellular and circuit-level origins of the observed neural dynamics in both health and disease, and to use the insights gained to develop new therapeutics. While machine learning is a powerful technique for integrating multimodal data, classical machine learning techniques often ignore the fundamental laws of physics and may therefore result in non-physical solutions. Multiscale modeling is a successful strategy for integrating multiscale, multiphysics data and unraveling the mechanisms that explain the emergence of function. However, multiscale modeling alone often fails to efficiently combine large data sets from different sources and different levels of resolution. This workshop aims to highlight research that bridges the disciplines of machine learning and multiscale modeling. Speakers are invited to address open questions and discuss potential challenges and limitations in several topical areas: differential equations, data-driven approaches, and theory-driven approaches. This multidisciplinary perspective suggests that integrating machine learning and multiscale modeling can provide new insight into disease mechanisms, help identify new targets or treatment strategies, and inform decision making for the benefit of human health.

Speakers

William W Lytton

Professor, SUNY Downstate, USA

Sam Neymotin

Research Scientist, Nathan Kline Institute for Psychiatric Research


Wednesday July 22, 2020 4:30pm - 4:45pm CEST
Crowdcast (W09)

4:30pm CEST

W2 S21: Exploring the origin of spatiotemporal patterns of glial calcium by a compartmental model of astrocytic physiology
Spatial compartmentalization of intracellular calcium signals in glial cells such as astrocytes is increasingly recognized as a potential biophysical correlate of the many functions that those cells could fulfill. Nonetheless, we currently lack an understanding of the biophysical mechanisms underpinning such compartmentalization. We present work in progress on an in silico model of astrocytic calcium signaling that combines diffusion of calcium with reaction-diffusion of inositol 1,4,5-trisphosphate. Consideration of different elemental branching configurations allows us to identify, by analytical and numerical approaches, a variety of constraints that the cell's anatomy places on the emergence of spatially confined calcium dynamics.

Speakers

Evan Cresswell-Clay

Postdoctoral Researcher, National Institutes of Health
I am a Postdoctoral Fellow in the Laboratory of Biological Modeling at the NIDDK. I am currently working on data-driven modeling of tertiary protein structure. I utilize methods from information theory to infer spatial contacts from multiple sequence alignments. My PhD work was in... Read More →


Wednesday July 22, 2020 4:30pm - 5:00pm CEST
Crowdcast (W02)

4:30pm CEST

W7 S3: Mechanisms harnessing Na/K pump dynamics into robust rhythm generation


Speakers

Gennady Cymbalyuk

Professor, Neuroscience Institute, Georgia State University


Wednesday July 22, 2020 4:30pm - 5:15pm CEST
Crowdcast (W07)

4:45pm CEST

W3 S10: Using Cumulative Coding Cost to Analyze and Understand Acquisition and Extinction
The Kullback-Leibler divergence, DKL(P||Q), gives the average extra cost of encoding a datum from P using a code optimized for data from Q. When P ≡ Q, the cumulative coding cost (CCC) is distributed gam(.5,1). In Pavlovian acquisition, P is the distribution of reinforcements given the CS, while Q is the distribution given the context, C, and P ≢ Q. In extinction, P is the current distribution and Q the pre-extinction distribution, and again P ≢ Q. The distribution of the subject's inter-response intervals (iri) during the CS separates out from the distribution in its absence in the course of acquisition. The iri distribution separates out from the pre-extinction distribution in the course of extinction. The CCCs, computed reinforcement by reinforcement and response by response, with Bayesian estimates of the P and Q parameters, enable us to compare the growing strength of the evidence for a CS-C reinforcement-rate difference during acquisition with the growing strength of the evidence for a behavioural reaction to this difference—and likewise in extinction. The CCC comparisons suggest that subjects may themselves rely on the CCC in adjusting their behaviour to novel or changed circumstances.
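For reference, the divergence and one natural reading of the cumulative coding cost are (standard definitions; notation mine):

\[
D_{\mathrm{KL}}(P \,\|\, Q) = \sum_{x} P(x)\,\log \frac{P(x)}{Q(x)},
\qquad
\mathrm{CCC}_n = \sum_{i=1}^{n} \log \frac{\hat{P}(d_i)}{\hat{Q}(d_i)},
\]

where \(d_1, \dots, d_n\) are the data (reinforcements or responses) observed so far and \(\hat{P}\), \(\hat{Q}\) are the Bayesian parameter estimates mentioned above; the expected per-datum cost for data drawn from \(P\) is \(D_{\mathrm{KL}}(P\|Q)\).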

Speakers

Randy Gallistel

Professor Emeritus, Rutgers University
The application of information theory to associative learning and to the neurobiology of memory


Wednesday July 22, 2020 4:45pm - 5:15pm CEST
Crowdcast (W03)

5:00pm CEST

W2 S22: Decoding cytosolic and mitochondrial calcium dynamics in astrocytes
We discovered that the transient opening of the mitochondrial permeability transition pore induces spatially restricted Ca2+ transients in astrocyte processes, providing a means to link astrocyte respiration rates and Ca2+-dependent effector pathways. However, the cross-talk between cytosolic and mitochondrial Ca2+ signals in astrocytes still remains elusive. To simultaneously record and characterize cytosolic and mitochondrial Ca2+ dynamics, we have developed novel transgenic mouse lines and AAV-based viral approaches to express various fluorescent proteins as well as genetically encoded Ca2+ indicators in various astrocytic compartments. Additionally, to automatically segment mitochondria and study their structural and Ca2+ dynamics, we developed a machine-learning-based algorithm called mito-CaSCaDe. Using 2-photon microscopy-based Ca2+ imaging, we found that mitochondria exhibit spontaneous fluctuations in matrix Ca2+ and that bath application of neuromodulators induces long-lasting Ca2+ transients in mitochondria. In this workshop, I will discuss our new results that are helping us to decipher the role of astrocyte Ca2+ signals in the cytosol and mitochondria in shaping astrocyte functions in the brain.


Speakers

Wednesday July 22, 2020 5:00pm - 5:30pm CEST
Crowdcast (W02)

5:00pm CEST

W1 S13: Information Theory for Cognitive Modelling: Speculations and Directions
Workshop on Methods of Information Theory in Computational Neuroscience

Daniel Polani
University of Hertfordshire

Information Theory for Cognitive Modelling: Speculations and Directions

The talk will offer a mixed bag of ideas and considerations on how information theory needs to be further developed to be useful, and also on how it is already now able to open routes for hypotheses about how cognition may be organized, and may need to be organized, in the brain.

Speakers

Daniel Polani

Professor, University of Hertfordshire


Wednesday July 22, 2020 5:00pm - 5:45pm CEST
Crowdcast (W01)

5:30pm CEST

W06 S10: Probing Synaptic Signaling with Open Physiology Datasets and Tools.
Speakers

Tim Jarsky

Sr. Manager, Allen Institute for Brain Science


Wednesday July 22, 2020 5:30pm - 6:00pm CEST
Crowdcast (W06)

5:45pm CEST

W1 S14: Dynamical modeling, decoding, and control of multiscale brain networks
Workshop on Methods of Information Theory in Computational Neuroscience

Maryam Shanechi
University of Southern California

Dynamical modeling, decoding, and control of multiscale brain networks

In this talk, I first discuss our recent work on modeling, decoding, and controlling multisite human brain dynamics underlying mood states. I present a multiscale dynamical modeling framework that allows us to decode mood variations for the first time and identify brain sites that are most predictive of mood. I then develop a system identification approach that can predict multiregional brain network dynamics (output) in response to electrical stimulation (input) toward enabling closed-loop control of brain network activity. Further, I demonstrate that our framework can uncover multiscale behaviorally relevant neural dynamics from hybrid spike-field recordings in monkeys performing naturalistic movements. Finally, the framework can combine information from multiple scales of activity and model their different time-scales and statistics. These dynamical models, decoders, and controllers can advance our understanding of neural mechanisms and facilitate future closed-loop therapies for neurological and neuropsychiatric disorders.
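As a point of reference, system identification of input-driven network dynamics of this kind is often cast in a linear state-space form (a generic illustration with my own symbols, not necessarily the model used in this work):

\[
x_{t+1} = A x_t + B u_t + w_t, \qquad y_t = C x_t + v_t,
\]

where \(u_t\) is the stimulation input, \(y_t\) the recorded multisite activity, \(x_t\) a latent network state, and \(w_t\), \(v_t\) noise terms; fitting \((A, B, C)\) from input-output data enables prediction and, ultimately, closed-loop control.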

Speakers

Maryam Shanechi

University of Southern California


Wednesday July 22, 2020 5:45pm - 6:30pm CEST
Crowdcast (W01)

5:45pm CEST

W7 S4: On the origin of ultraslow spontaneous Na+ oscillations in neonatal brain


Moderators

Ghanim Ullah

Associate Professor of Physics, University of South Florida

Wednesday July 22, 2020 5:45pm - 6:30pm CEST
Crowdcast (W07)

6:15pm CEST

W3 S11: Efficient compression and human semantic systems
How do languages assign meanings to words? In this talk, I will argue that efficient data compression is a fundamental principle underlying human semantic systems. Specifically, I will argue that languages compress meanings into words by optimizing the Information Bottleneck (IB) tradeoff between the complexity and accuracy of the lexicon, which can be derived from Shannon’s Rate–Distortion theory. This proposal has gained substantial empirical support in a series of recent studies using cross-linguistic data from several semantic domains, such as terms for colors and containers. I will show that (1) semantic systems across languages lie near the IB theoretical limit; (2) the optimal systems explain much of the cross-language variation, and provide a theoretical explanation for why empirically observed patterns of inconsistent naming and soft category boundaries are efficient for communication; (3) languages may evolve through a sequence of structural phase transitions along the IB theoretical limit; and (4) this framework can be used to generate efficient naming systems from artificial neural networks trained for vision, providing a platform for testing the interaction between neural perceptual representations and high-level semantic representations. These findings suggest that efficient compression may be a major force shaping the structure and evolution of human semantic systems, and may help to inform AI systems with human-like semantics.
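For reference, the IB tradeoff referred to here can be written as the minimization of a functional of the generic form (my notation for meanings \(M\), words \(W\), and the variable \(U\) against which accuracy is measured):

\[
\min_{q(w \mid m)} \; I(M;W) \;-\; \beta\, I(W;U),
\]

where \(I(M;W)\) measures the complexity of the lexicon, \(I(W;U)\) its accuracy, and \(\beta\) sets the tradeoff; semantic systems near the theoretical limit achieve near-maximal accuracy for their complexity.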

Speakers

Noga Zaslavsky

Postdoctoral Fellow, MIT


Wednesday July 22, 2020 6:15pm - 6:45pm CEST
Crowdcast (W03)

6:30pm CEST

W1 S15: Wrap-up

Speakers

Joseph Lizier

Associate Professor, Centre for Complex Systems, The University of Sydney
My research focusses on studying the dynamics of information processing in biological and bio-inspired complex systems and networks, using tools from information theory such as transfer entropy to reveal when and where in a complex system information is being stored, transferred and... Read More →


Wednesday July 22, 2020 6:30pm - 6:45pm CEST
Crowdcast (W01)

6:30pm CEST

W4 S13: CoreNEURON simulator
CoreNEURON is an optimized compute engine for the widely used NEURON simulator. In this presentation we will discuss how the NEURON simulator is being prepared for modern computing platforms, from desktops to large supercomputer systems. We will show how you can easily run your existing NEURON models with CoreNEURON. We will also introduce the newly designed NMODL compiler framework, which can be used as a handy tool for parsing and analysing MOD files.

https://github.com/BlueBrain/nmodl
https://github.com/BlueBrain/CoreNeuron
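As a pointer for attendees, here is a minimal sketch of the typical way an existing NEURON/Python model is switched over to CoreNEURON, following the pattern in the NEURON documentation. A CoreNEURON-enabled NEURON build is assumed, and option names may differ slightly between versions:

```python
from neuron import h, coreneuron

h.load_file("stdrun.hoc")

# A toy single-compartment model; substitute your existing model setup here.
soma = h.Section(name="soma")
soma.insert("hh")
stim = h.IClamp(soma(0.5))
stim.delay, stim.dur, stim.amp = 1, 5, 0.1

# Enable the CoreNEURON compute engine for the simulation run.
coreneuron.enable = True
coreneuron.gpu = False          # set True only if built with GPU support

pc = h.ParallelContext()
h.cvode.cache_efficient(1)      # memory layout required by CoreNEURON
pc.set_maxstep(10)
h.finitialize(-65)
pc.psolve(20)                   # run 20 ms via CoreNEURON instead of h.run()
```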

Speakers

Pramod Kumbhar

HPC Architect, Blue Brain Project
Pramod Kumbhar is HPC Architect in Computing Division at Blue Brain Project. His focus is on the development of the NEURON/CoreNEURON simulator within the Blue Brain Project. He has a keen interest in domain specific languages (DSL) and modern compiler technologies. Happy to discuss... Read More →


Wednesday July 22, 2020 6:30pm - 7:00pm CEST
Crowdcast (W04)

6:30pm CEST

W7 S5: Activity-dependent regulation of locomotor rhythms by sodium-potassium pumps


Moderators

Gareth Miles

Professor of Neuroscience, Head of School, School of Psychology and Neuroscience, University of St Andrews

Wednesday July 22, 2020 6:30pm - 7:15pm CEST
Crowdcast (W07)

6:50pm CEST

W3 S12: Probability Distortion Maximizes Mutual Information
In decision-making under risk (DMR), participants' choices are based on probability values systematically different from those that are objectively correct. Similar systematic distortions are found in tasks involving relative frequency judgments (JRF). These distortions limit performance in a wide variety of tasks, and an evident question is: why do we systematically fail in our use of probability and relative frequency information? We propose a Bounded Log-Odds Model (BLO) of probability and relative frequency distortion based on three assumptions: (1) log-odds: probability and relative frequency are mapped to an internal log-odds scale; (2) boundedness: the range of representations of probability and relative frequency is bounded and the bounds change dynamically with task; and (3) variance compensation: the mapping compensates in part for uncertainty in probability and relative frequency values. We compared human performance in both DMR and JRF tasks to the predictions of the BLO model as well as eleven alternative models, each missing one or more of the underlying BLO assumptions (factorial model comparison). The BLO model and its assumptions proved to be superior to any of the alternatives. In a separate analysis, we found that BLO accounts for individual participants' data better than any previous model in the DMR literature. We also found that, subject to the boundedness limitation, participants' choice of distortion approximately maximized the mutual information between objective task-relevant values and internal values, a form of bounded rationality.
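For reference, the log-odds scale referred to in assumption (1), together with a simple linear distortion on that scale often used in this literature, can be written as (standard forms; BLO's boundedness and variance compensation modify this mapping):

\[
\lambda(p) = \log \frac{p}{1-p}, \qquad \lambda\big(\pi(p)\big) = \gamma\, \lambda(p) + (1-\gamma)\, \lambda(p_0),
\]

where \(\pi(p)\) is the internally represented probability, \(\gamma\) the distortion slope, and \(p_0\) the crossover point at which \(\pi(p_0) = p_0\).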

Speakers

Laurence T. Maloney

Professor Of Psychology & Neural Science, NYU


Wednesday July 22, 2020 6:50pm - 7:20pm CEST
Crowdcast (W03)

7:00pm CEST

W4 Discussion 2: Should detailed modeling be more widely used? Should tools/standards converge or diversify?
General discussion open to everyone. Suggested topics:
      • Why is detailed neuronal modelling not more widely used in neuroscience? 
      • Should tools and standards converge/integrate or diversify more?

Moderators

Kael Dai

Mindscope Program at the Allen Institute, Seattle, USA

Salvador Dura-Bernal

Assistant Professor, State University of New York (SUNY) Downstate

Padraig Gleeson

University College London, UK

Wednesday July 22, 2020 7:00pm - 7:30pm CEST
Crowdcast (W04)

7:25pm CEST

W3 S13: Memory and Instincts As a Continuum of Information Storage
Information must be encoded through plasticity of latent biological states in the brain. These learning-induced changes can be referred to as memory ‘engrams’. Recent studies that employ novel molecular methodologies have implicated sparse populations of neurons as engram cells that contribute to the storage of distributed memory engrams. Memory engram technology has provided an unprecedented tool for the labelling and experimental manipulation of specific memory representations in the mouse. It integrates immediate early gene (IEG) labelling techniques with optogenetics to facilitate the activity-dependent tagging and reversible manipulation of components of specific memory engrams. Applying this methodology, experimental studies suggest that while short-term memories may be encoded as transient changes in neuronal excitability, long-term memories are formed by plasticity of microanatomical connectivity between engram cells. But memory is not the only form of information that guides adaptive behaviour; the other form is instinct. While memory and instinct are encoded by very different processes and plasticity mechanisms, the resultant form of instincts and long-term memories may be the same: embedded hard-wired connectivity patterns that store stable informational representations. Both memories and instincts enable the organism to make predictions about its environment that then change with experience. We propose that such ensembles can encode evolvable affordances of how animals interpret their environments.

Speakers

Tomas Ryan

Assistant Professor, Trinity College Dublin


Wednesday July 22, 2020 7:25pm - 7:55pm CEST
Crowdcast (W03)

10:15pm CEST

W9 S10 - Workshop Discussion
Workshop Discussion

Speakers

Haroon Anwar

Nathan Kline Institute for Psychiatric Research

William W Lytton

Professor, SUNY Downstate, USA

Sam Neymotin

Research Scientist, Nathan Kline Institute for Psychiatric Research

Kenji Doya

Professor, Okinawa Institute of Science and Technology Graduate University


Wednesday July 22, 2020 10:15pm - 11:15pm CEST
Crowdcast (W09)
 
Thursday, July 23
 

5:00pm CEST

W06 S16: From hypothesis to data: Open two-photon brain observatories in system neuroscience
Speakers

Jerome Lecoq

Allen Institute
Jérôme Lecoq joined the Allen Institute in 2015 to lead efforts in mapping cortical computation using in vivo two-photon microscopy in behaving animals. He brings 10 years of experience in in vivo microscopy working with rodents. Prior to joining the Allen Institute, Lecoq was... Read More →


Thursday July 23, 2020 5:00pm - 5:30pm CEST
Crowdcast (W06)

7:30pm CEST

W06 S20: Building a pipeline to survey spiking activity across the mouse visual system
Speakers

Josh Siegle

Mindscope Program at the Allen Institute, Seattle, USA


Thursday July 23, 2020 7:30pm - 8:00pm CEST
Crowdcast (W06)
