CNS*2020 Online has ended
Welcome to the Sched instance for CNS*2020 Online! Please read the instruction document for detailed information on CNS*2020.
Sunday, July 19, 2020 • 8:00pm - 9:00pm CEST
P119: Relating transfer entropy to network structure and motifs, and implications for brain network inference

https://uni-sydney.zoom.us/j/92409687585

Zoom meeting ID: 92409687585

Leonardo Novelli, Joseph Lizier

 Transfer entropy is an established method for the analysis of directed relationships in neuroimaging data. In its original formulation, transfer entropy is a bivariate measure, i.e., a measure between a pair of elements or nodes [1]. However, when two nodes are embedded in a network, the strength of their direct coupling is not sufficient to fully characterize the transfer entropy between them. This is because transfer entropy results from network effects due to interactions between all the nodes.

In this theoretical work, we study the bivariate transfer entropy as a function of network structure, when the link weights are known. In particular, we use a discrete-time linear Gaussian model to investigate the contribution of small motifs, i.e., small subnetwork configurations comprising two to four nodes. Although the linear model is simplistic, it is widely used and has the advantage of being analytically tractable. Moreover, using this model means that our results extend to Granger causality, which is equivalent to transfer entropy for Gaussian variables.
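As a minimal illustration of this equivalence (a toy two-node sketch with arbitrary coefficients, not the model of the full network studied here), one can simulate a discrete-time linear Gaussian process and estimate the transfer entropy as a Granger-causality log-ratio of residual variances:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-node linear Gaussian system X -> Y (coefficients a, c are
# arbitrary illustrative choices, not values from the paper).
T, a, c = 100_000, 0.5, 0.4
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.standard_normal()
    y[t] = a * y[t - 1] + c * x[t - 1] + rng.standard_normal()

def residual_var(target, predictors):
    """Variance of the least-squares residual of target on the predictors."""
    X = np.column_stack(predictors)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.var(target - X @ beta)

# For Gaussian variables, transfer entropy equals Granger causality:
# TE(X->Y) = 0.5 * ln( var(residual without source) / var(residual with source) )
reduced = residual_var(y[1:], [y[:-1]])
full = residual_var(y[1:], [y[:-1], x[:-1]])
te = 0.5 * np.log(reduced / full)
print(f"Estimated TE(X->Y) = {te:.3f} nats")
```

Dropping the source's past from the regression and comparing residual variances is the standard Granger construction; the log-ratio recovers the transfer entropy (in nats) exactly in the Gaussian case.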

We show analytically that the dependence of transfer entropy on the direct link weight is only a first approximation, valid for weak coupling. More generally, the transfer entropy increases with the in-degree of the source and decreases with the in-degree of the target, which suggests an asymmetry of information transfer between hubs and peripheral nodes.
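A worked single-link example of the weak-coupling point (an illustrative special case, not the general network result derived here): with Y_t = w*X_{t-1} + noise and a white-noise source X, the Gaussian transfer entropy is exactly 0.5*ln(1 + w^2), so the quadratic dependence on the direct weight, TE ≈ w^2/2, is accurate only for small w:

```python
import numpy as np

# Single-link toy model: Y_t = w * X_{t-1} + noise, X white noise.
# The Gaussian transfer entropy here is exactly 0.5*ln(1 + w**2), so the
# weak-coupling approximation TE ~ w**2 / 2 holds only for small weights.
for w in (0.1, 0.5, 1.0):
    exact = 0.5 * np.log(1 + w**2)
    weak = w**2 / 2  # first-order approximation in the squared weight
    print(f"w = {w}: exact TE = {exact:.4f}, weak-coupling approx = {weak:.4f}")
```

At w = 0.1 the two values agree to within a few parts in a thousand; at w = 1 the approximation overshoots the exact value substantially.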

Importantly, these results also have implications for directed functional network inference from time series, which is one of the main applications of transfer entropy in neuroscience. The asymmetry of information transfer suggests that links from hubs to peripheral nodes would generally be easier to infer than links between hubs, as well as links from peripheral nodes to hubs. This could bias the estimation of network properties such as the degree distribution and the rich-club coefficient.

In addition to the dependence on the in-degree, the transfer entropy is directly proportional to the weighted motifs involving common parents or multiple walks from the source to the target (Fig. 1). These motifs are more abundant in clustered or modular networks than in random networks, suggesting a higher transfer in the former case. Further, if the network has only positive edge weights, the transfer entropy correlates positively with the number of such motifs. This applies on average in the mammalian cortex, where the majority of connections are thought to be excitatory, implying that directed functional network inference with transfer entropy is better able to infer links within brain modules (where such motifs enhance transfer entropy values) than links across modules.
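Both motif families can be read off the weighted adjacency matrix with one matrix product each; a numpy sketch on a made-up five-node network (the layout and weights are hypothetical, chosen only so that each motif count is nonzero):

```python
import numpy as np

# Toy weighted directed network: A[i, j] = weight of link i -> j.
# Node 0 is a common parent of nodes 1 and 2; node 3 provides an
# indirect length-2 walk from 1 to 2. All values are made up.
A = np.array([
    [0, 0.4, 0.4, 0,   0],
    [0, 0,   0.5, 0.2, 0],
    [0, 0,   0,   0,   0],
    [0, 0,   0.6, 0,   0],
    [0, 0,   0,   0,   0],
])

s, t = 1, 2  # source and target of the link of interest

# Weighted common-parent motifs: nodes k linking into both s and t,
# i.e. sum_k A[k, s] * A[k, t].
common_parents = (A.T @ A)[s, t]

# Weighted length-2 walks s -> k -> t, i.e. sum_k A[s, k] * A[k, t],
# the extra indirect routes from source to target.
two_step_walks = (A @ A)[s, t]

print(f"common-parent motif weight: {common_parents:.2f}")
print(f"two-step walk weight:       {two_step_walks:.2f}")
```

On this toy matrix the common-parent term is A[0,1]*A[0,2] = 0.16 and the two-step walk term is A[1,3]*A[3,2] = 0.12; with only positive weights, both contributions grow with the abundance of such motifs, as in clustered networks.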

References:

1. Schreiber, T. Measuring Information Transfer. Physical Review Letters. 2000, vol. 85, no. 2, pp. 461–464.
2. Novelli, L., Wollstadt, P., Mediano, P., Wibral, M., & Lizier, J. T. Large-scale directed network inference with multivariate transfer entropy and hierarchical statistical testing. Network Neuroscience. 2019, vol. 3, no. 3, pp. 827–847.

Speakers

Leonardo Novelli

PhD Student, Centre for Complex Systems, The University of Sydney



Slot 10