David Shorten,
Joseph Lizier,
Richard Spinney

Transfer entropy (TE) [1] is a measure of the flow of information between components in a system. It is defined as the mutual information between the past of a source and the present state of a target, conditioned on the past of the target. It has received widespread application in neuroscience [2], both for characterising information flows and for inferring effective connectivity from data sources such as MEG, EEG, fMRI, calcium imaging and electrode arrays. Previous applications of TE to spike trains have relied on time discretisation, where the spike train is divided into time bins and the TE is estimated from the numbers of spikes occurring in each bin. There are, however, several disadvantages to estimating TE from time-discretised data [3]. Firstly, as time discretisation is a lossy transformation of the data, any estimator based on it is not consistent (it will not converge to the true value of the TE in the limit of infinite data). Secondly, whilst the loss of resolution decreases with decreasing bin size, smaller bins require higher-dimensional history embeddings to capture correlations over the same time intervals. This results in an exponential increase in the size of the state space being sampled, and therefore in the data requirements.
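To make the time-discretised approach concrete, the following is a minimal sketch of a plug-in TE estimate computed from binned spike trains. The function and parameter names (`discretised_te`, `dt`, `k`) are illustrative assumptions, not taken from the work described here.

```python
import numpy as np
from collections import Counter

def discretised_te(source, target, dt=0.01, k=1, t_max=None):
    """Plug-in transfer entropy (in bits) from time-binned spike trains.

    source, target: arrays of spike times; dt: bin width; k: history
    length in bins. Illustrative sketch of the time-discretised approach,
    not the continuous-time estimator discussed in the text.
    """
    if t_max is None:
        t_max = max(source.max(), target.max()) + dt
    edges = np.arange(0.0, t_max + dt, dt)
    # Binarise spike counts per bin (the usual lossy step).
    x = np.minimum(np.histogram(source, edges)[0], 1)
    y = np.minimum(np.histogram(target, edges)[0], 1)

    # Count (target present, target past, source past) symbol triples.
    joint = Counter()
    for t in range(k, len(y)):
        joint[(y[t], tuple(y[t - k:t]), tuple(x[t - k:t]))] += 1
    n = sum(joint.values())

    # Marginal counts for TE = sum p log2[ p(y_t|y_past,x_past) / p(y_t|y_past) ].
    c_ypx, c_yp, c_typ = Counter(), Counter(), Counter()
    for (yt, yp, xp), c in joint.items():
        c_ypx[(yp, xp)] += c
        c_yp[yp] += c
        c_typ[(yt, yp)] += c

    te = 0.0
    for (yt, yp, xp), c in joint.items():
        te += (c / n) * np.log2((c / c_ypx[(yp, xp)]) / (c_typ[(yt, yp)] / c_yp[yp]))
    return te
```

Note that the triples enumerated here range over a state space of size growing exponentially in the history length `k`, which is the data-requirement problem described above.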
Recently, a continuous-time framework [3] for transfer entropy was developed. This framework has the distinct advantage of demonstrating that, for spike trains, the TE can be calculated solely from contributions occurring at spikes. This presentation reports on a newly developed continuous-time estimator for transfer entropy for spike trains which utilises this framework. Importantly, this new estimator is a consistent estimator of the TE. As it does not require time discretisation, it calculates the TE from the raw interspike interval timings of the source and target neurons. Similar to the popular KSG estimator [4] for mutual information and TE, it performs estimation using the statistics of k-nearest-neighbour searches in the target and source history spaces. Tests on synthetic datasets of coupled and uncoupled point processes have confirmed that the estimator is consistent and has low bias. Similar tests of the time-discretised estimator have found it to be inconsistent and to have larger bias. The efficacy of the estimator is further demonstrated on the task of inferring the connectivity of biophysical models of the pyloric network of the crustacean stomatogastric ganglion. Granger causality (which is equivalent to TE under the assumption of Gaussian variables) has been shown to be incapable of inferring this particular network [5], although it was demonstrated that the network could be inferred by a generalised linear model.
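The KSG estimator [4] mentioned above illustrates the kind of k-nearest-neighbour statistics involved. Below is a minimal sketch of KSG algorithm 1 for mutual information (assuming NumPy and SciPy are available); the continuous-time TE estimator performs analogous neighbour searches in the source and target history spaces, so this is only an illustration of the general technique, not the new estimator itself.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def ksg_mi(x, y, k=4):
    """KSG (algorithm 1) mutual information estimate, in nats.

    x, y: (N,) or (N, d) sample arrays. Sketch of the k-NN technique of
    Kraskov et al. [4], shown here for plain MI rather than TE.
    """
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    y = np.asarray(y, dtype=float).reshape(len(y), -1)
    n = len(x)
    joint = np.hstack([x, y])
    # Max-norm distance to each point's k-th nearest neighbour in joint space.
    eps = cKDTree(joint).query(joint, k=k + 1, p=np.inf)[0][:, -1]
    # Count marginal points strictly closer than eps (excluding the point itself).
    nx = cKDTree(x).query_ball_point(x, eps - 1e-12, p=np.inf, return_length=True) - 1
    ny = cKDTree(y).query_ball_point(y, eps - 1e-12, p=np.inf, return_length=True) - 1
    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))
```

Because neighbour counts adapt to the local density of the samples, estimators of this family avoid the binning step entirely, which is what makes the continuous-time, interspike-interval-based approach possible.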
References
1. Schreiber T. Measuring information transfer. Physical Review Letters. 2000, 85(2), 461.
2. Wibral M, Vicente R, Lizier JT, editors. Directed Information Measures in Neuroscience. Berlin: Springer; 2014.
3. Spinney RE, Prokopenko M, Lizier JT. Transfer entropy in continuous time, with applications to jump and neural spiking processes. Physical Review E. 2017, 95(3), 032319.
4. Kraskov A, Stögbauer H, Grassberger P. Estimating mutual information. Physical Review E. 2004, 69(6), 066138.
5. Kispersky T, Gutierrez GJ, Marder E. Functional connectivity in a rhythmic inhibitory circuit using Granger causality. Neural Systems & Circuits. 2011, 1(1), 9.