P59: Reverse engineering neural networks to identify their cost functions and implicit generative models



Takuya Isomura 1, Karl Friston 2
1 Brain Intelligence Theory Unit, RIKEN Center for Brain Science. 2 Wellcome Centre for Human Neuroimaging, University College London.

It is widely recognised that maximising a variational bound on model evidence – or equivalently, minimising variational free energy – provides a unified, normative formulation of inference and learning [1]. According to the complete class theorem [2], any dynamics that minimises a cost function can be viewed as performing Bayesian inference, implying that any neural network whose activity and plasticity follow the same cost function is implicitly performing Bayesian inference. However, identifying the implicit Bayesian model that corresponds to a given cost function is a more delicate problem. Here, we identify a class of biologically plausible cost functions for canonical neural networks of rate-coding neurons, where the same cost function is minimised by both neural activity and plasticity [3]. We then demonstrate that such cost functions can be cast as variational free energy under an implicit generative model in the well-known form of partially observed Markov decision processes. This equivalence means that the activity and plasticity of a canonical neural network can be understood as approximate Bayesian inference and learning, respectively. Mathematical analysis shows that the firing thresholds – which characterise the neural network cost function – correspond to prior beliefs about hidden states in the generative model. This means that Bayes-optimal encoding of hidden states is attained when the network’s implicit priors match the process generating its sensory inputs. The theoretical formulation was validated using _in vitro_ neural networks comprising rat cortical cells cultured on a microelectrode array dish [4, 5]. We observed that _in vitro_ neural networks – which receive input stimuli generated from hidden sources – perform causal inference, or blind source separation, through activity-dependent plasticity. The learning process was consistent with Bayesian belief updating and the minimisation of variational free energy. Furthermore, the constraints that characterise the firing thresholds were estimated from the empirical data to quantify the _in vitro_ network’s prior beliefs about hidden states. These results highlight the potential utility of reverse engineering generative models to characterise the neuronal mechanisms underlying Bayesian inference and learning.
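To make the claimed equivalence concrete, the correspondence can be sketched in equations. This is a schematic rendering under simplifying assumptions – binary hidden states $s_j \in \{0, 1\}$ with priors $D_j \equiv p(s_j = 1)$, observations $o$, and a posterior $q(s)$ encoded by neural activity – rather than the exact notation of [3]. Variational free energy decomposes into complexity minus accuracy:

$$
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = \underbrace{D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s)\right]}_{\text{complexity}}
  - \underbrace{\mathbb{E}_{q(s)}\!\left[\ln p(o \mid s)\right]}_{\text{accuracy}}.
$$

If the synaptic input $W_j o$ approximates the log-likelihood ratio $\ln \frac{p(o \mid s_j = 1)}{p(o \mid s_j = 0)}$, a rate-coded unit minimising $F$ encodes the posterior as a sigmoid of input minus threshold:

$$
x_j = \sigma\!\left(W_j o - h_j\right), \qquad h_j = \ln \frac{1 - D_j}{D_j},
$$

so the firing threshold $h_j$ plays the role of the log prior odds against the $j$-th hidden state being active. On this reading, estimating the thresholds from data recovers the network’s implicit priors, which is the sense in which the generative model can be reverse engineered.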

References
1. Friston, K. (2010). The free-energy principle: a unified brain theory? Nat. Rev. Neurosci. 11, 127-138.
2. Wald, A. (1947). An essentially complete class of admissible decision functions. Ann. Math. Stat. 18, 549-555.
3. Isomura, T. & Friston, K. (2020). Reverse engineering neural networks to characterise their cost functions. Neural Comput. (in press). Preprint: https://www.biorxiv.org/content/10.1101/654467v2
4. Isomura, T., Kotani, K. & Jimbo, Y. (2015). Cultured cortical neurons can perform blind source separation according to the free-energy principle. PLoS Comput. Biol. 11, e1004643.
5. Isomura, T. & Friston, K. (2018). In vitro neural networks minimise variational free energy. Sci. Rep. 8, 16926.

Speakers

Takuya Isomura

Unit leader, RIKEN Center for Brain Science



Sunday July 19, 2020 9:00pm - 10:00pm CEST
Slot 17