CNS*2020 Online has ended
Welcome to the Sched instance for CNS*2020 Online! Please read the instruction document for detailed information on CNS*2020.
Sunday, July 19 • 8:00pm - 9:00pm
P27: Lessons from Artificial Neural Network for studying coding principles of Biological Neural Network


Google Meet link: https://meet.google.com/mnb-ixfu-sff

If you miss the presentation or have further questions, please feel free to contact me! Thanks:)

Hyojin Bae, Chang-eop Kim, Gehoon Chung

An individual neuron or a neuronal population is conventionally said to be “selective” for a stimulus feature if it responds differentially to that feature. Likewise, it is considered to encode certain information if decoding algorithms can successfully predict a given stimulus or behavior from the neuronal activity. However, an erroneous assumption about the feature space can mislead researchers about the neural coding principle. In this study, by simulating several likely scenarios with artificial neural networks (ANNs) and showing corresponding cases in biological neural networks (BNNs), we point out potential biases evoked by unrecognized features, i.e., confounding variables.

We modeled an ANN classifier with the open-source neural network library Keras, using TensorFlow as the backend. The model is composed of five hidden layers with dense connections and rectified linear (ReLU) activations. We added a dropout layer and an l2-regularizer to each layer to penalize layer activity during optimization. The model was trained on the CIFAR-10 dataset and reached a saturated test-set accuracy of about 53% (chance-level accuracy: 10%). To stochastically sample an individual neuron's activity from each deterministic unit, we generated Gaussian distributions by modeling within-population variability according to each assumption.
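The architecture described above can be sketched in Keras roughly as follows. This is a minimal illustration, not the authors' code: the layer width, dropout rate, l2 strength, and optimizer are assumed values chosen for the sketch.

```python
# Hypothetical sketch of the described classifier: five dense hidden layers
# with ReLU activations, each followed by dropout and an l2 activity penalty.
# All hyperparameter values here are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, regularizers


def build_classifier(input_shape=(32, 32, 3), n_classes=10,
                     n_hidden=5, width=256, l2_strength=1e-4, drop_rate=0.3):
    model = tf.keras.Sequential([layers.Flatten(input_shape=input_shape)])
    for _ in range(n_hidden):
        # Dense + ReLU with a penalty on layer activity, then dropout,
        # as described in the text.
        model.add(layers.Dense(
            width, activation="relu",
            activity_regularizer=regularizers.l2(l2_strength)))
        model.add(layers.Dropout(drop_rate))
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


model = build_classifier()
```

Training on CIFAR-10 would then use `tf.keras.datasets.cifar10.load_data()` with `model.fit(...)`; the abstract reports the resulting test accuracy saturating near 53%.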

Using this model, we demonstrated four possible misinterpretations induced by a missing feature: (1) the researcher may choose a second-best feature that is similar to the ground-truth feature; (2) an irrelevant feature that is merely correlated with the ground-truth feature may be chosen; (3) evaluating a decoder in an incomplete feature space can lead to overestimating the decoder's performance; (4) a misconception about a unit's receptive field can cause a genuine signal to be treated as noise.
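Case (2) can be illustrated with a minimal NumPy simulation (again a sketch, not the authors' code, with arbitrary coefficients): a neuron driven by a hidden ground-truth feature also correlates strongly with an irrelevant feature that merely covaries with it, so testing only the irrelevant feature would wrongly suggest the neuron is "selective" for it.

```python
# Sketch of misinterpretation case (2): a confound correlated with the
# ground-truth feature also "explains" the neuron's response.
# Coefficients and noise levels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 5000

ground_truth = rng.normal(size=n_trials)                        # feature actually encoded
confound = 0.9 * ground_truth + 0.45 * rng.normal(size=n_trials)  # correlated but irrelevant
response = 2.0 * ground_truth + rng.normal(size=n_trials)       # neuron follows ground truth only


def corr(a, b):
    """Pearson correlation between two 1-D arrays."""
    return float(np.corrcoef(a, b)[0, 1])


# Both correlations are high; if the ground-truth feature is unrecognized,
# the confound looks like the encoded feature.
print("response vs ground truth:", corr(response, ground_truth))
print("response vs confound:    ", corr(response, confound))
```

With these parameters both correlations come out well above chance, and the ground-truth correlation is only modestly higher, which is exactly why an incomplete feature space invites the misattribution.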

In conclusion, we suggest that comparative study of ANNs and BNNs from the perspective of machine learning can be a powerful strategy for deciphering neural coding principles.


Hyojin Bae

PhD student, Gachon University

Sunday July 19, 2020 8:00pm - 9:00pm CEST
Slot 11