Google Meet link: https://meet.google.com/mnb-ixfu-sff
If you miss the presentation or have further questions, please feel free to contact me! Thanks:) qogywls1573@gmail.com
Hyojin Bae, Chang-eop Kim, Gehoon Chung
An individual neuron or neuronal population is conventionally said to be “selective” for a stimulus feature if it responds differentially to that feature. Likewise, it is considered to encode certain information if a decoding algorithm can successfully predict a given stimulus or behavior from the neural activity. However, an erroneous assumption about the feature space can mislead researchers about the underlying neural coding principle. In this study, by simulating several likely scenarios with artificial neural networks (ANNs) and presenting corresponding cases from biological neural networks (BNNs), we point out potential biases evoked by unrecognized features, i.e., confounding variables.
We modeled an ANN classifier with the open-source neural network library Keras, using TensorFlow as the backend. The model consists of five densely connected hidden layers with rectified linear (ReLU) activations. We added a dropout layer and an L2 regularizer to each layer to penalize layer activity during optimization. The model was trained on the CIFAR-10 dataset and reached a saturated test-set accuracy of about 53% (chance-level accuracy = 10%). To stochastically sample individual neurons' activity from each deterministic unit, we generated Gaussian distributions by modeling within-population variability according to each assumption.
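The sampling step above can be sketched as follows: each deterministic ANN unit is treated as the mean of a Gaussian, from which stochastic "single-neuron" responses are drawn. This is a minimal NumPy illustration; the population size and noise scale are illustrative assumptions, not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_population(unit_activation, n_neurons=50, noise_sd=0.1):
    """Draw stochastic single-neuron responses from one deterministic
    ANN unit: each 'neuron' is a Gaussian sample centered on the unit's
    activation (n_neurons and noise_sd are hypothetical parameters)."""
    return rng.normal(loc=unit_activation, scale=noise_sd, size=n_neurons)

# Deterministic activations of three hidden units for one stimulus
unit_activations = np.array([0.2, 0.8, 1.5])

# Population responses: one Gaussian-distributed sample set per unit
population = np.stack([sample_population(a) for a in unit_activations])
print(population.shape)  # (3, 50)
```

In this scheme, within-population variability is controlled entirely by the noise standard deviation, so different variability assumptions simply correspond to different `noise_sd` choices.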
Using this model, we demonstrated four possible misinterpretations induced by a missing feature: (1) the researcher may choose a second-best feature that resembles the ground-truth feature; (2) an irrelevant feature that merely correlates with the ground-truth feature may be chosen; (3) evaluating a decoder in an incomplete feature space may lead to overestimation of the decoder's performance; and (4) a misconception about a unit's receptive field may cause genuine signal to be treated as noise.
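Case (2) can be illustrated with a toy simulation, assuming a neuron driven only by a ground-truth feature while the researcher measures a correlated but causally irrelevant feature. All variable names and parameters here are hypothetical, chosen only to make the confound visible.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical ground-truth feature that actually drives the neuron
truth = rng.normal(size=n)
# Unrecognized confound: irrelevant, but correlated with the truth
confound = 0.9 * truth + np.sqrt(1 - 0.9**2) * rng.normal(size=n)

# Neuron responds only to the ground-truth feature (plus noise)
response = truth + 0.3 * rng.normal(size=n)

r_truth = np.corrcoef(response, truth)[0, 1]
r_confound = np.corrcoef(response, confound)[0, 1]
print(f"r(response, truth)    = {r_truth:.2f}")
print(f"r(response, confound) = {r_confound:.2f}")
```

If the ground-truth feature is absent from the assumed feature space, the still-substantial correlation with the confound would be the only visible relationship, and the neuron would appear "selective" for a feature it does not encode.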
In conclusion, we suggest that comparative studies of ANNs and BNNs from the perspective of machine learning can be a powerful strategy for deciphering neural coding principles.