CNS*2020 Online
Monday, July 20 • 9:00pm - 10:00pm
P99: Integrated Model of Reservoir Computing and Autoencoder for Explainable Artificial Intelligence

Hoon-Hee Kim

Due to advances in machine learning, such as deep neural networks, Artificial Intelligence (AI) is now used in many areas. Modern AI accurately solves problems such as classification, regression, and prediction, but it often cannot explain its decision process in human-understandable terms; such systems are called black-box AI. Black-box AI, whose decision process humans cannot understand, is difficult to use in high-risk areas such as important social and legal decisions, medical diagnosis, and financial prediction [1]. Although highly explainable machine learning methods exist, such as decision trees, they tend to have low performance and are not suitable for solving complex problems [2]. In this study, I propose a novel explainable AI method with high performance, based on an integrated model of Reservoir Computing and an Autoencoder. Reservoir Computing, a recurrent neural network consisting of three layers (inputs, reservoir, and readouts), can learn nonlinear dynamics using linear learning methods [3]. Recently, a study showed that neural networks can rediscover physical laws using a Variational Autoencoder, which extracts interpretable features from the training data [4]. In the integrated model, the features of the training data are learned by the autoencoder structure together with the linear learning rule of reservoir computing, so these features can be expressed as a linear formula that a human can readily understand. To validate the integrated model, I tested it on predicting trends of the S&P 500 index. The model achieved more than 80% accuracy and reported which features were most important to the prediction in terms of the weighted linear formula.
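The key property the abstract relies on is that in reservoir computing only a linear readout is trained, so the prediction is an explicit weighted sum of reservoir features whose weights can be inspected. A minimal sketch of that idea in NumPy (all sizes, the toy "trend" task, and hyperparameters here are illustrative assumptions, not the authors' actual model or data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the abstract does not specify the architecture.
n_inputs, n_reservoir = 1, 200

# Fixed random input and recurrent weights (never trained).
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(u):
    """Drive the reservoir with an input sequence u of shape (T, n_inputs)."""
    x = np.zeros(n_reservoir)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy stand-in for a trend-prediction task: label whether a noisy
# sine signal rises at the next step (NOT the actual S&P 500 data).
T = 500
u = np.sin(np.linspace(0, 20 * np.pi, T))[:, None] \
    + 0.1 * rng.standard_normal((T, 1))
y = (np.diff(u[:, 0], append=u[-1, 0]) > 0).astype(float)

X = run_reservoir(u)  # (T, n_reservoir) feature matrix

# Linear readout trained by ridge regression -- the only trained part,
# so the output is the weighted linear formula W_out @ features.
lam = 1e-2
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_reservoir), X.T @ y)

pred = (X @ W_out > 0.5).astype(float)

# Ranking |W_out| shows which reservoir features dominate the decision,
# which is the kind of feature-importance report the abstract mentions.
top_features = np.argsort(np.abs(W_out))[::-1][:5]
```

Because the readout is linear, interpretability reduces to reading off `W_out`; the abstract's contribution is coupling this with autoencoder-derived features so that the ranked terms are themselves human-meaningful.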


This study was supported by the National Research Foundation of Korea [NRF-2019R1A6A3A01096892].


1. Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence. 2019, 1(5), 206-215.

2. Defense Advanced Research Projects Agency. Broad Agency Announcement, Explainable Artificial Intelligence (XAI), DARPA-BAA-16-53 (DARPA, 2016); https://www.darpa.mil/attachments/DARPA-BAA-16-53.pdf

3. Jaeger, H. and Haas, H. Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication. Science. 2004, 304(5667), 78-80.

4. Iten, R., et al. Discovering Physical Concepts with Neural Networks. Physical Review Letters. 2020, 124, 010508.



Korea Advanced Institute of Science and Technology

Monday July 20, 2020 9:00pm - 10:00pm CEST
Slot 15