Monday, July 20 • 7:00pm - 8:00pm
P66: Brain-computer interfaces using stereotactic electroencephalography: Identification of discriminative recording sites for decoding imagined speech

Please join my meeting on Zoom:
Link: https://unimelb.zoom.us/j/91744967043?pwd=Mk5zanVnS2dNOEhhSGY3NTUxcFdidz09 
Password: 585617 


Kevin Meng, David Grayden, Mark Cook, Farhad Goodarzy
  
As part of the monitoring of medication-resistant epilepsy before resective surgery, patients are implanted with electrocorticography (ECoG) electrode arrays placed on the surface of the cortex or with stereotactic electroencephalography (SEEG) depth electrodes penetrating the cortex. Both recording modalities measure local field potentials (LFPs) at their respective target locations. These patients are occasionally recruited to participate voluntarily in brain-computer interface (BCI) research. In recent years, ECoG-based BCIs have demonstrated reliable long-term decoding of various cortical processes involved in mental imagery tasks. Despite similarities in clinical application and decoding strategies, SEEG-based BCIs have been the focus of only a limited number of studies. While the sparsity of their cortical coverage is a disadvantage, SEEG depth electrodes can target bilateral combinations of deeper brain structures that are inaccessible with ECoG [1].

Here, we propose a framework for SEEG-based BCIs that identifies discriminative recording sites for decoding imagined speech. Three patients with epilepsy were implanted with 10 to 12 SEEG depth electrodes, each carrying 8 to 15 recording sites. Electrode placement and duration of monitoring were based solely on the requirements of clinical evaluation. Signals were amplified and recorded at a sampling rate of 5 kHz. The task consisted of listening to utterances and producing overt and covert (imagined) utterances of 20 monosyllabic English words comprising all combinations of five consonant patterns (/b_t/, /m_n/, /r_d/, /s_t/, /t_n/) and four vowels (/æ/, /e/, /i:/, /u:/).
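
For concreteness, the 20-word vocabulary can be enumerated as the Cartesian product of the consonant patterns and vowels; the sketch below uses phonetic labels only, since the orthographic word forms used in the experiment are not specified above:

```python
# Enumerate the stimulus set: 5 consonant patterns x 4 vowels = 20 words.
from itertools import product

CONSONANT_PATTERNS = ["b_t", "m_n", "r_d", "s_t", "t_n"]
VOWELS = ["ae", "e", "i:", "u:"]               # /æ/, /e/, /i:/, /u:/
CONDITIONS = ["listening", "overt", "covert"]  # each word under all three

STIMULI = list(product(CONSONANT_PATTERNS, VOWELS))
assert len(STIMULI) == 20  # all combinations yield the full vocabulary
```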

We determined the relative importance of recording sites from the classification accuracies obtained with features extracted at the corresponding electrode locations. Each trial was associated with a label (consonant pattern or vowel) and a set of features consisting of normalized log-transformed power spectral densities at different time points in selected frequency bands: delta (1-4 Hz), theta (4-8 Hz), alpha (8-12 Hz), beta (12-30 Hz), gamma 1 (30-45 Hz), gamma 2 (55-95 Hz), gamma 3 (105-145 Hz), and gamma 4 (155-195 Hz). A pairwise logistic regression classifier was used to predict the labels. Parameters were trained separately for each combination of recording sites, as well as for each condition (listening, overt, covert), patient, and pair of labels. The mean classification rate across all pairs of labels quantified the discriminative power of individual and combined recording sites.
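
The following is a minimal sketch of this feature extraction and pairwise decoding scheme, assuming Welch periodograms for the power spectral densities, z-score normalization, 0.5 s analysis windows, and 5-fold cross-validation; none of these implementation details are specified above, and `band_log_power` and `mean_pairwise_accuracy` are hypothetical helper names:

```python
from itertools import combinations

import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

FS = 5000  # sampling rate (Hz), as reported above
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12), "beta": (12, 30),
         "gamma1": (30, 45), "gamma2": (55, 95), "gamma3": (105, 145),
         "gamma4": (155, 195)}

def band_log_power(trial, fs=FS, win_s=0.5):
    """Normalized log-transformed band power per channel and time window.

    trial: array (n_channels, n_samples) of LFPs from a subset of recording
    sites. Returns a flat feature vector (channels x windows x bands).
    """
    win = int(win_s * fs)
    feats = []
    for w in range(trial.shape[1] // win):       # features at different time points
        f, pxx = welch(trial[:, w * win:(w + 1) * win], fs=fs, nperseg=win)
        for lo, hi in BANDS.values():
            sel = (f >= lo) & (f < hi)
            feats.append(np.log(pxx[:, sel].mean(axis=1)))
    feats = np.concatenate(feats)
    return (feats - feats.mean()) / feats.std()  # z-scoring is an assumption

def mean_pairwise_accuracy(X, y):
    """Mean cross-validated accuracy over all pairs of labels.

    X: (n_trials, n_features); y: (n_trials,) consonant-pattern or vowel labels.
    """
    scores = []
    for a, b in combinations(np.unique(y), 2):   # one binary model per label pair
        sel = np.isin(y, [a, b])
        clf = LogisticRegression(max_iter=1000)
        scores.append(cross_val_score(clf, X[sel], y[sel], cv=5).mean())
    return float(np.mean(scores))
```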

Across all patients, our results consistently show that the depth electrodes most relevant for decoding imagined speech lie in both the left and right superior temporal gyri. Anatomical analyses of these electrode locations revealed that recording sites in the grey matter were the most discriminative, in line with previous studies of speech BCIs [2,3]. In addition to providing a better understanding of the neural processes underlying imagined speech, our practical framework may be applied to reduce feature dimensionality and computational cost while improving accuracy in real-time SEEG-based BCI applications.
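
Given those helpers, the site-selection step reduces to scoring each recording site by its own mean pairwise accuracy and keeping the highest-ranked sites. The `rank_sites` function below is a hypothetical sketch that reuses `band_log_power` and `mean_pairwise_accuracy` from the previous snippet; the exact selection criterion used in the study is not stated above:

```python
import numpy as np  # band_log_power and mean_pairwise_accuracy as defined above

def rank_sites(trials, y):
    """Rank recording sites from most to least discriminative.

    trials: array (n_trials, n_sites, n_samples); y: (n_trials,) labels.
    Each site is scored in isolation by its mean pairwise decoding accuracy;
    the top-ranked sites can then be combined to cut feature dimensionality.
    """
    scores = {}
    for s in range(trials.shape[1]):
        X = np.stack([band_log_power(trials[t, s:s + 1])
                      for t in range(len(trials))])
        scores[s] = mean_pairwise_accuracy(X, y)
    return sorted(scores, key=scores.get, reverse=True)
```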

References
[1] Herff, C., Krusienski, D. J., & Kubben, P. (2020). The Potential of Stereotactic-EEG for Brain-Computer Interfaces: Current Progress and Future Directions. Frontiers in Neuroscience, 14, 123.
[2] Yi, H. G., Leonard, M. K., & Chang, E. F. (2019). The encoding of speech sounds in the superior temporal gyrus. Neuron, 102(6), 1096-1110.
[3] Martin, S., Brunner, P., Iturrate, I., Millán, J. D. R., Schalk, G., Knight, R. T., & Pasley, B. N. (2016). Word pair classification during imagined speech using direct brain recordings. Scientific Reports, 6, 25803.

Speakers
Kevin Meng

Biomedical Engineering, University of Melbourne



Monday July 20, 2020 7:00pm - 8:00pm CEST
Slot 03