Towards a Fully Interpretable EEG-based BCI System

Fabien Lotte 1,*, Anatole Lécuyer 2, Cuntai Guan 1
* Corresponding author
2 BUNRAKU - Perception, decision and action of real and virtual humans in virtual environments and impact on real environments
IRISA - Institut de Recherche en Informatique et Systèmes Aléatoires, ENS Cachan - École normale supérieure - Cachan, Inria Rennes – Bretagne Atlantique
Abstract: Most Brain-Computer Interfaces (BCI) are based on machine learning and behave like black boxes, i.e., they cannot be interpreted. However, designing an interpretable BCI would make it possible to discuss, verify or improve what the BCI has automatically learnt from brain signals, and possibly to gain new insights about the brain. In this paper, we present an algorithm to design a fully interpretable BCI. It can explain which power levels, in which brain regions and frequency bands, correspond to which mental state, using "if-then" rules expressed with simple words. Evaluations showed that this algorithm led to a truly interpretable BCI, as the automatically derived rules were consistent with the literature. They also showed that what an interpretable BCI has learnt can actually be verified and corrected so as to further improve it.
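The abstract does not detail how these rules are built, but the general idea of linguistic "if-then" rules over band-power features can be illustrated with a minimal sketch. Everything below (feature names, membership functions, thresholds, and the two motor-imagery rules) is hypothetical and only shows the flavour of such interpretable rules, not the paper's actual algorithm.

```python
# Hypothetical illustration of linguistic "if-then" rules for an interpretable BCI.
# Feature names, membership functions and rules are made up for illustration;
# they are NOT the method described in the paper.

def low(x):
    # Membership of a normalized band-power value (in [0, 1]) in the "LOW" term.
    return max(0.0, min(1.0, (0.5 - x) / 0.5))

def high(x):
    # Membership of a normalized band-power value (in [0, 1]) in the "HIGH" term.
    return max(0.0, min(1.0, (x - 0.5) / 0.5))

# Each rule: (human-readable description, antecedent over the feature dict, mental state).
RULES = [
    ("IF mu power over C3 is HIGH AND mu power over C4 is LOW THEN left-hand imagery",
     lambda f: min(high(f["C3_mu"]), low(f["C4_mu"])), "left-hand imagery"),
    ("IF mu power over C3 is LOW AND mu power over C4 is HIGH THEN right-hand imagery",
     lambda f: min(low(f["C3_mu"]), high(f["C4_mu"])), "right-hand imagery"),
]

def classify(features):
    """Return the mental state of the rule that fires most strongly, plus that rule."""
    best = max(RULES, key=lambda rule: rule[1](features))
    return best[2], best[0]

# Example: strong mu-band power over C3, weak over C4 (values are made up).
state, rule = classify({"C3_mu": 0.8, "C4_mu": 0.2})
print(state)   # -> left-hand imagery
print(rule)    # the human-readable rule that explains the decision
```

Because each decision is traced back to a rule expressed with simple words, such a classifier can be inspected, compared with the neurophysiological literature, and manually corrected, which is the kind of verification the abstract describes.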

https://hal.inria.fr/inria-00504658
Contributor : Fabien Lotte
Submitted on : Wednesday, July 21, 2010 - 8:51:06 AM
Last modification on : Thursday, May 9, 2019 - 4:16:06 PM
Long-term archiving on : Friday, October 22, 2010 - 4:11:51 PM

File

jne2010_lotte.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : inria-00504658, version 1

Citation

Fabien Lotte, Anatole Lécuyer, Cuntai Guan. Towards a Fully Interpretable EEG-based BCI System. 2010. ⟨inria-00504658⟩
