Towards a Fully Interpretable EEG-based BCI System

Fabien Lotte 1,*, Anatole Lécuyer 2, Cuntai Guan 1
* Corresponding author
2 BUNRAKU - Perception, decision and action of real and virtual humans in virtual environments and impact on real environments
IRISA - Institut de Recherche en Informatique et Systèmes Aléatoires, ENS Cachan - École normale supérieure - Cachan, Inria Rennes – Bretagne Atlantique
Abstract: Most Brain-Computer Interfaces (BCI) are based on machine learning and behave like black boxes: what they have learnt cannot be interpreted. However, designing an interpretable BCI would make it possible to discuss, verify, or improve what the BCI has automatically learnt from brain signals, and possibly to gain new insights about the brain. In this paper, we present an algorithm to design a fully interpretable BCI. It can explain which power levels, in which brain regions and frequency bands, correspond to which mental state, using "if-then" rules expressed with simple words. Evaluations showed that this algorithm led to a truly interpretable BCI, as the automatically derived rules were consistent with the literature. They also showed that we can actually verify and correct what an interpretable BCI has learnt, so as to further improve it.
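To illustrate the kind of "if-then" rules the abstract describes, here is a minimal sketch, not the authors' algorithm: a tiny rule base maps linguistic levels of EEG band power ("low", "medium", "high") in given regions and frequency bands to mental states, and each decision can be read back as a plain-words rule. The region names (C3, C4), the mu band, the thresholds, and the rule contents are all hypothetical placeholders chosen for the example.

```python
# Illustrative sketch of an interpretable "if-then" rule classifier over
# EEG band powers. Regions, bands, thresholds, and rules are hypothetical.

def linguistic_level(power, low=0.33, high=0.66):
    """Map a normalized band power in [0, 1] to a simple word."""
    if power < low:
        return "low"
    if power < high:
        return "medium"
    return "high"

# Hypothetical rule base: conditions on (region, band) levels -> mental state.
RULES = [
    ({("C3", "mu"): "low"}, "right-hand motor imagery"),
    ({("C4", "mu"): "low"}, "left-hand motor imagery"),
    ({("C3", "mu"): "high", ("C4", "mu"): "high"}, "rest"),
]

def classify(band_powers):
    """Return (state, readable_rule) for the first rule whose conditions match."""
    levels = {key: linguistic_level(p) for key, p in band_powers.items()}
    for conditions, state in RULES:
        if all(levels.get(k) == v for k, v in conditions.items()):
            readable = " and ".join(
                f"power in the {band} band over {region} is {level}"
                for (region, band), level in conditions.items()
            )
            return state, f"if {readable} then {state}"
    return "unknown", "no rule matched"
```

Because every decision is traceable to a human-readable rule, a domain expert can inspect the rule base, check it against the literature (e.g., mu-band power decreases over the contralateral motor cortex during motor imagery), and correct rules that disagree with it, which is exactly the kind of verification the paper argues for.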
Contributor: Fabien Lotte
Submitted on: Wednesday, 21 July 2010 - 08:51:06
Last modified on: Wednesday, 16 May 2018 - 11:23:14
Long-term archiving on: Friday, 22 October 2010 - 16:11:51




  • HAL Id: inria-00504658, version 1


Fabien Lotte, Anatole Lécuyer, Cuntai Guan. Towards a Fully Interpretable EEG-based BCI System. 2010. 〈inria-00504658〉


