Exploring Large Virtual Environments by Thoughts using a Brain-Computer Interface based on Motor Imagery and High-Level Commands

Fabien Lotte 1,*, Aurélien Van Langhenhove 2, Fabrice Lamarche 2, Thomas Ernest 2, Yann Renard 2, Bruno Arnaldi 2, Anatole Lécuyer 2
* Corresponding author
2 BUNRAKU - Perception, decision and action of real and virtual humans in virtual environments and impact on real environments
IRISA - Institut de Recherche en Informatique et Systèmes Aléatoires, ENS Cachan - École normale supérieure - Cachan, Inria Rennes – Bretagne Atlantique
Abstract: Brain-computer interfaces (BCI) are interaction devices which enable users to send commands to a computer by using brain activity only. In this paper, we propose a new interaction technique that enables users to perform complex interaction tasks and to navigate within large virtual environments (VE) by using only a BCI based on imagined movements (motor imagery). This technique enables the user to send high-level mental commands, leaving the application in charge of most of the complex and tedious details of the interaction task. More precisely, it is based on points of interest and enables subjects to send only a few commands to the application in order to navigate from one point of interest to another. Interestingly, the points of interest for a given VE can be generated automatically by processing the geometry of this VE. As the navigation between two points of interest is also automatic, the proposed technique can be used to navigate efficiently by thought within any VE. The input of this interaction technique is a newly designed self-paced BCI which enables the user to send three different commands based on motor imagery. This BCI is based on a fuzzy inference system with reject options. In order to evaluate the efficiency of the proposed interaction technique, we compared it with the state-of-the-art method during a task of virtual museum exploration. The state-of-the-art method uses low-level commands, which means that each mental state of the user is associated with a simple command such as turning left or moving forward in the VE. In contrast, our method based on high-level commands enables users to simply select their destination, leaving the application to perform the necessary movements to reach this destination. Our results showed that with our interaction technique, users can navigate within a virtual museum almost twice as fast as with low-level commands, and with nearly half as many commands, meaning less stress and more comfort for the user. This suggests that our technique makes efficient use of the limited capacity of current motor imagery-based BCIs in order to perform complex interaction tasks in VE, opening the way to promising new applications.
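To illustrate the high-level interaction scheme described in the abstract, the sketch below shows how the output of a three-class, self-paced motor-imagery BCI could drive point-of-interest selection and automatic navigation. This is a minimal, hypothetical Python sketch: the class and function names (MotorImageryBCI, plan_path, navigate) are illustrative assumptions and do not reflect the authors' actual implementation, and the BCI output is simulated rather than decoded from EEG.

```python
# Hypothetical sketch of point-of-interest navigation driven by a 3-class,
# self-paced motor-imagery BCI. Names and structure are illustrative only.
from dataclasses import dataclass
from typing import List, Optional, Tuple
import random


@dataclass
class PointOfInterest:
    name: str
    position: Tuple[float, float]  # (x, y) coordinates in the virtual environment


class MotorImageryBCI:
    """Stand-in for a self-paced, 3-class motor-imagery classifier.

    A real system would decode EEG (here, with a fuzzy inference system and a
    reject option); this mock simply simulates its output for demonstration.
    """
    COMMANDS = ("left", "right", "select")

    def read_command(self) -> Optional[str]:
        # None models the reject / non-control state of the self-paced BCI.
        return random.choice(self.COMMANDS + (None, None))


def plan_path(start: Tuple[float, float], goal: Tuple[float, float]) -> List[Tuple[float, float]]:
    """Placeholder for automatic path planning between two points of interest."""
    return [start, goal]


def navigate(bci: MotorImageryBCI, pois: List[PointOfInterest], steps: int = 20) -> None:
    current = pois[0]
    highlighted = 0
    for _ in range(steps):
        cmd = bci.read_command()
        if cmd == "left":                      # browse points of interest
            highlighted = (highlighted - 1) % len(pois)
        elif cmd == "right":
            highlighted = (highlighted + 1) % len(pois)
        elif cmd == "select":                  # high-level command: go there
            target = pois[highlighted]
            path = plan_path(current.position, target.position)
            print(f"Navigating automatically along {path} to {target.name}")
            current = target
        # cmd is None: reject state, the application does nothing


if __name__ == "__main__":
    museum = [
        PointOfInterest("Entrance hall", (0.0, 0.0)),
        PointOfInterest("Sculpture room", (10.0, 2.0)),
        PointOfInterest("Painting gallery", (4.0, 8.0)),
    ]
    navigate(MotorImageryBCI(), museum)
```

The key design point, as argued in the abstract, is that the user only browses and selects destinations; path planning and locomotion are delegated to the application, so the few, slow commands available from a motor-imagery BCI are spent on high-level decisions rather than on low-level steering.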
Document type: Journal article
Presence: Teleoperators and Virtual Environments, Massachusetts Institute of Technology Press (MIT Press), 2010, 19 (1), pp.54-70

Cited literature: 36 references

https://hal.inria.fr/inria-00445614
Contributor: Fabien Lotte
Submitted on: Wednesday, April 28, 2010 - 11:31:04
Last modified on: Tuesday, November 21, 2017 - 15:22:39
Document(s) archived on: Tuesday, September 14, 2010 - 16:34:28

File

presence_2010_Museum_revisedDr...
Files produced by the author(s)

Identifiers

  • HAL Id: inria-00445614, version 1

Citation

Fabien Lotte, Aurélien Van Langhenhove, Fabrice Lamarche, Thomas Ernest, Yann Renard, et al. Exploring Large Virtual Environments by Thoughts using a Brain-Computer Interface based on Motor Imagery and High-Level Commands. Presence: Teleoperators and Virtual Environments, Massachusetts Institute of Technology Press (MIT Press), 2010, 19 (1), pp. 54-70. 〈inria-00445614〉


Metrics

Record views: 523

File downloads: 492