Exploring Large Virtual Environments by Thoughts using a Brain-Computer Interface based on Motor Imagery and High-Level Commands

Fabien Lotte 1,*, Aurélien Van Langhenhove 2, Fabrice Lamarche 2, Thomas Ernest 2, Yann Renard 2, Bruno Arnaldi 2, Anatole Lécuyer 2
* Corresponding author
2 BUNRAKU - Perception, decision and action of real and virtual humans in virtual environments and impact on real environments
IRISA - Institut de Recherche en Informatique et Systèmes Aléatoires, ENS Cachan - École normale supérieure - Cachan, Inria Rennes – Bretagne Atlantique
Abstract: Brain-computer interfaces (BCI) are interaction devices that enable users to send commands to a computer by using brain activity only. In this paper, we propose a new interaction technique that enables users to perform complex interaction tasks and to navigate within large virtual environments (VE) using only a BCI based on imagined movements (motor imagery). This technique lets the user send high-level mental commands, leaving the application in charge of most of the complex and tedious details of the interaction task. More precisely, it is based on points of interest and requires subjects to send only a few commands to the application in order to navigate from one point of interest to another. Interestingly, the points of interest for a given VE can be generated automatically by processing the geometry of that VE. Since the navigation between two points of interest is also automatic, the proposed technique can be used to navigate efficiently, by thought, within any VE. The input of this interaction technique is a newly designed self-paced BCI that enables the user to send three different commands based on motor imagery. This BCI relies on a fuzzy inference system with reject options. To evaluate the efficiency of the proposed interaction technique, we compared it with the state-of-the-art method during a virtual museum exploration task. The state-of-the-art method uses low-level commands, meaning that each mental state of the user is associated with a simple command such as turning left or moving forward in the VE. In contrast, our method based on high-level commands enables the user to simply select a destination, leaving the application to perform the movements needed to reach it.
Our results showed that with our interaction technique, users can navigate within a virtual museum almost twice as fast as with low-level commands, and with nearly half as many commands, implying less stress and more comfort for the user. This suggests that our technique makes efficient use of the limited capacity of current motor imagery-based BCIs to perform complex interaction tasks in VEs, opening the way to promising new applications.
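The high-level interaction scheme described above can be illustrated with a minimal sketch: two motor-imagery commands cycle through the available points of interest, and a third confirms the destination, after which the application travels there automatically. This is an illustrative assumption, not the authors' implementation; the class, command names, and point-of-interest labels are hypothetical.

```python
# Illustrative sketch (assumed, not the paper's code): point-of-interest
# navigation driven by three motor-imagery commands.

POINTS_OF_INTEREST = ["entrance", "sculpture hall", "painting room", "exit"]


class PoiNavigator:
    """Cycle through points of interest with two commands; a third
    command confirms the destination and triggers automatic travel."""

    def __init__(self, pois):
        self.pois = pois
        self.index = 0            # currently highlighted point of interest
        self.position = pois[0]   # user's current location in the VE

    def handle_command(self, command):
        # "left"/"right" imagined-movement classes move the highlight;
        # the third class ("select") confirms it. The application, not
        # the user, then performs the low-level movements to get there.
        if command == "left":
            self.index = (self.index - 1) % len(self.pois)
        elif command == "right":
            self.index = (self.index + 1) % len(self.pois)
        elif command == "select":
            self.position = self.pois[self.index]  # automatic navigation
        return self.position


nav = PoiNavigator(POINTS_OF_INTEREST)
nav.handle_command("right")   # highlight "sculpture hall"
nav.handle_command("right")   # highlight "painting room"
nav.handle_command("select")  # travel there automatically
print(nav.position)           # -> painting room
```

Only three selections were needed to cross the environment, whereas a low-level scheme would require one command per turn and forward step along the whole path.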

https://hal.inria.fr/inria-00445614
Contributor: Fabien Lotte
Submitted on: Wednesday, April 28, 2010 - 11:31:04 AM
Last modification on: Thursday, May 9, 2019 - 4:16:06 PM
Long-term archiving on: Tuesday, September 14, 2010 - 4:34:28 PM


Identifiers

  • HAL Id: inria-00445614, version 1

Citation

Fabien Lotte, Aurélien Van Langhenhove, Fabrice Lamarche, Thomas Ernest, Yann Renard, et al. Exploring Large Virtual Environments by Thoughts using a Brain-Computer Interface based on Motor Imagery and High-Level Commands. Presence: Teleoperators and Virtual Environments, MIT Press, 2010, 19 (1), pp. 54-70. ⟨inria-00445614⟩
