Location of an Inhabitant for Domotic Assistance Through Fusion of Audio and Non-Visual Data

Abstract: This paper presents a new method for locating a person in a pervasive environment using multimodal non-visual sensors and microphones. The information extracted from the sensors is combined through a two-level dynamic network to produce location hypotheses. The method was tested in two smart homes using data from experiments involving about 25 participants. Preliminary results show that an accuracy of 90% can be reached by combining several uncertain sources. Implicit localisation sources, such as the speech recogniser used in this project mainly for voice commands, can improve performance in many cases.
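The abstract describes combining several uncertain localisation sources into a single hypothesis. As an illustration only, the sketch below fuses hypothetical sensor likelihoods with a naive Bayesian product over rooms; the paper's actual method is a two-level dynamic network, and the room names, sensor names, and probability values here are invented for the example.

```python
# Illustrative sketch (NOT the paper's two-level dynamic network):
# naive Bayesian fusion of uncertain, non-visual localisation sources.
# Rooms, sensors, and probabilities below are hypothetical.

ROOMS = ["kitchen", "bedroom", "bathroom"]

def fuse(sources):
    """Each source maps room -> P(observation | room).
    Returns a normalised posterior over ROOMS, assuming a uniform
    prior and conditionally independent sources."""
    scores = {room: 1.0 for room in ROOMS}
    for likelihood in sources:
        for room in ROOMS:
            # small floor so one silent source cannot zero out a room
            scores[room] *= likelihood.get(room, 1e-6)
    total = sum(scores.values())
    return {room: s / total for room, s in scores.items()}

# Example observations: a door-contact sensor fires near the kitchen,
# and the speech recogniser localises a voice command to the kitchen.
door_contact = {"kitchen": 0.7, "bedroom": 0.2, "bathroom": 0.1}
microphone   = {"kitchen": 0.6, "bedroom": 0.3, "bathroom": 0.1}

posterior = fuse([door_contact, microphone])
best_room = max(posterior, key=posterior.get)
```

Adding an implicit source such as speech recognition sharpens the posterior in exactly the way the abstract suggests: each agreeing source multiplies the evidence for the correct room.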
Document type: Conference papers

Cited literature: 11 references

https://hal.inria.fr/hal-00953556
Contributor: Michel Vacher
Submitted on: Friday, February 28, 2014 - 12:26:51 PM
Last modification on: Monday, February 11, 2019 - 4:36:02 PM
Document(s) archived on: Wednesday, May 28, 2014 - 1:15:24 PM

File: 2011_PervasiveHealth_Chahuara_... (produced by the author(s))

Identifiers

  • HAL Id: hal-00953556, version 1

Citation

Pedro Chahuara, François Portet, Michel Vacher. Location of an Inhabitant for Domotic Assistance Through Fusion of Audio and Non-Visual Data. Pervasive Health, May 2011, Dublin, Ireland. pp.1-4. ⟨hal-00953556⟩

Metrics

Record views: 526
File downloads: 102