Conference Paper, Year: 2011

Location of an Inhabitant for Domotic Assistance Through Fusion of Audio and Non-Visual Data

Abstract

In this paper, a new method to locate a person using multimodal non-visual sensors and microphones in a pervasive environment is presented. The information extracted from the sensors is combined using a two-level dynamic network to obtain location hypotheses. This method was tested within two smart homes using data from experiments involving about 25 participants. The preliminary results show that an accuracy of 90% can be reached using several uncertain sources. The use of implicit localisation sources, such as the speech recognition mainly used in this project for voice commands, can improve performance in many cases.
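The abstract does not detail the paper's two-level dynamic network, but the general idea of fusing several uncertain, room-level evidence sources and then smoothing the estimate over time can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the authors' method: the source names, reliability weights, and the stay-in-place transition model are all illustrative.

```python
# Minimal sketch: fuse uncertain room-level evidence from several
# non-visual sources (e.g. PIR sensors, door switches, speech/sound
# recognition) and smooth the belief over time. The two stages only
# loosely mirror a two-level dynamic network; all numbers are made up.

ROOMS = ["kitchen", "bedroom", "bathroom", "living_room"]

# Hypothetical reliability weight per source (not from the paper).
SOURCE_WEIGHTS = {"door_switch": 0.9, "pir": 0.7, "speech": 0.8}

def fuse_sources(observations):
    """Level 1: combine per-source room scores into one distribution."""
    scores = {room: 1.0 for room in ROOMS}
    for source, dist in observations.items():
        w = SOURCE_WEIGHTS.get(source, 0.5)
        for room in ROOMS:
            # Blend each source's belief with a uniform prior according
            # to its reliability, then combine sources multiplicatively.
            scores[room] *= w * dist.get(room, 0.0) + (1 - w) / len(ROOMS)
    total = sum(scores.values())
    return {room: s / total for room in scores}

def smooth(prev_belief, evidence, stay_prob=0.8):
    """Level 2: temporal model assuming the inhabitant usually stays put."""
    belief = {}
    for room in ROOMS:
        move_in = sum(p * (1 - stay_prob) / (len(ROOMS) - 1)
                      for r, p in prev_belief.items() if r != room)
        prior = prev_belief[room] * stay_prob + move_in
        belief[room] = prior * evidence[room]
    total = sum(belief.values())
    return {room: b / total for room in belief}

if __name__ == "__main__":
    belief = {room: 1.0 / len(ROOMS) for room in ROOMS}   # uniform start
    observations = {
        "pir": {"kitchen": 0.6, "living_room": 0.4},
        "speech": {"kitchen": 0.8, "bedroom": 0.2},
    }
    belief = smooth(belief, fuse_sources(observations))
    print(max(belief, key=belief.get), belief)
```

In this toy setup, an implicit source such as speech recognition simply contributes one more distribution over rooms, which is how a voice-command channel could reinforce a location hypothesis.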
Main file: 2011_PervasiveHealth_Chahuara_draft.pdf (311.77 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00953556, version 1 (28-02-2014)

Identifiers

  • HAL Id: hal-00953556, version 1

Cite

Pedro Chahuara, François Portet, Michel Vacher. Location of an Inhabitant for Domotic Assistance Through Fusion of Audio and Non-Visual Data. Pervasive Health, May 2011, Dublin, Ireland. pp.1-4. ⟨hal-00953556⟩