Location of an Inhabitant for Domotic Assistance Through Fusion of Audio and Non-Visual Data

Abstract: In this paper, a new method to locate a person using multimodal non-visual sensors and microphones in a pervasive environment is presented. The information extracted from the sensors is combined using a two-level dynamic network to obtain location hypotheses. This method was tested in two smart homes using data from experiments involving about 25 participants. The preliminary results show that an accuracy of 90% can be reached using several uncertain sources. The use of implicit localisation sources, such as speech recognition, mainly used in this project for voice commands, can improve performance in many cases.
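The abstract does not detail how the uncertain sources are combined; the minimal Python sketch below only illustrates the general idea of fusing several uncertain, non-visual location cues (e.g. a motion sensor, a door contact, and a speech event picked up by a room microphone) into a single room-level hypothesis. The paper's two-level dynamic network is approximated here by a simple weighted log-likelihood fusion; all room names, sensor events, and confidence values are hypothetical.

```python
from collections import defaultdict
from math import log

# Hypothetical room set; the actual homes in the paper may differ.
ROOMS = ["kitchen", "bedroom", "bathroom", "living_room"]

def fuse_location(evidence):
    """Combine uncertain location readings into one room hypothesis.

    evidence: list of (room, confidence, source_weight) tuples, one per
    sensor event (motion sensor, door contact, speech recognition, ...).
    Returns the room with the highest fused score and the score map.
    """
    scores = defaultdict(float)
    for room, confidence, weight in evidence:
        # Spread the remaining probability mass over the other rooms so
        # that conflicting sources do not cancel each other out entirely.
        for candidate in ROOMS:
            p = confidence if candidate == room else (1.0 - confidence) / (len(ROOMS) - 1)
            scores[candidate] += weight * log(max(p, 1e-6))
    best = max(scores, key=scores.get)
    return best, dict(scores)

if __name__ == "__main__":
    # Example: a motion detection in the kitchen, a voice command recognised
    # on the kitchen microphone, and an older door-contact event in the bedroom.
    events = [
        ("kitchen", 0.7, 1.0),   # motion sensor
        ("kitchen", 0.8, 1.5),   # speech recognised on the kitchen microphone
        ("bedroom", 0.6, 0.5),   # stale door-contact event, lower weight
    ]
    room, details = fuse_location(events)
    print(room, details)
```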
Document type:
Conference paper
Pervasive Health, May 2011, Dublin, Ireland. pp.1-4, 2011


https://hal.inria.fr/hal-00953556
Contributor: Michel Vacher
Submitted on: Friday, 28 February 2014 - 12:26:51
Last modified on: Thursday, 11 October 2018 - 08:48:03
Long-term archived on: Wednesday, 28 May 2014 - 13:15:24

File

2011_PervasiveHealth_Chahuara_...
Files produced by the author(s)

Identifiers

  • HAL Id: hal-00953556, version 1

Citation

Pedro Chahuara, François Portet, Michel Vacher. Location of an Inhabitant for Domotic Assistance Through Fusion of Audio and Non-Visual Data. Pervasive Health, May 2011, Dublin, Ireland. pp.1-4, 2011. 〈hal-00953556〉


Metrics

Record views

522

File downloads

91