Learning to recognize touch gestures: recurrent vs. convolutional features and dynamic sampling

Abstract: We propose a fully automatic method for learning gestures on large touch devices in a potentially multi-user context. The goal is to learn general models capable of adapting to different gestures, user styles and hardware variations (e.g., device sizes, sampling frequencies and regularities). Based on deep neural networks, our method features a novel dynamic sampling and temporal normalization component, which transforms variable-length gestures into fixed-length representations while preserving finger/surface contact transitions, that is, the topology of the signal. This sequential representation is then processed with a convolutional model capable, unlike recurrent networks, of learning hierarchical representations at different levels of abstraction. To demonstrate the merit of the proposed method, we introduce a new touch gesture dataset with 6591 gestures performed by 27 people, which is, to the best of our knowledge, the first of its kind: a publicly available multi-touch gesture dataset for interaction. We also test our method on a standard symbolic touch gesture recognition dataset, the MMG dataset, outperforming the state of the art and reporting near-perfect performance.
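To make the dynamic sampling idea concrete, here is a minimal sketch of contact-preserving temporal resampling. This is an illustration, not the authors' implementation: it assumes each gesture arrives as an (N, 3) array of (x, y, touching) samples, and the function name resample_gesture and the target length of 64 are hypothetical choices. The sequence is split at every touch-down/touch-up transition, and each resulting segment receives at least one output sample, so no contact transition is lost when the gesture is normalized to a fixed length.

    import numpy as np

    def resample_gesture(points, target_len=64):
        """Resample a variable-length gesture to target_len samples.

        points: array of shape (N, 3) with columns (x, y, touching),
        where touching is 1 while the finger is on the surface, 0 otherwise.
        Assumes target_len is well above the number of contact transitions.
        """
        points = np.asarray(points, dtype=float)
        touching = points[:, 2]
        # Split the sequence at every contact transition (touch-down / touch-up).
        boundaries = np.flatnonzero(np.diff(touching) != 0) + 1
        segments = np.split(points, boundaries)
        # Give each segment a share of the sample budget proportional to its
        # length, but at least one sample, so every transition is preserved.
        lengths = np.array([len(s) for s in segments], dtype=float)
        budget = np.maximum(1, np.round(target_len * lengths / lengths.sum())).astype(int)
        budget[np.argmax(budget)] += target_len - budget.sum()  # fix rounding drift
        out = []
        for seg, k in zip(segments, budget):
            # Linearly interpolate x and y at k evenly spaced time positions.
            t_old = np.linspace(0.0, 1.0, len(seg))
            t_new = np.linspace(0.0, 1.0, k)
            x = np.interp(t_new, t_old, seg[:, 0])
            y = np.interp(t_new, t_old, seg[:, 1])
            c = np.full(k, seg[0, 2])  # contact state is constant within a segment
            out.append(np.stack([x, y, c], axis=1))
        return np.concatenate(out)

The resulting fixed-length (target_len, 3) array is the kind of sequential representation that can then be fed to a 1D convolutional network, as described in the abstract.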
Document type:
Conference paper
Automatic Face and Gesture Recognition, May 2018, Xi'an, China

https://hal.inria.fr/hal-01720716
Contributor: Christian Wolf
Submitted on: Thursday, March 1, 2018 - 14:56:27
Last modified on: Thursday, April 19, 2018 - 14:38:06

Identifiers

  • HAL Id: hal-01720716, version 1
  • arXiv: 1802.09901

Citation

Quentin Debard, Christian Wolf, Stéphane Canu, Julien Arné. Learning to recognize touch gestures: recurrent vs. convolutional features and dynamic sampling. Automatic Face and Gesture Recognition, May 2018, Xi'an, China. 〈hal-01720716〉
