Is it time to switch to Word Embedding and Recurrent Neural Networks for Spoken Language Understanding?

Vedran Vukotic 1, Christian Raymond 1, Guillaume Gravier 1
1 LinkMedia - Creating and exploiting explicit links between multimedia fragments
Inria Rennes – Bretagne Atlantique, IRISA-D6 - MEDIA ET INTERACTIONS
Abstract: Recently, word embedding representations have been investigated for slot filling in Spoken Language Understanding (SLU), along with the use of neural networks as classifiers. Neural networks, especially recurrent neural networks, which are specifically adapted to sequence labeling problems, have been applied successfully on the popular ATIS database. In this work, we compare these models with the previously state-of-the-art Conditional Random Fields (CRF) classifier on a more challenging SLU database. We show that, despite the efficient word representations used within these neural networks, their ability to process sequences is still significantly lower than that of CRF, while they also incur higher computational costs, and that the ability of CRF to model output label dependencies is crucial for SLU.
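The abstract's central claim is that modeling dependencies between output labels, as a CRF does, matters for slot filling. A minimal sketch of that idea is Viterbi decoding over per-token scores plus label-transition scores: the transitions can forbid inconsistent BIO sequences (e.g. an I- tag right after O) that a purely local classifier cannot rule out. The labels, utterance, and all scores below are invented for illustration and are not the paper's actual experimental setup.

```python
# Sketch of why output-label dependencies matter in slot filling.
# All scores are made up for illustration; this is not the paper's setup.
labels = ["O", "B-city", "I-city"]

# Per-token emission scores (e.g. from a local classifier) for the
# ATIS-style utterance "flights to new york".
emissions = [
    {"O": 2.0, "B-city": 0.1, "I-city": 0.1},  # flights
    {"O": 2.0, "B-city": 0.2, "I-city": 0.1},  # to
    {"O": 0.5, "B-city": 1.5, "I-city": 1.4},  # new
    {"O": 0.4, "B-city": 1.2, "I-city": 1.6},  # york
]

# Transition scores: a CRF-like model can heavily penalize I-city
# after O, enforcing BIO consistency across the whole sequence.
transitions = {
    ("O", "O"): 0.5, ("O", "B-city"): 0.5, ("O", "I-city"): -100.0,
    ("B-city", "O"): 0.0, ("B-city", "B-city"): 0.0, ("B-city", "I-city"): 1.0,
    ("I-city", "O"): 0.0, ("I-city", "B-city"): 0.0, ("I-city", "I-city"): 0.5,
}

def viterbi(emissions, labels, transitions):
    """Return the highest-scoring label sequence under the scores above."""
    # score[l] = best score of any partial path ending in label l.
    score = {l: emissions[0][l] for l in labels}
    back = []  # backpointers, one dict per token after the first
    for em in emissions[1:]:
        new_score, pointers = {}, {}
        for l in labels:
            prev = max(labels, key=lambda p: score[p] + transitions[(p, l)])
            new_score[l] = score[prev] + transitions[(prev, l)] + em[l]
            pointers[l] = prev
        back.append(pointers)
        score = new_score
    # Backtrack from the best final label to recover the full path.
    best = max(labels, key=score.get)
    path = [best]
    for pointers in reversed(back):
        path.append(pointers[path[-1]])
    return list(reversed(path))

print(viterbi(emissions, labels, transitions))
# → ['O', 'O', 'B-city', 'I-city']
```

Without the transition term, each token would be labeled independently and "new" and "york" could receive an inconsistent tag sequence; the joint decoding is what ties the span together.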
Document type:
Conference paper
InterSpeech, Sep 2015, Dresden, Germany
https://hal.inria.fr/hal-01196915
Contributor: Christian Raymond
Submitted on: Thursday, September 10, 2015 - 15:58:19
Last modified on: Wednesday, August 2, 2017 - 10:10:13
Long-term archiving on: Tuesday, December 29, 2015 - 00:06:16

File

Interspeech2015.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-01196915, version 1

Citation

Vedran Vukotic, Christian Raymond, Guillaume Gravier. Is it time to switch to Word Embedding and Recurrent Neural Networks for Spoken Language Understanding?. InterSpeech, Sep 2015, Dresden, Germany. 〈hal-01196915〉
