Conference Paper, Year: 2015

Is it time to switch to Word Embedding and Recurrent Neural Networks for Spoken Language Understanding?

Abstract

Recently, word embedding representations have been investigated for slot filling in Spoken Language Understanding (SLU), along with the use of Neural Networks as classifiers. Neural Networks, especially Recurrent Neural Networks, which are specifically adapted to sequence labeling problems, have been applied successfully to the popular ATIS database. In this work, we compare these models with the previous state-of-the-art Conditional Random Fields (CRF) classifier on a more challenging SLU database. We show that, despite the efficient word representations used within these Neural Networks, their ability to process sequences is still significantly inferior to that of CRF, while they also incur higher computational costs, and that the ability of CRF to model output label dependencies is crucial for SLU.
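To make the slot-filling task and the output-label dependencies mentioned above concrete, here is a minimal, hypothetical sketch of ATIS-style BIO sequence labeling; the tokens, slot names, and helper function are illustrative assumptions, not taken from the paper:

```python
# Hypothetical ATIS-style slot filling example: each token receives a BIO tag.
tokens = ["flights", "from", "boston", "to", "denver"]
labels = ["O", "O", "B-fromloc.city_name", "O", "B-toloc.city_name"]


def valid_bio(seq):
    """Check the kind of output-label dependency a CRF models explicitly:
    an I- tag may only continue a B- or I- tag of the same slot type."""
    prev = "O"
    for tag in seq:
        if tag.startswith("I-"):
            if prev[:1] not in ("B", "I") or prev[2:] != tag[2:]:
                return False
        prev = tag
    return True


print(valid_bio(labels))                       # True: a well-formed tag sequence
print(valid_bio(["O", "I-toloc.city_name"]))   # False: I- tag without a preceding B-
```

A classifier that labels each token independently can emit sequences that violate this constraint; a CRF, by scoring transitions between adjacent output labels, rules such sequences out, which is the property the paper argues is crucial for SLU.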
Main file: Interspeech2015.pdf (155.05 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01196915, version 1 (10-09-2015)

Identifiers

  • HAL Id: hal-01196915, version 1

Cite

Vedran Vukotić, Christian Raymond, Guillaume Gravier. Is it time to switch to Word Embedding and Recurrent Neural Networks for Spoken Language Understanding?. InterSpeech, Sep 2015, Dresden, Germany. ⟨hal-01196915⟩
571 Views
776 Downloads
