References

G. Tur, D. Hakkani-Tür, and L. Heck, What is left to be understood in ATIS?, 2010 IEEE Spoken Language Technology Workshop (SLT), pp.19-24, 2010.
DOI : 10.1109/SLT.2010.5700816

P. J. Price, Evaluation of spoken language systems: the ATIS domain, Proceedings of the Workshop on Speech and Natural Language, HLT '90, 1990.
DOI : 10.3115/116580.116612

L. Hirschman, Multi-site data collection for a spoken language corpus, Proceedings of the Workshop on Speech and Natural Language, HLT '91, pp.7-14, 1992.
DOI : 10.3115/1075527.1075531

C. Raymond and G. Riccardi, Generative and discriminative algorithms for spoken language understanding, Interspeech, pp.1605-1608, 2007.

V. Vukotic, C. Raymond, and G. Gravier, Is it time to switch to word embedding and recurrent neural networks for spoken language understanding?, Interspeech, 2015.
URL : https://hal.archives-ouvertes.fr/hal-01196915

M. Dinarelli, V. Vukotic, and C. Raymond, Label-dependency coding in Simple Recurrent Networks for Spoken Language Understanding, Interspeech, 2017.

G. Mesnil, X. He, L. Deng, and Y. Bengio, Investigation of recurrent-neural-network architectures and learning methods for spoken language understanding, Interspeech, pp.3771-3775, 2013.

G. Mesnil, Y. Dauphin, K. Yao, Y. Bengio, L. Deng et al., Using Recurrent Neural Networks for Slot Filling in Spoken Language Understanding, IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol.23, issue.3, pp.530-539, 2015.
DOI : 10.1109/TASLP.2014.2383614

A. Laurent, N. Camelin, and C. Raymond, Boosting bonsai trees for efficient features combination: application to speaker role identification, 2014.
URL : https://hal.archives-ouvertes.fr/hal-01025171

F. Chollet, Keras, https://github.com/keras-team/keras, 2015.

T. Lavergne, O. Cappé, and F. Yvon, Practical very large scale CRFs, Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), pp.504-513, 2010.

N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, Dropout: A simple way to prevent neural networks from overfitting, J. Mach. Learn. Res, vol.15, issue.1, pp.1929-1958, 2014.

K. Yao, B. Peng, Y. Zhang, D. Yu, G. Zweig et al., Spoken language understanding using long short-term memory neural networks, IEEE Spoken Language Technology Workshop (SLT), pp.189-194, 2014.

B. Liu and I. Lane, Attention-Based Recurrent Neural Network Models for Joint Intent Detection and Slot Filling, Interspeech, 2016.
DOI : 10.21437/Interspeech.2016-1352

X. Zhang and H. Wang, A joint model of intent determination and slot filling for spoken language understanding, IJCAI, pp.2993-2999, 2016.