J. Edlund, J. Gustafson, M. Heldner, and A. Hjalmarsson, Towards human-like spoken dialogue systems, Speech Communication, vol.50, issue.8-9, pp.630-645, 2008.
DOI : 10.1016/j.specom.2008.04.002

URL : https://hal.archives-ouvertes.fr/hal-00499214

R. Rosenfeld and T. Harris, A Universal Speech Interface for Appliances, Computer Science Department, 2004.

J. L. Austin, How to do things with words, 1975.
DOI : 10.1093/acprof:oso/9780198245537.001.0001

J. R. Searle, What Is an Intentional State?, Mind, vol.LXXXVIII, issue.1, pp.74-92, 1979.
DOI : 10.1093/mind/LXXXVIII.1.74

M. G. Core and J. Allen, Coding dialogs with the DAMSL annotation scheme, AAAI Fall Symposium on Communicative Action in Humans and Machines, 1997.

E. A. Schegloff and H. Sacks, Opening up Closings, Semiotica, vol.8, issue.4, pp.289-327, 1973.
DOI : 10.1515/semi.1973.8.4.289

G. Aist, Incremental dialogue system faster than and preferred to its nonincremental counterpart, Proceedings of the Annual Meeting of the Cognitive Science Society, 2007.

R. López-Cózar, Z. Callejas, D. Griol, and J. F. Quesada, Review of spoken dialogue systems, Loquens, vol.1, issue.2, p.12, 2014.
DOI : 10.3989/loquens.2014.012

M. Paetzel, R. Manuvinakurike, and D. DeVault, "So, which one is it?" The effect of alternative incremental architectures in a high-performance game-playing agent, Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, p.77, 2015.
DOI : 10.18653/v1/W15-4610

D. Schlangen and G. Skantze, A general, abstract model of incremental dialogue processing, Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2009), pp.710-718, 2009.

T. Baumann, M. Atterer, and D. Schlangen, Assessing and improving the performance of speech recognition for incremental systems, Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL '09), pp.380-388, 2009.
DOI : 10.3115/1620754.1620810

R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, MIT Press, 1998.

O. Lemon and O. Pietquin, Machine learning for spoken dialogue systems, European Conference on Speech Communication and Technologies, pp.2685-2688, 2007.
URL : https://hal.archives-ouvertes.fr/hal-00216035

J. Schatzmann, K. Weilhammer, M. Stuttle, and S. Young, A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies, The Knowledge Engineering Review, vol.21, issue.02, pp.97-126, 2006.
DOI : 10.1017/S0269888906000944

O. Pietquin and H. Hastie, A survey on metrics for the evaluation of user simulations, The Knowledge Engineering Review, vol.28, issue.01, pp.59-73, 2013.

URL : https://hal.archives-ouvertes.fr/hal-00771654

E. Levin and R. Pieraccini, A stochastic model of computer-human interaction for learning dialogue strategies, Proc. Eurospeech '97, pp.1883-1886, 1997.

N. Audibert, V. Aubergé, and A. Rilliard, Prosodic Correlates of Acted vs. Spontaneous Discrimination of Expressive Speech: A Pilot Study, Proc. 5th International Conference on Speech Prosody, 2010.

A. Vanpé and V. Aubergé, Early meaning before the phonemes concatenation? Prosodic cues for Feeling of Thinking, 2012.

Y. Sasa, V. Aubergé, and A. Rilliard, Social micro-expressions within Japanese-French contrast, WACAI 2012 (Workshop Affect, Compagnon Artificiel, Interaction), 2013.

F. Ameka, Interjections: The universal yet neglected part of speech, Journal of Pragmatics, vol.18, issue.2-3, pp.101-118, 1992.
DOI : 10.1016/0378-2166(92)90048-G

I. Poggi, The Language of Interjections, Multimodal Signals: Cognitive and Algorithmic Issues, pp.170-186, 2009.

M. Schröder, D. Heylen, and I. Poggi, Perception of non-verbal emotional listener feedback, Speech Prosody, 2006.

N. Audibert, V. Aubergé, and A. Rilliard, Acted vs. spontaneous expressive speech: perception with inter-individual variability, Proc. 2nd International Workshop on Corpora for Research on Emotion and Affect, 2008.
URL : https://hal.archives-ouvertes.fr/halshs-01419146

B. Schuller and A. Batliner, Computational paralinguistics: emotion, affect and personality in speech and language processing, 2013.
DOI : 10.1002/9781118706664

Y. Sagisaka, N. Campbell, and N. Higuchi, Computing prosody: computational models for processing spontaneous speech, 2012.
DOI : 10.1007/978-1-4612-2258-3

L.-P. Morency, Modeling Human Communication Dynamics [Social Sciences], IEEE Signal Processing Magazine, vol.27, issue.5, pp.112-116, 2010.
DOI : 10.1109/MSP.2010.937500

V. Aubergé, Y. Sasa, T. Robert, N. Bonnefond, and B. Meillon, Emoz: a wizard of Oz for emerging the socio-affective glue with a non humanoid companion robot, 2013.

R. Signorello, V. Aubergé, A. Vanpé, L. Granjon, and N. Audibert, À la recherche d'indices de culture et/ou de langue dans les micro-événements audio-visuels de l'interaction face à face [In search of cues to culture and/or language in the audio-visual micro-events of face-to-face interaction], pp.69-76, 2010.

G. D. Biasi, V. Aubergé, and L. Granjon, Perception of social affects from non lexical sounds, GSCP.
URL : https://hal.archives-ouvertes.fr/hal-00959138

V. Aubergé, Attitude vs. emotion: a question of voluntary vs. involuntary control, keynote talk, GSCP.
URL : https://hal.archives-ouvertes.fr/hal-00959137

Y. Sasa and V. Aubergé, Socio-affective interactions between a companion robot and elderly in a Smart Home context: prosody as the main vector of the "socio-affective glue", 2014.
URL : https://hal.archives-ouvertes.fr/hal-00953723

V. Aubergé, The EEE corpus: socio-affective "glue" cues in elderly-robot interactions in a Smart Home with the Emoz platform, 5th International Workshop on Emotion, Social Signals, Sentiment and Linked Open Data, 2014.

D. Povey et al., The Kaldi speech recognition toolkit, IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), 2011.

V. Aubergé, N. Ghneim, and R. Belrhali, Analyse du corpus ORTHOTEL : apport du traitement automatique à la classification des déviations orthographiques [Analysis of the ORTHOTEL corpus: the contribution of automatic processing to the classification of spelling deviations], Langue française, vol.124, issue.1, pp.90-103, 1999.
DOI : 10.3406/lfr.1999.6308