An integrative platform to capture the orchestration of gesture and speech

Abstract: A number of studies have highlighted the coordination of gesture and intonation (Bolinger, 1983; Darwin, 1872; Kendon, 1980), but existing technological setups have been unable to couple acoustic and gestural data in sufficient detail. In this paper, we present the MODALISA platform, which enables language specialists to integrate gesture, intonation, speech production and content. The methods of data acquisition, annotation and analysis are detailed. Preliminary results from our pilot study show strong correlations between gestures and intonation when they are performed simultaneously by the speaker; the correlations are particularly strong for proximal segments. Our aim is to expand on these results and to analyse typical and atypical populations across the lifespan.
Contributor: Slim Ouni
Submitted on: Wednesday, September 4, 2019 - 12:29:53 PM
Last modification on: Saturday, September 7, 2019 - 1:15:36 AM
HAL Id: hal-02278345, version 1


Christelle Dodane, Dominique Boutet, Ivana Didirkova, Hirsch Fabrice, Slim Ouni, et al.. An integrative platform to capture the orchestration of gesture and speech. Gesture and Speech in Interaction - GeSpIn 2019, Sep 2019, Paderborn, Germany. ⟨hal-02278345⟩