Learning Dialogue Dynamics with the Method of Moments

Abstract: In this paper, we introduce a novel framework to encode the dynamics of dialogues into a probabilistic graphical model. Traditionally, Hidden Markov Models (HMMs) would be used to address this problem, involving a first step of hand-crafting to build a dialogue model (e.g. defining potential hidden states) followed by applying expectation-maximisation (EM) algorithms to refine it. Recently, an alternative class of algorithms based on the Method of Moments (MoM) has proven successful in avoiding issues of EM-like algorithms such as convergence towards local optima, tractability issues, initialisation issues and the lack of theoretical guarantees. In this work, we show that dialogues may be modeled by SP-RFA, a class of graphical models efficiently learnable within the MoM framework and directly usable in planning algorithms (such as reinforcement learning). Experiments are conducted on the Ubuntu corpus, where dialogues are treated as sequences of dialogue acts, represented by their Latent Dirichlet Allocation (LDA) and Latent Semantic Analysis (LSA) features. We show that a MoM-based algorithm can learn a compact model of sequences of such acts.
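The paper itself is not reproduced on this page, but the MoM approach it builds on is the spectral/Hankel-matrix technique for learning sequence models without EM. As a hedged illustration (not the authors' code, and a generic weighted-automaton learner rather than their SP-RFA variant), the sketch below learns observable operators for a toy two-act "dialogue" process from empirical prefix probabilities; the chain, the basis, and all names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state Markov chain over two "dialogue acts" {0, 1}
# (a toy stand-in for clustered dialogue acts; purely illustrative).
pi = np.array([0.6, 0.4])                 # initial act distribution
T = np.array([[0.7, 0.3],
              [0.2, 0.8]])                # act transition matrix

def sample_seq(length):
    s = [rng.choice(2, p=pi)]
    for _ in range(length - 1):
        s.append(rng.choice(2, p=T[s[-1]]))
    return tuple(s)

data = [sample_seq(6) for _ in range(20000)]

def prefix_prob(w):
    """Empirical probability that a dialogue starts with act sequence w."""
    n = len(w)
    return sum(1 for s in data if s[:n] == w) / len(data)

# Empirical Hankel blocks over a small basis of prefixes/suffixes.
basis = [(), (0,), (1,)]
H  = np.array([[prefix_prob(p + s) for s in basis] for p in basis])
Hs = [np.array([[prefix_prob(p + (a,) + s) for s in basis] for p in basis])
      for a in (0, 1)]

# Method of Moments: rank-2 truncated SVD of H yields observable operators,
# with no EM iterations and no local optima.
_, _, Vt = np.linalg.svd(H)
V = Vt[:2].T                              # top-2 right singular vectors
Fp = np.linalg.pinv(H @ V)
A = [Fp @ Ha @ V for Ha in Hs]            # one operator per dialogue act
a0 = H[0] @ V                             # empty-prefix row of H, projected
ainf = Fp @ H[:, 0]                       # empty-suffix column of H

def f(w):
    """Prefix probability of act sequence w under the learned model."""
    v = a0.copy()
    for a in w:
        v = v @ A[a]
    return float(v @ ainf)

print(f((0, 1, 1)), prefix_prob((0, 1, 1)))
```

The learned operators reproduce the empirical prefix probabilities because the Hankel matrix of this process has low rank; the same moment-matching idea, with consistency guarantees, is what the MoM algorithms discussed in the abstract exploit.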

https://hal.inria.fr/hal-01406904
Contributor: Olivier Pietquin
Submitted on: Thursday, December 1, 2016 - 4:39:51 PM
Last modification on: Thursday, April 4, 2019 - 10:18:05 AM
Long-term archiving on: Tuesday, March 21, 2017 - 3:32:44 AM

File

SLT_2016_MBRLOP.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01406904, version 1

Citation

Merwan Barlier, Romain Laroche, Olivier Pietquin. Learning Dialogue Dynamics with the Method of Moments. Workshop on Spoken Language Technology (SLT 2016), Dec 2016, San Diego, United States. ⟨hal-01406904⟩
