Task Completion Transfer Learning for Reward Inference

Abstract: Reinforcement learning-based spoken dialogue systems aim to compute an optimal dialogue management strategy from interactions with users. They compare candidate management strategies on the basis of a numerical reward function. Reward inference is the problem of learning such a reward function from dialogues scored by users. A major issue for reward inference algorithms is that some parameters strongly influence user evaluations yet cannot be computed online; task completion is one such parameter. This paper introduces Task Completion Transfer Learning (TCTL): a method that exploits exact knowledge of task completion on a corpus of dialogues scored by users in order to optimise online learning. Compared to previously proposed reward inference techniques, TCTL returns a reward function that can cope with the online non-observability of task completion. A reward function is learnt with TCTL on dialogues with a restaurant-seeking system. It is shown that the reward function returned by TCTL is a better estimator of dialogue performance than the one returned by reward inference alone.
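The pipeline the abstract describes — inferring a reward from user-scored dialogues, then using exact offline knowledge of task completion to cope with its online non-observability — can be sketched very loosely as follows. This is an illustrative assumption, not the TCTL algorithm from the paper: the linear reward model, the synthetic features (`length`, `task_done`), and the substitution of a predicted completion signal are all hypothetical choices made for the sake of the example.

```python
import numpy as np

# Synthetic corpus: each dialogue has an online-observable feature
# (length), an exact task-completion flag known only offline, and a
# user score. All numbers are invented for illustration.
rng = np.random.default_rng(0)
n = 200
length = rng.integers(3, 20, n).astype(float)
task_done = rng.integers(0, 2, n).astype(float)
score = 5.0 * task_done - 0.2 * length + rng.normal(0.0, 0.1, n)

# Step 1 (reward inference): regress user scores on the features,
# including the exact offline task-completion label.
X = np.column_stack([np.ones(n), length, task_done])
w, *_ = np.linalg.lstsq(X, score, rcond=None)

# Step 2 (transfer idea, hypothetical): task completion is not
# observable online, so fit a simple predictor of it from the
# observable feature and substitute the prediction into the reward.
Xo = np.column_stack([np.ones(n), length])
v, *_ = np.linalg.lstsq(Xo, task_done, rcond=None)

def reward(length_obs):
    """Online reward using predicted (not observed) task completion."""
    tc_hat = v[0] + v[1] * length_obs
    return w[0] + w[1] * length_obs + w[2] * tc_hat
```

In this toy setting the inferred weight on task completion recovers the value used to generate the scores, and the online reward function no longer needs the unobservable flag — the broad shape of the problem the paper addresses, though its actual method differs.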
Document type: Conference papers

https://hal.inria.fr/hal-01107500
Contributor: Olivier Pietquin
Submitted on: Tuesday, January 20, 2015 - 6:32:11 PM
Last modification on: Friday, April 2, 2021 - 3:36:16 AM

Identifiers

  • HAL Id: hal-01107500, version 1

Citation

Layla El Asri, Romain Laroche, Olivier Pietquin. Task Completion Transfer Learning for Reward Inference. International Workshop on Machine Learning for Interactive Systems (MLIS 2014), Jul 2014, Québec, Canada. ⟨hal-01107500⟩
