GuessWhat?! Visual object discovery through multi-modal dialogue - Archive ouverte HAL
Conference Papers · Year: 2017

GuessWhat?! Visual object discovery through multi-modal dialogue


Abstract

We introduce GuessWhat?!, a two-player guessing game as a testbed for research on the interplay of computer vision and dialogue systems. The goal of the game is to locate an unknown object in a rich image scene by asking a sequence of questions. Higher-level image understanding, such as spatial reasoning and language grounding, is required to solve the proposed task. Our key contribution is the collection of a large-scale dataset consisting of 150K human-played games with a total of 800K visual question-answer pairs on 66K images. We explain our design decisions in collecting the dataset and introduce the oracle and questioner tasks that are associated with the two players of the game. We prototyped deep learning models to establish initial baselines for the introduced tasks.
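To make the game setup concrete, below is a minimal sketch of how one GuessWhat?! game record could be represented: an image, a hidden target object, and a sequence of yes/no question-answer rounds between the questioner and the oracle. The class and field names here are illustrative assumptions, not the dataset's actual JSON schema.

```python
# Hedged sketch of a single GuessWhat?! game record.
# All names (QAPair, Game, field names) are hypothetical, chosen for illustration.
from dataclasses import dataclass, field
from typing import List


@dataclass
class QAPair:
    question: str  # yes/no question posed by the questioner
    answer: str    # oracle's reply, e.g. "yes", "no", or "n/a"


@dataclass
class Game:
    image_id: int                # identifier of the image the game is played on
    target_object: str           # category of the hidden object (known only to the oracle)
    qas: List[QAPair] = field(default_factory=list)
    success: bool = False        # whether the guesser located the object

    def rounds(self) -> int:
        """Number of question-answer rounds played so far."""
        return len(self.qas)


# Build a toy game mirroring the paper's question-answer loop.
game = Game(image_id=42, target_object="person")
game.qas.append(QAPair("Is it a person?", "yes"))
game.qas.append(QAPair("Is it the one on the left?", "no"))
game.success = True

print(game.rounds())
```

A dataset of such records would then pair each game with the image's candidate objects, so the oracle task reduces to answering a question given the target, and the questioner task to generating the next question from the dialogue history.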

Dates and versions

hal-01549641 , version 1 (28-06-2017)

Licence

Attribution - CC BY 4.0

Cite

Harm de Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, et al.. GuessWhat?! Visual object discovery through multi-modal dialogue. Conference on Computer Vision and Pattern Recognition, Jul 2017, Honolulu, United States. ⟨hal-01549641⟩