GuessWhat?! Visual object discovery through multi-modal dialogue

Abstract: We introduce GuessWhat?!, a two-player guessing game as a testbed for research on the interplay of computer vision and dialogue systems. The goal of the game is to locate an unknown object in a rich image scene by asking a sequence of questions. Higher-level image understanding, such as spatial reasoning and language grounding, is required to solve the proposed task. Our key contribution is the collection of a large-scale dataset consisting of 150K human-played games with a total of 800K visual question-answer pairs on 66K images. We explain our design decisions in collecting the dataset and introduce the oracle and questioner tasks associated with the two players of the game. We prototyped deep learning models to establish initial baselines for the introduced tasks.

Cited literature: 45 references

https://hal.inria.fr/hal-01549641
Contributor: Florian Strub
Submitted on: Wednesday, June 28, 2017 - 11:35:00 PM
Last modification on: Friday, March 22, 2019 - 1:34:19 AM
Long-term archiving on: Thursday, January 18, 2018 - 2:54:39 AM

File

1611.08481.pdf
Files produced by the author(s)

Licence


Distributed under a Creative Commons Attribution 4.0 International License

Identifiers

  • HAL Id: hal-01549641, version 1
  • arXiv: 1611.08481

Citation

Harm de Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, et al. GuessWhat?! Visual object discovery through multi-modal dialogue. Conference on Computer Vision and Pattern Recognition, Jul 2017, Honolulu, United States. ⟨hal-01549641⟩
