Visual Reasoning with Multi-hop Feature Modulation

Abstract: Recent breakthroughs in computer vision and natural language processing have spurred interest in challenging multi-modal tasks such as visual question answering and visual dialogue. For such tasks, one successful approach is to condition image-based convolutional network computation on language via Feature-wise Linear Modulation (FiLM) layers, i.e., per-channel scaling and shifting. We propose to generate the parameters of FiLM layers going up the hierarchy of a convolutional network in a multi-hop fashion rather than all at once, as in prior work. By alternating between attending to the language input and generating FiLM layer parameters, this approach scales better to settings with longer input sequences, such as dialogue. We demonstrate that multi-hop FiLM generation achieves state-of-the-art performance on the short-input-sequence task ReferIt (on par with single-hop FiLM generation), while significantly outperforming both prior state of the art and single-hop FiLM generation on the GuessWhat?! visual dialogue task.
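The FiLM operation the abstract refers to is per-channel scaling and shifting of convolutional feature maps, with the scale and shift parameters predicted from the language input. A minimal NumPy sketch (an illustration of the general operation, not the authors' implementation):

```python
import numpy as np

def film(features, gamma, beta):
    """Feature-wise Linear Modulation: scale and shift each channel.

    features: (batch, channels, height, width) conv feature maps.
    gamma, beta: (batch, channels) per-channel parameters, typically
    produced by a generator network conditioned on the language input.
    """
    # Broadcast the per-channel parameters over the spatial dimensions.
    return gamma[:, :, None, None] * features + beta[:, :, None, None]

# Identity modulation (gamma = 1, beta = 0) leaves the features unchanged.
x = np.random.randn(2, 4, 8, 8)
assert np.allclose(film(x, np.ones((2, 4)), np.zeros((2, 4))), x)
```

In the multi-hop scheme proposed here, a fresh (gamma, beta) pair is generated for each level of the convolutional hierarchy, with an attention step over the language input between hops, rather than emitting all parameters in a single pass.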

https://hal.archives-ouvertes.fr/hal-01927811
Contributor: Mathieu Seurin
Submitted on: Tuesday, November 20, 2018 - 10:30:57 AM
Last modification on: Wednesday, April 17, 2019 - 12:21:21 PM

File

1808.04446.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01927811, version 1
  • arXiv: 1808.04446

Citation

Florian Strub, Mathieu Seurin, Ethan Perez, Harm de Vries, Jérémie Mary, et al. Visual Reasoning with Multi-hop Feature Modulation. ECCV 2018 - 15th European Conference on Computer Vision, Sep 2018, Munich, Germany. pp. 808-831. ⟨hal-01927811⟩
