Learning Visual Reasoning Without Strong Priors

Abstract: Achieving artificial visual reasoning, the ability to answer image-related questions that require a multi-step, high-level process, is an important step towards artificial general intelligence. This multi-modal task requires learning a question-dependent, structured reasoning process over images from language. Standard deep learning approaches tend to exploit biases in the data rather than learn this underlying structure, while leading methods learn to visually reason successfully but are hand-crafted for reasoning. We show that a general-purpose Conditional Batch Normalization approach achieves state-of-the-art results on the CLEVR visual reasoning benchmark with a 2.4% error rate, outperforming the next-best end-to-end method (4.5%) and even methods that use extra supervision (3.1%). We probe our model to shed light on how it reasons, showing that it has learned a question-dependent, multi-step process. Previous work has operated under the assumption that visual reasoning calls for a specialized architecture, but we show that a general architecture with proper conditioning can learn to visually reason effectively.

Index Terms: Deep Learning, Language and Vision

Note: A full paper extending this study is available at http://arxiv.org/abs/1709.07871, with additional references, experiments, and analysis.
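The conditioning mechanism the abstract names can be illustrated with a minimal sketch: Conditional Batch Normalization normalizes each convolutional feature map per channel, then applies a per-channel scale and shift predicted from a question embedding. The shapes, the linear predictor, and all variable names below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def conditional_batch_norm(feature_maps, gamma, beta, eps=1e-5):
    """Per-channel batch normalization followed by a question-conditioned
    affine transform. feature_maps: [batch, channels, height, width];
    gamma, beta: [channels], predicted from the question."""
    mean = feature_maps.mean(axis=(0, 2, 3), keepdims=True)
    var = feature_maps.var(axis=(0, 2, 3), keepdims=True)
    normalized = (feature_maps - mean) / np.sqrt(var + eps)
    # Broadcast the question-dependent scale/shift over batch and space.
    return gamma[None, :, None, None] * normalized + beta[None, :, None, None]

# Hypothetical setup: a question embedding (e.g. an RNN's final state)
# mapped by a linear layer to gamma and beta for 8 feature channels.
rng = np.random.default_rng(0)
question_emb = rng.normal(size=16)
W = rng.normal(size=(2 * 8, 16)) * 0.1     # predicts [gamma; beta]
gb = W @ question_emb
gamma, beta = 1.0 + gb[:8], gb[8:]         # center gamma around 1

x = rng.normal(size=(4, 8, 5, 5))          # conv features for 4 images
y = conditional_batch_norm(x, gamma, beta)
```

After this step, each channel's statistics are set by the question rather than being fixed, which is how a generic convolutional stack can carry out question-dependent computation.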
Document type:
Conference paper
ICML 2017's Machine Learning in Speech and Language Processing Workshop, Aug 2017, Sydney, Australia

Cited literature [33 references]

Contributor: Florian Strub
Submitted on: Tuesday, November 28, 2017 - 03:13:35
Last modified on: Tuesday, July 3, 2018 - 11:34:57

Full-text link


  • HAL Id: hal-01648684, version 1
  • arXiv: 1709.07871


Ethan Perez, Harm de Vries, Florian Strub, Vincent Dumoulin, Aaron Courville. Learning Visual Reasoning Without Strong Priors. ICML 2017's Machine Learning in Speech and Language Processing Workshop, Aug 2017, Sydney, Australia. 〈hal-01648684〉


