A Probabilistic Model for Joint Learning of Word Embeddings from Texts and Images
Conference Paper, Year: 2018

Melissa Ailem, Bowen Zhang, Aurélien Bellet, Pascal Denis, Fei Sha

Abstract

Several recent studies have shown the benefits of combining language and perception to infer word embeddings. These multimodal approaches either simply combine pre-trained textual and visual representations (e.g. features extracted from convolutional neural networks), or use the latter to bias the learning of textual word embeddings. In this work, we propose a novel probabilistic model to formalize how linguistic and perceptual inputs can work in concert to explain the observed word-context pairs in a text corpus. Our approach learns textual and visual representations jointly: latent visual factors couple together a skip-gram model for co-occurrence in linguistic data and a generative latent variable model for visual data. Extensive experimental studies validate the proposed model. Concretely, on the tasks of assessing pairwise word similarity and image/caption retrieval, our approach attains equally competitive or stronger results when compared to other state-of-the-art multimodal models.
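To make the coupling idea in the abstract concrete, here is a minimal NumPy sketch, not the authors' exact model: a toy joint objective in which a skip-gram term explains word-context pairs and a visual term asks a shared matrix of latent visual factors to reconstruct image features from the same word embeddings. All names, dimensions, and data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (illustrative, not the paper's data): a tiny
# vocabulary, a few observed (word, context) pairs, and an image feature
# vector for a subset of words.
V_SIZE, DIM, IMG_DIM = 6, 4, 5
pairs = [(0, 1), (1, 2), (2, 3), (3, 0)]          # word-context co-occurrences
img_feats = {0: rng.normal(size=IMG_DIM), 2: rng.normal(size=IMG_DIM)}

W = 0.1 * rng.normal(size=(V_SIZE, DIM))          # word embeddings
C = 0.1 * rng.normal(size=(V_SIZE, DIM))          # context embeddings
M = 0.1 * rng.normal(size=(IMG_DIM, DIM))         # latent visual factors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def joint_loss():
    # Skip-gram term: negative log-likelihood of observed pairs.
    text = -sum(np.log(sigmoid(W[w] @ C[c])) for w, c in pairs)
    # Visual term: squared error reconstructing image features from the
    # word embedding through the shared visual-factor matrix M.
    vis = sum(np.sum((f - M @ W[w]) ** 2) for w, f in img_feats.items())
    return text + 0.5 * vis

def step(lr=0.1):
    # One SGD step on both terms; M couples the two modalities because
    # W[w] receives gradients from text pairs and image features alike.
    for w, c in pairs:
        g = sigmoid(W[w] @ C[c]) - 1.0
        W[w], C[c] = W[w] - lr * g * C[c], C[c] - lr * g * W[w]
    for w, f in img_feats.items():
        r = M @ W[w] - f
        M[...] = M - lr * np.outer(r, W[w])
        W[w] -= lr * (M.T @ r)

before = joint_loss()
for _ in range(50):
    step()
after = joint_loss()
```

After a few steps `after < before`: descending the joint objective moves the embeddings of words that co-occur closer together while also pulling grounded words toward their visual features, which is the intuition behind coupling the two models through shared latent factors.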
Main file: emnlp18.pdf (372.59 KB). Origin: files produced by the author(s).

Identifiers

  • HAL Id: hal-01922985, version 1 (14-11-2018)

Cite

Melissa Ailem, Bowen Zhang, Aurélien Bellet, Pascal Denis, Fei Sha. A Probabilistic Model for Joint Learning of Word Embeddings from Texts and Images. Conference on Empirical Methods in Natural Language Processing (EMNLP 2018), 2018, Brussels, Belgium. ⟨hal-01922985⟩
103 views, 175 downloads
