
Multi-modal Transformer for Video Retrieval

Valentin Gabeur 1,2, Chen Sun 2, Karteek Alahari 1, Cordelia Schmid 2
1 Thoth (Learning models from massive data), Inria Grenoble - Rhône-Alpes, LJK - Laboratoire Jean Kuntzmann
Abstract: The task of retrieving video content relevant to natural language queries plays a critical role in effectively handling internet-scale datasets. Most of the existing methods for this caption-to-video retrieval problem do not fully exploit cross-modal cues present in video. Furthermore, they aggregate per-frame visual features with limited or no temporal information. In this paper, we present a multi-modal transformer to jointly encode the different modalities in video, which allows each of them to attend to the others. The transformer architecture is also leveraged to encode and model the temporal information. On the natural language side, we investigate the best practices to jointly optimize the language embedding together with the multi-modal transformer. This novel framework allows us to establish state-of-the-art results for video retrieval on three datasets. More details are available at http://thoth.inrialpes.fr/research/MMT.
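The core idea in the abstract — concatenating token sequences from several video modalities and letting self-attention mix them, so each modality can attend to the others — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (the paper's model uses learned expert embeddings, temporal encodings, and a full transformer stack); the function name, random projection weights, and toy feature shapes below are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multimodal_self_attention(modality_feats, d_model=64, seed=0):
    # Concatenate the token sequences of all modalities into one sequence,
    # so each token can attend to tokens from every modality (cross-modal
    # attention). Projection weights are random here, purely for illustration.
    rng = np.random.default_rng(seed)
    tokens = np.concatenate(list(modality_feats.values()), axis=0)  # (N, d_in)
    d_in = tokens.shape[1]
    Wq, Wk, Wv = (rng.standard_normal((d_in, d_model)) / np.sqrt(d_in)
                  for _ in range(3))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_model))  # (N, N) weights across modalities
    return attn @ V                              # (N, d_model) contextualized tokens

# Toy per-modality features (e.g. appearance, motion, audio experts),
# each a short temporal sequence sharing one input dimension.
feats = {
    "appearance": np.ones((8, 32)),
    "motion":     np.ones((6, 32)),
    "audio":      np.ones((4, 32)),
}
out = multimodal_self_attention(feats)
print(out.shape)  # (18, 64)
```

Because the attention matrix spans the concatenated sequence, an audio token's output is a weighted mix of appearance, motion, and audio values — the cross-modal exchange the abstract refers to; in the real model, modality and temporal embeddings added to each token tell the transformer which expert and which timestamp a token comes from.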
Document type: Conference papers

Cited literature: 35 references

https://hal.inria.fr/hal-02903209
Contributor: Thoth Team
Submitted on: Monday, July 20, 2020 - 5:55:19 PM
Last modification on: Monday, September 20, 2021 - 3:02:06 PM
Long-term archiving on: Tuesday, December 1, 2020 - 2:01:15 AM

File: main.pdf (produced by the author(s))

Citation

Valentin Gabeur, Chen Sun, Karteek Alahari, Cordelia Schmid. Multi-modal Transformer for Video Retrieval. ECCV 2020 - European Conference on Computer Vision, Aug 2020, Glasgow, United Kingdom. pp.214-229, ⟨10.1007/978-3-030-58548-8_13⟩. ⟨hal-02903209⟩

Metrics

Record views: 385
File downloads: 862