Learnable pooling with Context Gating for video classification

Abstract: Common video representations often deploy an average or maximum pooling of pre-extracted frame features over time. Such an approach provides a simple means to encode feature distributions, but is likely to be suboptimal. As an alternative, we explore combinations of learnable pooling techniques such as Soft Bag-of-words, Fisher Vectors, NetVLAD, GRU and LSTM to aggregate video features over time. We also introduce a learnable non-linear network unit, named Context Gating, aiming at modeling interdependencies between features. We evaluate the method on the multi-modal YouTube-8M Large-Scale Video Understanding dataset using pre-extracted visual and audio features. We demonstrate improvements provided by the Context Gating as well as by the combination of learnable pooling methods. We finally show how this leads to the best performance, out of more than 600 teams, in the Kaggle YouTube-8M Large-Scale Video Understanding challenge.
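The Context Gating unit mentioned in the abstract re-weights a feature vector with a learned elementwise sigmoid gate. A minimal numpy sketch of such a gating unit, of the form y = sigmoid(Wx + b) * x, is given below; the dimensions and the fixed (untrained) parameters are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

def context_gating(x, W, b):
    """Elementwise gating sketch: y = sigmoid(W @ x + b) * x.
    W and b would normally be learned; here they are plain arrays."""
    gate = 1.0 / (1.0 + np.exp(-(W @ x + b)))  # sigmoid gate, each entry in (0, 1)
    return gate * x  # each feature dimension is scaled by its gate value

# Toy example with fixed, untrained parameters (illustrative only).
rng = np.random.default_rng(0)
d = 4
x = rng.normal(size=d)          # input feature vector
W = rng.normal(size=(d, d))     # gating weights (assumed square here)
b = np.zeros(d)                 # gating bias
y = context_gating(x, W, b)     # gated features, same shape as x
```

Because the gate lies in (0, 1), the unit can only attenuate features, letting the network learn which dimensions to suppress given the context of the others.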

Cited literature: 44 references

Contributor: Antoine Miech
Submitted on: Monday, June 26, 2017 - 16:10:55
Last modified on: Monday, January 28, 2019 - 09:03:55
Long-term archiving on: Wednesday, January 17, 2018 - 17:20:26


Files produced by the author(s)


  • HAL Id: hal-01547378, version 1
  • arXiv: 1706.06905



Antoine Miech, Ivan Laptev, Josef Sivic. Learnable pooling with Context Gating for video classification. 2017. 〈hal-01547378〉


