Learnable pooling with Context Gating for video classification

Abstract: Common video representations often deploy an average or maximum pooling of pre-extracted frame features over time. Such an approach provides a simple means to encode feature distributions, but is likely to be suboptimal. As an alternative, we here explore combinations of learnable pooling techniques such as Soft Bag-of-words, Fisher Vectors, NetVLAD, GRU and LSTM to aggregate video features over time. We also introduce a learnable non-linear network unit, named Context Gating, aimed at modeling interdependencies between features. We evaluate the method on the multi-modal YouTube-8M Large-Scale Video Understanding dataset using pre-extracted visual and audio features. We demonstrate improvements provided by the Context Gating as well as by the combination of learnable pooling methods. We finally show how this leads to the best performance, out of more than 600 teams, in the Kaggle YouTube-8M Large-Scale Video Understanding challenge.
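The Context Gating unit mentioned in the abstract applies a learned, input-dependent sigmoid gate elementwise to a feature vector, letting the network downweight or emphasize individual dimensions based on the whole feature. A minimal NumPy sketch of this gating form, y = sigmoid(Wx + b) * x (the weight values and dimensions below are illustrative, not from the paper):

```python
import numpy as np

def context_gating(x, W, b):
    """Context Gating: recalibrate a feature vector with a learned gate.

    Computes y = sigmoid(W @ x + b) * x (elementwise product), so each
    output dimension is the input scaled by a gate in (0, 1) that
    depends on the entire input feature vector.
    """
    gate = 1.0 / (1.0 + np.exp(-(W @ x + b)))
    return gate * x

# Toy usage: gate a 4-dimensional feature vector with random parameters.
rng = np.random.default_rng(0)
d = 4
x = rng.standard_normal(d)
W = rng.standard_normal((d, d))  # learned in practice; random here
b = np.zeros(d)
y = context_gating(x, W, b)
```

Because each gate lies strictly between 0 and 1, the unit can only attenuate feature magnitudes, never amplify them; in the full model the parameters W and b are learned jointly with the pooling module.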
Contributor: Antoine Miech
Submitted on: Monday, June 26, 2017 - 4:10:55 PM
Last modification on: Wednesday, June 8, 2022 - 12:50:06 PM
Long-term archiving on: Wednesday, January 17, 2018 - 5:20:26 PM
  • HAL Id: hal-01547378, version 1
  • arXiv: 1706.06905



Antoine Miech, Ivan Laptev, Josef Sivic. Learnable pooling with Context Gating for video classification. 2017. ⟨hal-01547378⟩