One-Step Time-Dependent Future Video Frame Prediction with a Convolutional Encoder-Decoder Neural Network

Abstract: There is an inherent need for machines to have a notion of how entities within their environment behave and to anticipate changes in the near future. In this work, we focus on anticipating future appearance, given the current frame of a video. Typical methods either predict the next frame of a video or predict future optical flow or trajectories based on a single video frame. This work presents an experiment on stretching the ability of CNNs to anticipate appearance at an arbitrarily given near-future time, by conditioning our predicted video frames on a continuous time variable. We show that CNNs can learn an intrinsic representation of typical appearance changes over time and successfully generate realistic predictions in one step, at a deliberate time difference in the near future. The method is evaluated on the KTH human actions dataset and compared to a baseline consisting of an analogous CNN architecture that is not time-aware.
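
The abstract describes conditioning a convolutional encoder-decoder on a continuous time variable so that a single input frame can be mapped to a predicted frame at a requested future time in one step. The sketch below illustrates one way such conditioning could look. It is a minimal illustration only, assuming PyTorch, 64x64 grayscale frames, and injection of the time variable as an extra constant channel at the bottleneck; these choices are not taken from the paper, and the authors' exact architecture differs.

# Minimal sketch (assumptions: PyTorch, 64x64 grayscale input, time injected as an
# extra bottleneck channel; not the paper's exact architecture) of a time-conditioned
# convolutional encoder-decoder mapping (current frame, time offset t) -> future frame.
import torch
import torch.nn as nn


class TimeConditionedEncoderDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample the input frame to a compact feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to frame resolution. Its input has one extra
        # channel carrying the broadcast time variable.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64 + 1, 32, kernel_size=4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),       # 32 -> 64
            nn.Sigmoid(),  # predicted frame with pixel values in [0, 1]
        )

    def forward(self, frame, t):
        # frame: (B, 1, 64, 64) current frame; t: (B,) continuous future time offset.
        feats = self.encoder(frame)
        # Broadcast the scalar time offset to a constant feature map and concatenate
        # it with the encoded features (one possible way to condition on time).
        t_map = t.view(-1, 1, 1, 1).expand(-1, 1, feats.size(2), feats.size(3))
        return self.decoder(torch.cat([feats, t_map], dim=1))


if __name__ == "__main__":
    model = TimeConditionedEncoderDecoder()
    frame = torch.rand(2, 1, 64, 64)   # batch of current frames
    t = torch.tensor([0.25, 0.75])     # requested future time offsets
    pred = model(frame, t)             # one-step prediction at time t
    print(pred.shape)                  # torch.Size([2, 1, 64, 64])

Training such a model would pair each input frame with a ground-truth frame sampled at a varying offset t, so the network must use the time channel rather than always predicting the immediate next frame.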
Document type:
Conference paper
Netherlands Conference on Computer Vision (NCCV), Dec 2016, Lunteren, Netherlands. 2016

Cited literature [27 references]

https://hal.inria.fr/hal-01467064
Contributor: Vedran Vukotić
Submitted on: Tuesday, February 14, 2017 - 09:33:14
Last modified on: Tuesday, September 12, 2017 - 09:46:47
Document(s) archived on: Monday, May 15, 2017 - 12:33:26

File

Vukotic_NCCV_2016.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-01467064, version 1

Citation

Vedran Vukotić, Silvia-Laura Pintea, Christian Raymond, Guillaume Gravier, Jan Van Gemert. One-Step Time-Dependent Future Video Frame Prediction with a Convolutional Encoder-Decoder Neural Network. Netherlands Conference on Computer Vision (NCCV), Dec 2016, Lunteren, Netherlands. 2016. 〈hal-01467064〉

Metrics

Record views: 558
Document downloads: 101