Predicting Deeper into the Future of Semantic Segmentation

Abstract: The ability to predict and therefore to anticipate the future is an important attribute of intelligence. It is also of utmost importance in real-time systems, e.g. in robotics or autonomous driving, which depend on visual scene understanding for decision making. While prediction of the raw RGB pixel values in future video frames has been studied in previous work, here we focus on predicting semantic segmentations of future frames. More precisely, given a sequence of semantically segmented video frames, our goal is to predict segmentation maps of not yet observed video frames that lie up to a second or further in the future. We develop an autoregressive convolutional neural network that learns to iteratively generate multiple frames. Our results on the Cityscapes dataset show that directly predicting future segmentations is substantially better than predicting and then segmenting future RGB frames. Our models predict trajectories of cars and pedestrians much more accurately (25%) than baselines that copy the most recent semantic segmentation or warp it using optical flow. Prediction results up to half a second into the future are visually convincing, with the mean IoU of the predicted segmentations reaching two thirds of that of the real future segmentations.
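
The quantitative results above are reported as mean IoU between predicted and ground-truth segmentation maps. For reference, the snippet below is a minimal sketch (not the authors' evaluation code) of how mean IoU is commonly computed for semantic segmentation; the function name, the ignore_index convention, and the use of NumPy are assumptions made here for illustration.

```python
# Minimal sketch of mean intersection-over-union (mean IoU) between two
# integer label maps of the same shape. Classes absent from both maps are
# skipped so they do not distort the average.
import numpy as np

def mean_iou(pred, target, num_classes, ignore_index=255):
    """Average IoU over the classes that appear in pred or target."""
    valid = target != ignore_index          # mask out unlabeled pixels
    ious = []
    for c in range(num_classes):
        pred_c = (pred == c) & valid
        target_c = (target == c) & valid
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue                        # class absent in both maps
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious)) if ious else float("nan")

# Example usage with random label maps (Cityscapes uses 19 evaluation classes).
pred = np.random.randint(0, 19, size=(128, 256))
target = np.random.randint(0, 19, size=(128, 256))
print(mean_iou(pred, target, num_classes=19))
```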
Document type: Preprint / working paper, 2017


https://hal.inria.fr/hal-01494296
Contributor: Thoth Team
Submitted on: Thursday, March 23, 2017 - 10:37:42
Last modified on: Wednesday, March 29, 2017 - 16:30:36

File

NevLuc2017iccv_arxiv (1).pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01494296, version 1

Citation

Natalia Neverova, Pauline Luc, Camille Couprie, Jakob Verbeek, Yann LeCun. Predicting Deeper into the Future of Semantic Segmentation. 2017. <hal-01494296>
