
How Do We Evaluate the Quality of Computational Editing Systems?

Christophe Lino 1,*, Rémi Ronfard 1, Quentin Galvane 1, Michael Gleicher 1,2
* Corresponding author
1 IMAGINE - Intuitive Modeling and Animation for Interactive Graphics & Narrative Environments
Inria Grenoble - Rhône-Alpes, LJK - Laboratoire Jean Kuntzmann, Grenoble INP - Institut polytechnique de Grenoble - Grenoble Institute of Technology
Abstract : A problem common to all researchers in the field of virtual cinematography and editing is assessing the quality of their systems' output. There is a pressing need for appropriate evaluation of proposed models and techniques. Although papers are often accompanied by example videos showing subjective results, and occasionally provide qualitative comparisons with other methods or with human-created movies, they generally lack extensive evaluation. This paper surveys evaluation methodologies that have been used in the past, reviews a range of other promising methodologies, and raises a number of questions about how we could better evaluate and compare future systems.

Cited literature: 18 references
Contributor: Christophe Lino
Submitted on: Tuesday, May 20, 2014 - 11:57:29 PM
Last modification on: Saturday, November 19, 2022 - 3:58:53 AM
Long-term archiving on: Wednesday, August 20, 2014 - 12:26:35 PM


Files produced by the author(s)


  • HAL Id: hal-00994106, version 1


Christophe Lino, Rémi Ronfard, Quentin Galvane, Michael Gleicher. How Do We Evaluate the Quality of Computational Editing Systems?. AAAI Workshop on Intelligent Cinematography And Editing, Jul 2014, Québec, Canada. pp.35-39. ⟨hal-00994106⟩


