How Do We Evaluate the Quality of Computational Editing Systems?

Christophe Lino 1,*, Rémi Ronfard 1, Quentin Galvane 1, Michael Gleicher 1,2
* Corresponding author
1 IMAGINE - Intuitive Modeling and Animation for Interactive Graphics & Narrative Environments, Inria Grenoble - Rhône-Alpes, LJK - Laboratoire Jean Kuntzmann, INPG - Institut National Polytechnique de Grenoble
Abstract: A problem common to all researchers in virtual cinematography and editing is assessing the quality of their systems' output, and there is a pressing need for appropriate evaluation of proposed models and techniques. Although papers are often accompanied by example videos that show subjective results and occasionally provide qualitative comparisons with other methods or with human-created movies, they generally lack extensive evaluation. The goal of this paper is to survey the evaluation methodologies that have been used in the past, to review a range of other promising methodologies, and to raise a number of questions about how we could better evaluate and compare future systems.


Identifiers

  • HAL Id : hal-00994106, version 1
  • URL : https://hal.inria.fr/hal-00994106

Citation

Christophe Lino, Rémi Ronfard, Quentin Galvane, Michael Gleicher. How Do We Evaluate the Quality of Computational Editing Systems? AAAI Workshop on Intelligent Cinematography And Editing, Jul 2014, Québec, Canada. pp. 35-39. ⟨hal-00994106⟩
