Learning Submodular Losses with the Lovász Hinge

Abstract: Learning with non-modular losses is an important problem when sets of predictions are made simultaneously. The main tools for constructing convex surrogate loss functions for set prediction are margin rescaling and slack rescaling. In this work, we show that these strategies lead to tight convex surrogates if and only if the underlying loss function is increasing in the number of incorrect predictions. However, gradient or cutting-plane computation for these functions is NP-hard for non-supermodular loss functions. We propose instead a novel convex surrogate loss function for submodular losses, the Lovász hinge, which leads to O(p log p) complexity with O(p) oracle accesses to the loss function to compute a gradient or cutting-plane. As a result, we have developed the first tractable convex surrogates in the literature for submodular losses. We demonstrate the utility of this novel convex surrogate through a real-world image labeling task.
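The complexity claim in the abstract (a gradient or cutting-plane in O(p log p) time with O(p) oracle accesses to the loss) follows from the fact that evaluating the Lovász extension of a set loss only requires sorting the margin violations and then querying the loss on a growing prefix of indices. Below is a minimal sketch of that computation, assuming the set loss is supplied as a callable on index sets; the names lovasz_extension and lovasz_hinge and the square-root example loss are illustrative placeholders, not the paper's reference implementation.

import numpy as np

def lovasz_extension(margins, set_loss):
    """Evaluate the Lovasz extension of `set_loss` at `margins`.

    margins  : per-prediction margin violations, e.g. 1 - y_i * f(x_i)
    set_loss : callable mapping a frozenset of indices to a real value,
               with set_loss(frozenset()) == 0; assumed submodular.

    Cost: one O(p log p) sort plus O(p) oracle calls to `set_loss`,
    matching the complexity stated in the abstract.
    """
    margins = np.asarray(margins, dtype=float)
    order = np.argsort(-margins)  # indices sorted by decreasing violation
    value, prev_loss = 0.0, 0.0
    prefix = []
    for i in order:
        prefix.append(int(i))
        cur_loss = set_loss(frozenset(prefix))
        # weight each margin by the discrete derivative of the set loss
        value += margins[i] * (cur_loss - prev_loss)
        prev_loss = cur_loss
    return value

def lovasz_hinge(margins, set_loss):
    """Clipped surrogate: positive part of the extension (one natural variant)."""
    return max(0.0, lovasz_extension(margins, set_loss))

# Hypothetical submodular loss: a concave function of the number of errors.
example_loss = lambda S: len(S) ** 0.5
print(lovasz_hinge([0.8, -0.2, 1.5], example_loss))  # ~1.77

Because the Lovász extension of a submodular set function is convex, the vector of discrete derivatives computed in the same sorted pass is a subgradient, which is what makes gradient and cutting-plane training tractable; note that the paper's exact definition may handle negative margins differently from the simple clipping used here.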
Document type:
Conference paper
International Conference on Machine Learning, Jul 2015, Lille, France. 〈http://icml.cc/2015/〉


https://hal.inria.fr/hal-01151823
Contributor: Matthew Blaschko
Submitted on: Wednesday, May 13, 2015 - 16:16:47
Last modified on: Friday, January 12, 2018 - 11:15:15
Long-term archiving on: Wednesday, April 19, 2017 - 23:40:18

File

YU_ICML2015.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01151823, version 1

Citation

Jiaqian Yu, Matthew Blaschko. Learning Submodular Losses with the Lovász Hinge. International Conference on Machine Learning, Jul 2015, Lille, France. 〈http://icml.cc/2015/〉. 〈hal-01151823〉


Metrics

Record views: 218
File downloads: 645