Hardness Results for Structured Learning and Inference with Multiple Correct Outputs

Abstract: In many domains of structured output prediction, multiple outputs can be considered correct. Several existing results show that polynomial-time computation is possible at both training and test time when there is a single correct output. In this work, we show that such guarantees do not hold when multiple outputs are correct. We establish this through three main results indicating that multiple correct outputs lead to NP-hard computation with existing convex surrogates for (i) learning with a supermodular loss function, (ii) learning with a submodular loss function, and (iii) test-time inference with a diversity penalty term. These theoretical results highlight the importance of identifying sufficient conditions for tractable learning and inference with multiple correct outputs in practice.
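
As background (not part of the abstract itself), the convex surrogates referred to above are of the kind used in structured output SVMs. A minimal sketch of the standard margin-rescaled hinge loss for a single correct output y*, in commonly used notation where \phi is a joint feature map and \Delta is the task loss, is:

\[
\ell(x, y^{*}; w) \;=\; \max_{y \in \mathcal{Y}} \Big[ \Delta(y^{*}, y) + \langle w, \phi(x, y) \rangle - \langle w, \phi(x, y^{*}) \rangle \Big]
\]

Evaluating this surrogate requires a loss-augmented inference step (the max over \mathcal{Y}). The abstract's results state that when a set of correct outputs replaces the single y*, the analogous computations with existing convex surrogates become NP-hard in the three cases listed.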
Document type: Conference papers


https://hal.inria.fr/hal-01165337
Contributor: Matthew Blaschko
Submitted on: Friday, June 19, 2015

File

Hardness.pdf (files produced by the author(s))

Identifiers

  • HAL Id: hal-01165337, version 1

Citation

Matthew Blaschko, Jiaqian Yu. Hardness Results for Structured Learning and Inference with Multiple Correct Outputs. Constructive Machine Learning Workshop at ICML, Jul 2015, Lille, France. ⟨hal-01165337⟩

Metrics

Record views: 215
File downloads: 467