Deep Saliency Models: the Quest for the Loss Function

Abstract: Recent advances in deep learning have pushed the performance of visual saliency models far beyond previous levels. Numerous models in the literature present new ways to design neural networks, to arrange gaze pattern data, or to extract as many high- and low-level image features as possible in order to create the best saliency representation. However, one key part of a typical deep learning model is often neglected: the choice of the loss function. In this work, we explore some of the most popular loss functions used in deep saliency models. We demonstrate that, on a fixed network architecture, changing the loss function can significantly improve (or degrade) the results, which emphasizes the importance of this choice when designing a model. We also introduce loss functions that, to our knowledge, have never been used for saliency prediction. Finally, we show that a linear combination of several well-chosen loss functions leads to significant improvements in performance on different datasets as well as on a different network architecture, demonstrating the robustness of a combined metric.
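The abstract refers to training with a linear combination of several saliency loss functions. As a rough illustration only (the specific losses, weights, and function names below are assumptions, not the combination reported in the paper), the following PyTorch sketch shows how commonly used saliency losses such as KL divergence, Pearson correlation coefficient (CC), and normalized scanpath saliency (NSS) could be blended into a single weighted training objective:

```python
import torch

# Hypothetical sketch: weighted combination of standard saliency losses.
# The choice of losses and the weights w_kl, w_cc, w_nss are illustrative
# assumptions, not values taken from the paper.

EPS = 1e-8  # small constant to avoid division by zero / log(0)

def kl_divergence(pred, target):
    """KL divergence between predicted and ground-truth saliency maps,
    each normalized to sum to 1 per image."""
    pred = pred / (pred.sum(dim=(-2, -1), keepdim=True) + EPS)
    target = target / (target.sum(dim=(-2, -1), keepdim=True) + EPS)
    return (target * torch.log(target / (pred + EPS) + EPS)).sum(dim=(-2, -1)).mean()

def correlation_coefficient(pred, target):
    """Pearson correlation coefficient between saliency maps (higher is better)."""
    pred = pred - pred.mean(dim=(-2, -1), keepdim=True)
    target = target - target.mean(dim=(-2, -1), keepdim=True)
    num = (pred * target).sum(dim=(-2, -1))
    den = torch.sqrt((pred ** 2).sum(dim=(-2, -1)) * (target ** 2).sum(dim=(-2, -1)) + EPS)
    return (num / den).mean()

def nss(pred, fixations):
    """Normalized scanpath saliency: mean of the standardized prediction
    at fixated pixels, where `fixations` is a binary fixation map."""
    pred = (pred - pred.mean(dim=(-2, -1), keepdim=True)) / (pred.std(dim=(-2, -1), keepdim=True) + EPS)
    return (pred * fixations).sum(dim=(-2, -1)) / (fixations.sum(dim=(-2, -1)) + EPS)

def combined_loss(pred, target, fixations, w_kl=1.0, w_cc=0.5, w_nss=0.5):
    """Linear combination of the losses above. CC and NSS are similarity
    metrics to be maximized, so they enter with a negative sign."""
    return (w_kl * kl_divergence(pred, target)
            - w_cc * correlation_coefficient(pred, target)
            - w_nss * nss(pred, fixations).mean())
```

In such a setup, `combined_loss` would simply replace a single-metric loss in the training loop of a fixed saliency network, which is the kind of drop-in change whose effect the paper studies.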

https://hal.inria.fr/hal-02264898
Contributor : Alexandre Bruckert <>
Submitted on : Wednesday, August 7, 2019 - 5:47:37 PM
Last modification on : Monday, September 16, 2019 - 3:56:44 PM

Identifiers

  • HAL Id : hal-02264898, version 1

Citation

Alexandre Bruckert, Hamed Tavakoli, Zhi Liu, Marc Christie, Olivier Le Meur. Deep Saliency Models: the Quest for the Loss Function. 2019. ⟨hal-02264898⟩
