Preprint

Deep Saliency Models : the Quest for the Loss Function

Abstract: Recent advances in deep learning have pushed the performance of visual saliency models further than ever before. Numerous models in the literature present new ways to design neural networks, to arrange gaze-pattern data, or to extract as many high- and low-level image features as possible in order to create the best saliency representation. However, one key part of a typical deep learning model is often neglected: the choice of the loss function. In this work, we explore some of the most popular loss functions used in deep saliency models. We demonstrate that, on a fixed network architecture, modifying the loss function can significantly improve (or degrade) the results, emphasizing the importance of this choice when designing a model. We also introduce loss functions that, to our knowledge, have never been used for saliency prediction. Finally, we show that a linear combination of several well-chosen loss functions yields significant performance improvements on different datasets as well as on a different network architecture, demonstrating the robustness of a combined metric.
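The combined metric described above is a weighted sum of individual saliency losses. As an illustration only (the particular losses, weights, and function names below are assumptions for the sketch, not the paper's reported combination), a linear mix of the KL divergence and the correlation coefficient, two measures commonly used in saliency evaluation, might look like:

```python
import numpy as np

def kld(pred, target, eps=1e-7):
    """KL divergence between two saliency maps treated as distributions."""
    p = pred / (pred.sum() + eps)
    q = target / (target.sum() + eps)
    return float(np.sum(q * np.log(eps + q / (p + eps))))

def cc(pred, target, eps=1e-7):
    """Pearson correlation coefficient between two saliency maps."""
    p = (pred - pred.mean()) / (pred.std() + eps)
    q = (target - target.mean()) / (target.std() + eps)
    return float((p * q).mean())

def combined_loss(pred, target, w_kld=1.0, w_cc=0.5):
    """Linear combination: minimize KLD, maximize CC (hence the minus sign).
    The weights here are illustrative, not taken from the paper."""
    return w_kld * kld(pred, target) - w_cc * cc(pred, target)

# A perfect prediction drives KLD toward 0 and CC toward 1,
# so the combined loss approaches -w_cc.
rng = np.random.default_rng(0)
m = rng.random((8, 8))
print(combined_loss(m, m))
```

In practice such losses are written against the deep learning framework's tensors so that gradients flow through them, but the arithmetic of the weighted sum is the same.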

Cited literature: 45 references
Contributor: Alexandre Bruckert
Submitted on: Wednesday, August 7, 2019 - 5:47:37 PM
Last modification on: Friday, August 5, 2022 - 2:54:52 PM
Long-term archiving on: Thursday, January 9, 2020 - 1:52:01 AM


Files produced by the author(s)


  • HAL Id: hal-02264898, version 1


Alexandre Bruckert, Hamed R Tavakoli, Zhi Liu, Marc Christie, Olivier Le Meur. Deep Saliency Models : the Quest for the Loss Function. 2019. ⟨hal-02264898⟩


