
DeepComics: saliency estimation for comics

Abstract: A key requirement for training deep-learning saliency models is a large eye-tracking dataset. Although eye-tracking technology has become much more accessible, collecting eye-tracking data at scale for very specific content types, such as comic images, remains cumbersome; comics differ from natural images such as photographs because textual and pictorial content are integrated. In this paper, we show that a deep network trained on visual categories in which gaze deployment is similar to that of comics outperforms both existing models and models trained on visual categories whose gaze deployment differs dramatically from comics. Further, we find that a computationally generated dataset for a visual category close to comics is preferable to real eye-tracking data from a visual category with a different gaze deployment. These findings hold implications for transferring deep networks to different domains.
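The abstract's claim that one model "outperforms" another presupposes a way to score a predicted saliency map against eye-tracking ground truth. A standard metric in the saliency literature is the linear correlation coefficient (CC) between the predicted and ground-truth saliency maps. The sketch below is an illustrative implementation of CC, not the paper's actual evaluation code; the function name and the small epsilon guard are choices made here.

```python
import numpy as np

def pearson_cc(pred: np.ndarray, gt: np.ndarray) -> float:
    """Linear correlation coefficient (CC) between two saliency maps.

    Both maps are standardized to zero mean and unit variance, then the
    mean of their elementwise product gives the correlation. A small
    epsilon avoids division by zero on constant maps.
    """
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    g = (gt - gt.mean()) / (gt.std() + 1e-8)
    return float((p * g).mean())
```

CC is 1 for identical maps, -1 for anti-correlated ones, and near 0 for unrelated predictions, which makes it a convenient single number for ranking models trained on different source categories.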
Document type: Conference papers
Cited literature: 35 references
Contributor: Olivier Le Meur
Submitted on: Tuesday, December 11, 2018 - 4:55:04 PM
Last modification on: Friday, August 5, 2022 - 2:54:52 PM
Long-term archiving on: Tuesday, March 12, 2019 - 2:47:12 PM



Kévin Bannier, Eakta Jain, Olivier Le Meur. DeepComics: saliency estimation for comics. ETRA 2018 - ACM Symposium on Eye Tracking Research & Applications, Jun 2018, Warsaw, Poland. pp.1-5, ⟨10.1145/3204493.3204560⟩. ⟨hal-01951413⟩


