Preprints, Working Papers, ...

Variational Structured Attention Networks for Deep Visual Representation Learning

Abstract : Convolutional neural networks have enabled major progress in pixel-level prediction tasks such as semantic segmentation, depth estimation, and surface normal prediction, benefiting from their powerful capabilities in visual representation learning. Typically, state-of-the-art models integrate attention mechanisms for improved deep feature representations. Recently, several works have demonstrated the significance of learning and combining both spatial- and channel-wise attention for deep feature refinement. In this paper, we aim to effectively boost previous approaches and propose a unified deep framework that jointly learns spatial attention maps and channel attention vectors in a principled manner, so as to structure the resulting attention tensors and model the interactions between these two types of attention. Specifically, we integrate the estimation and the interaction of the attentions within a probabilistic representation learning framework, leading to Variational STructured Attention networks (VISTA-Net). We implement the inference rules within the neural network, thus allowing for end-to-end learning of both the probabilistic and the CNN front-end parameters. As demonstrated by our extensive empirical evaluation on six large-scale datasets for dense visual prediction, VISTA-Net outperforms the state of the art in multiple continuous and discrete prediction tasks, confirming the benefit of joint structured spatial-channel attention estimation for deep representation learning. The code is available at https://github.com/ygjwd12345/VISTA-Net.
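The abstract describes structuring a spatial attention map and a channel attention vector into a joint attention tensor that refines deep features. The paper's actual formulation is variational and learned end-to-end; the snippet below is only a minimal NumPy sketch of the underlying idea, with hypothetical pooling-based attentions standing in for the learned ones (`sigmoid`, `structured_attention`, and the pooling choices are assumptions, not the authors' method):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def structured_attention(features):
    """Combine a spatial map and a channel vector into one attention tensor.

    features: array of shape (C, H, W), a deep feature map.
    Returns the refined features and the joint attention tensor.
    """
    # Spatial attention map (H, W): pool over channels, squash to (0, 1).
    spatial = sigmoid(features.mean(axis=0))
    # Channel attention vector (C,): pool over spatial locations.
    channel = sigmoid(features.mean(axis=(1, 2)))
    # Outer product structures the two attentions into a single
    # (C, H, W) tensor, modeling their interaction.
    attention = channel[:, None, None] * spatial[None, :, :]
    return features * attention, attention

feats = np.random.randn(8, 4, 4)
refined, att = structured_attention(feats)
```

In VISTA-Net both attentions are latent variables inferred jointly within the network rather than computed by fixed pooling as above; the sketch only illustrates how a rank-one spatial-channel product yields a structured attention tensor.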

https://hal.inria.fr/hal-03296152
Contributor : Xavier Alameda-Pineda
Submitted on : Thursday, July 22, 2021 - 3:58:44 PM
Last modification on : Wednesday, May 4, 2022 - 12:00:02 PM


Identifiers

  • HAL Id : hal-03296152, version 1
  • ARXIV : 2103.03510

Citation

Guanglei Yang, Paolo Rota, Xavier Alameda-Pineda, Dan Xu, Mingli Ding, et al.. Variational Structured Attention Networks for Deep Visual Representation Learning. 2021. ⟨hal-03296152⟩
