Low-power neural networks for semantic segmentation of satellite images

Abstract: Semantic segmentation methods have made impressive progress with deep learning. However, while achieving higher and higher accuracy, state-of-the-art neural networks overlook the complexity of architectures, which typically feature dozens of millions of trainable parameters. Consequently, these networks require high computational resources and are mostly not suited to run on edge devices with tight resource constraints, such as phones, drones, or satellites. In this work, we propose two highly compact neural network architectures for semantic segmentation of images, which are up to 100,000 times less complex than state-of-the-art architectures while approaching their accuracy. To decrease the complexity of existing networks, our main ideas consist in exploiting lightweight encoders and decoders with depth-wise separable convolutions and decreasing memory usage by removing skip connections between encoder and decoder. Our architectures are designed to be implemented on a basic FPGA, such as the one featured on the Intel Altera Cyclone V family of SoCs. We demonstrate the potential of our solutions in the case of binary segmentation of remote sensing images, in particular for extracting clouds and trees from RGB satellite images.
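The complexity reduction from depth-wise separable convolutions mentioned in the abstract can be illustrated with a minimal NumPy sketch (this is an illustrative reimplementation, not the authors' FPGA code): a standard convolution uses K·K·C_in·C_out weights, while the separable version splits the operation into a per-channel spatial (depthwise) pass and a 1×1 channel-mixing (pointwise) pass, needing only K·K·C_in + C_in·C_out weights.

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Depth-wise separable convolution (valid padding, stride 1).

    x:          input feature map, shape (H, W, C_in)
    dw_kernels: one K x K spatial filter per input channel, shape (K, K, C_in)
    pw_weights: 1x1 pointwise weights mixing channels, shape (C_in, C_out)
    """
    H, W, C_in = x.shape
    K = dw_kernels.shape[0]
    Ho, Wo = H - K + 1, W - K + 1
    # Depthwise pass: each input channel is convolved with its own filter,
    # so no cross-channel mixing happens here.
    dw = np.zeros((Ho, Wo, C_in))
    for c in range(C_in):
        for i in range(Ho):
            for j in range(Wo):
                dw[i, j, c] = np.sum(x[i:i + K, j:j + K, c] * dw_kernels[:, :, c])
    # Pointwise pass: a 1x1 convolution is just a matrix product over channels.
    return dw @ pw_weights

def param_counts(K, C_in, C_out):
    """Trainable-weight counts for a standard vs. a separable convolution."""
    standard = K * K * C_in * C_out
    separable = K * K * C_in + C_in * C_out
    return standard, separable
```

For a typical 3×3 layer with 64 input and 128 output channels, `param_counts(3, 64, 128)` gives 73,728 weights for the standard convolution versus 8,768 for the separable one, roughly an 8× reduction per layer, which compounds across the encoder and decoder.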
Cited literature: 44 references

Contributor: Florent Lafarge
Submitted on: Tuesday, September 3, 2019 - 1:09:01 PM
Last modification on: Wednesday, September 4, 2019 - 1:21:35 AM



  • HAL Id: hal-02277061, version 1


Gaétan Bahl, Lionel Daniel, Matthieu Moretti, Florent Lafarge. Low-power neural networks for semantic segmentation of satellite images. ICCV Workshop on Low-Power Computer Vision, 2019, Seoul, South Korea. ⟨hal-02277061⟩

