
A Comparative Study of Neural Network Compression

Hossein Baktash (1, 2), Emanuele Natale (3), Laurent Viennot (4)

3. COATI - Combinatorics, Optimization and Algorithms for Telecommunications, CRISAM - Inria Sophia Antipolis - Méditerranée, Laboratoire I3S - COMRED - COMmunications, Réseaux, systèmes Embarqués et Distribués
4. GANG - Networks, Graphs and Algorithms, IRIF (UMR_8243) - Institut de Recherche en Informatique Fondamentale, Inria de Paris
Abstract: There has recently been an increasing desire to evaluate neural networks locally on computationally-limited devices, in order to exploit their effectiveness for several applications. This effectiveness has nevertheless come with a considerable increase in the size of modern neural networks, which constitutes a major drawback in such computationally-limited settings and has created a demand for neural network compression techniques. Several proposals in this direction have been made, famously including hashing-based methods and pruning-based ones. However, the evaluation of the efficacy of these techniques has so far been heterogeneous, with no clear evidence in favor of any of them over the others. The goal of this work is to address this issue by providing a comparative study. While most previous studies test the capability of a technique in reducing the number of parameters of state-of-the-art networks, we follow [CWT+15] in evaluating their performance on basic architectures on the MNIST dataset and variants of it, which allows for a clearer analysis of some aspects of their behavior. To the best of our knowledge, we are the first to directly compare well-known approaches such as HashedNet, Optimal Brain Damage (OBD), and magnitude-based pruning with L1 and L2 regularization, both against each other and against equivalent-size feed-forward neural networks with simple (fully-connected) and structured (convolutional) architectures. Rather surprisingly, our experiments show that (iterative) pruning-based methods are substantially better than the HashedNet architecture, whose compression offers no clear advantage over a carefully chosen convolutional network. We also show that, when the compression level is high, the famous OBD pruning heuristic deteriorates to the point of being less efficient than simple magnitude-based techniques.
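For readers unfamiliar with the pruning baseline discussed above, the following is a minimal NumPy sketch of iterative magnitude-based pruning; it is not the authors' implementation. The helpers get_weights, set_weights, and train_one_epoch are hypothetical placeholders for whatever training framework is in use. OBD differs only in the saliency used to rank weights: the second-order score H_ii * w_i^2 / 2 (with H_ii the diagonal of the Hessian of the loss) instead of the magnitude |w_i|.

    import numpy as np

    def magnitude_prune(weights, sparsity):
        """Zero out the `sparsity` fraction of entries with smallest magnitude.

        Returns the pruned weights and a boolean mask of surviving entries.
        """
        flat = np.abs(weights).ravel()
        k = int(sparsity * flat.size)
        if k == 0:
            return weights.copy(), np.ones_like(weights, dtype=bool)
        threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
        mask = np.abs(weights) > threshold            # ties at the threshold are pruned too
        return weights * mask, mask

    def iterative_prune(model, final_sparsity=0.9, steps=5, epochs_per_step=2):
        """Prune-retrain loop: ramp sparsity up gradually instead of in one shot.

        `model`, `get_weights`, `set_weights`, and `train_one_epoch` are
        hypothetical placeholders, not part of any specific library.
        """
        for step in range(1, steps + 1):
            sparsity = final_sparsity * step / steps
            pruned, mask = magnitude_prune(get_weights(model), sparsity)
            set_weights(model, pruned)
            for _ in range(epochs_per_step):
                # Retraining must keep pruned weights frozen at zero,
                # e.g. by re-applying `mask` to the weights or gradients.
                train_one_epoch(model, mask)

As a quick check, magnitude_prune(np.array([[0.3, -0.05], [0.001, 1.2]]), 0.5) keeps only the two largest-magnitude entries, 0.3 and 1.2.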
Document type: Reports

https://hal.inria.fr/hal-02321581
Contributor: Hossein Baktash
Submitted on: Monday, October 21, 2019 - 12:48:11 PM
Last modification on: Monday, October 12, 2020 - 10:30:40 AM
Long-term archiving on: Wednesday, January 22, 2020 - 2:42:50 PM

Files

A_Comparative_Study_of_Neural_...
Files produced by the author(s)

Identifiers

  • HAL Id: hal-02321581, version 1
  • arXiv: 1910.11144

Citation

Hossein Baktash, Emanuele Natale, Laurent Viennot. A Comparative Study of Neural Network Compression. [Research Report] INRIA Sophia Antipolis - I3S. 2019. ⟨hal-02321581⟩
