
Training with Quantization Noise for Extreme Model Compression

Angela Fan 1,2, Pierre Stock 1,3,4, Benjamin Graham 1, Edouard Grave 1, Rémi Gribonval 4,3, Herve Jegou 1, Armand Joulin 1
3 DANTE - Dynamic Networks : Temporal and Structural Capture Approach
Inria Grenoble - Rhône-Alpes, LIP - Laboratoire de l'Informatique du Parallélisme, IXXI - Institut Rhône-Alpin des systèmes complexes
4 PANAMA - Parcimonie et Nouveaux Algorithmes pour le Signal et la Modélisation Audio
Abstract: We tackle the problem of producing compact models, maximizing their accuracy for a given model size. A standard solution is to train networks with Quantization Aware Training, where the weights are quantized during training and the gradients approximated with the Straight-Through Estimator (STE). In this paper, we extend this approach to work beyond int8 fixed-point quantization with extreme compression methods where the approximations introduced by STE are severe, such as Product Quantization. Our proposal is to quantize only a different random subset of weights during each forward pass, allowing unbiased gradients to flow through the other weights. Controlling the amount of noise and its form allows for extreme compression rates while maintaining the performance of the original model. As a result, we establish new state-of-the-art compromises between accuracy and model size both in natural language processing and image classification. For example, applying our method to state-of-the-art Transformer and ConvNet architectures, we achieve 82.5% accuracy on MNLI by compressing RoBERTa to 14 MB and 80.0% top-1 accuracy on ImageNet by compressing an EfficientNet-B3 to 3.3 MB.
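The core idea of the abstract — fake-quantizing only a random subset of the weights on each forward pass, so the remaining weights receive exact gradients — can be sketched in plain Python. This is a minimal illustration using int8-style scalar quantization; the function names, the `p` parameter, and the fixed scale are illustrative assumptions, not the paper's released implementation (which also covers Product Quantization):

```python
import random

def quantize_int8(w, scale):
    # Fake int8 quantization: round to the nearest integer step,
    # clamp to the signed 8-bit range, and map back to float.
    q = max(-128, min(127, round(w / scale)))
    return q * scale

def quant_noise(weights, p, scale):
    # Quantization noise: each weight is independently replaced by its
    # quantized value with probability p. The untouched (1 - p) fraction
    # passes gradients through unbiased; in a full training setup the
    # quantized fraction would use the Straight-Through Estimator.
    return [quantize_int8(w, scale) if random.random() < p else w
            for w in weights]
```

With `p = 0` the weights are untouched (standard training); with `p = 1` every weight is quantized on every forward pass, recovering full Quantization Aware Training. Intermediate values of `p` trade off noise against gradient bias.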
Document type: Preprints, Working Papers, ...
Contributor: Rémi Gribonval
Submitted on: Tuesday, February 9, 2021 - 5:00:52 PM
Last modification on: Monday, January 10, 2022 - 2:56:02 PM



  • HAL Id: hal-03136442, version 1
  • arXiv: 2004.07320


Angela Fan, Pierre Stock, Benjamin Graham, Edouard Grave, Rémi Gribonval, et al. Training with Quantization Noise for Extreme Model Compression. 2021. ⟨hal-03136442⟩


