Distilled Hierarchical Neural Ensembles with Adaptive Inference Cost - Inria - Institut national de recherche en sciences et technologies du numérique
Preprint, Working Paper. Year: 2020

Distilled Hierarchical Neural Ensembles with Adaptive Inference Cost

Abstract

Deep neural networks form the basis of state-of-the-art models across a variety of application domains. Networks that can dynamically adapt the computational cost of inference are important in scenarios where the amount of compute or input data varies over time. In this paper, we propose Hierarchical Neural Ensembles (HNE), a novel framework that embeds an ensemble of multiple networks by sharing intermediate layers through a hierarchical structure. In HNE we control the inference cost by evaluating only a subset of models, which are organized in a nested manner. Our second contribution is a novel co-distillation method to boost the performance of ensemble predictions at low inference cost. This approach leverages the nested structure of our ensembles to optimally allocate accuracy and diversity across the ensemble members. Comprehensive experiments on the CIFAR and ImageNet datasets confirm the effectiveness of HNE for building deep networks with adaptive inference cost for image classification.
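The two mechanisms the abstract describes, layer sharing through a hierarchy and inference over a nested subset of members, can be illustrated with a toy sketch. The code below is a hypothetical interpretation, not the authors' implementation: ensemble members are the leaves of a perfect binary tree of "layers" (here, toy affine-tanh maps), members that share a root-to-leaf path prefix reuse those intermediate computations, and evaluating only the first k leaves gives a nested, cheaper sub-ensemble. All function names (`build_tree`, `hne_predict`, etc.) are illustrative.

```python
# Hypothetical sketch of a hierarchical neural ensemble with adaptive
# inference cost. Layers are toy affine+tanh maps; members are tree leaves.
import numpy as np

rng = np.random.default_rng(0)

def make_layer(dim):
    """A toy stand-in for a network stage: random affine map + tanh."""
    W = rng.standard_normal((dim, dim)) * 0.1
    b = rng.standard_normal(dim) * 0.1
    return lambda x: np.tanh(x @ W + b)

def build_tree(depth, dim):
    """One layer per node of a perfect binary tree, keyed by heap index."""
    return {i: make_layer(dim) for i in range(2 ** (depth + 1) - 1)}

def member_path(leaf, depth):
    """Heap indices from the root down to leaf number `leaf` (0-based)."""
    idx = 2 ** depth - 1 + leaf
    path = []
    while idx >= 0:
        path.append(idx)
        idx = (idx - 1) // 2 if idx > 0 else -1
    return path[::-1]

def hne_predict(tree, x, depth, k):
    """Average the first k (nested) members, caching shared path prefixes.

    Returns the ensemble prediction and the number of layers actually
    evaluated, which grows sublinearly in k thanks to sharing.
    """
    cache, outs = {}, []
    for leaf in range(k):
        h = x
        for idx in member_path(leaf, depth):
            if idx not in cache:          # shared prefix: compute once
                cache[idx] = tree[idx](h)
            h = cache[idx]
        outs.append(h)
    return np.mean(outs, axis=0), len(cache)
```

With depth 2 (four members, seven layers in total), evaluating one member touches 3 layers while evaluating all four touches only 7, rather than the 12 a set of four independent 3-layer networks would need; intermediate k values interpolate between the two, which is the adaptive-cost behaviour the abstract describes.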
Main file

Distilled_Hierarchical_Neural_Ensembles_with_Adaptive_Inference_Cost__Arxiv_.pdf (2.26 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02500660 , version 1 (06-03-2020)

Identifiers

  • HAL Id : hal-02500660 , version 1

Cite

Adrià Ruiz, Jakob Verbeek. Distilled Hierarchical Neural Ensembles with Adaptive Inference Cost. 2020. ⟨hal-02500660⟩
238 views
417 downloads
