Conference paper, 2016

QIM: Quantifying Hyperparameter Importance for Deep Learning

Abstract

Deep Learning (DL) has recently attracted intense interest thanks to breakthroughs in areas such as image processing and face identification. The performance of DL models depends critically on hyperparameter settings, yet existing approaches that quantify the importance of these hyperparameters are time-consuming. In this paper, we propose QIM, a fast approach to quantifying the importance of DL hyperparameters. It leverages the Plackett-Burman design to collect as little data as possible while still correctly quantifying hyperparameter importance. We conducted experiments on the popular deep learning framework Caffe with different datasets to evaluate QIM. The results show that QIM ranks the importance of DL hyperparameters correctly at very low cost.
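The Plackett-Burman design mentioned in the abstract can be sketched briefly. This is not the authors' code but a minimal illustration of the underlying idea: with an 8-run, two-level Plackett-Burman design, up to 7 hyperparameters can be screened with only 8 training runs, and each factor's main effect is estimated from the difference of mean responses at its high and low levels. The generator row is the standard one for N=8; the accuracy values and function names are illustrative assumptions.

```python
import numpy as np

def plackett_burman_8():
    """Build an 8-run Plackett-Burman design for up to 7 two-level factors.

    Rows are runs; columns are factors coded +1 (high) / -1 (low).
    The first 7 rows are cyclic shifts of the standard N=8 generator;
    the last row is all -1. Columns are pairwise orthogonal.
    """
    gen = np.array([1, 1, 1, -1, 1, -1, -1])
    rows = [np.roll(gen, i) for i in range(7)]
    rows.append(-np.ones(7, dtype=int))
    return np.array(rows, dtype=int)

def main_effects(design, responses):
    """Estimate each factor's main effect: mean response at +1 minus mean at -1."""
    responses = np.asarray(responses, dtype=float)
    return np.array([
        responses[design[:, j] == 1].mean() - responses[design[:, j] == -1].mean()
        for j in range(design.shape[1])
    ])

design = plackett_burman_8()
# Hypothetical validation accuracies, one per design row (i.e. per training run).
acc = np.array([0.82, 0.75, 0.88, 0.70, 0.84, 0.73, 0.79, 0.68])
effects = main_effects(design, acc)
ranking = np.argsort(-np.abs(effects))  # hyperparameters, most important first
```

Because the design matrix is orthogonal, each main effect can be estimated independently from the same 8 runs, which is what makes this screening approach so much cheaper than exhaustive or one-factor-at-a-time sweeps.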
Main file: 432484_1_En_15_Chapter.pdf (794.28 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01648007, version 1 (24-11-2017)

License

Attribution (CC BY)

Cite

Dan Jia, Rui Wang, Chengzhong Xu, Zhibin Yu. QIM: Quantifying Hyperparameter Importance for Deep Learning. 13th IFIP International Conference on Network and Parallel Computing (NPC), Oct 2016, Xi'an, China. pp. 180-188, ⟨10.1007/978-3-319-47099-3_15⟩. ⟨hal-01648007⟩
162 views
231 downloads

