Distributed deep learning on edge-devices: feasibility via adaptive compression

Corentin Hardy 1,2, Erwan Le Merrer 1, Bruno Sericola 2
2 DIONYSOS - Dependability Interoperability and perfOrmance aNalYsiS Of networkS, Inria Rennes – Bretagne Atlantique, IRISA_D2 - RÉSEAUX, TÉLÉCOMMUNICATION ET SERVICES
Abstract: A large portion of data mining and analytics services use modern machine learning techniques, such as deep learning. The state-of-the-art results obtained by deep learning come at the price of an intensive use of computing resources. The leading frameworks (e.g., TensorFlow) are executed on GPUs or on high-end servers in datacenters. At the other end of the spectrum, there is a proliferation of personal devices with possibly free CPU cycles; this can enable services to run in users' homes, embedding machine learning operations. In this paper, we ask the following question: is distributed deep learning computation on WAN-connected devices feasible, in spite of the traffic caused by learning tasks? We show that such a setup raises some important challenges, most notably the ingress traffic that the servers hosting the up-to-date model have to sustain. In order to reduce this stress, we propose AdaComp, a novel algorithm for compressing worker updates to the model on the server. Applicable to stochastic gradient descent based approaches, it combines efficient gradient selection and learning rate modulation. We then experiment and measure the impact of compression, device heterogeneity and reliability on the accuracy of learned models, with an emulator platform that embeds TensorFlow into Linux containers. We report a reduction of the total amount of data sent by workers to the server by two orders of magnitude (e.g., a 191-fold reduction for a convolutional network on the MNIST dataset), when compared to a standard asynchronous stochastic gradient descent, while preserving model accuracy.
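To make the two ingredients named in the abstract concrete, below is a minimal sketch of gradient selection and learning rate modulation, written in Python/NumPy (TensorFlow is the framework used in the paper, but plain NumPy keeps the sketch self-contained). The selection rule (global top-k by magnitude), the 1/(1+staleness) damping, and the function names compress_update and apply_update are illustrative assumptions, not the actual AdaComp algorithm from the paper.

import numpy as np

def compress_update(gradient, ratio=0.01):
    # Gradient selection (hypothetical rule): keep only the k largest
    # entries of the flattened gradient by magnitude, and send them to
    # the server as a sparse (indices, values) pair.
    flat = gradient.ravel()
    k = max(1, int(ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def apply_update(params, idx, values, lr=0.01, staleness=0):
    # Learning rate modulation (hypothetical form): the server damps the
    # learning rate of an update by its staleness, i.e., the number of
    # model versions pushed since the worker pulled its copy.
    effective_lr = lr / (1.0 + staleness)
    flat = params.ravel()  # view on the parameter buffer
    flat[idx] -= effective_lr * values
    return params

With ratio=0.01, each worker sends roughly 1% of its gradient entries per update, which matches the order of ingress-traffic reduction (two orders of magnitude) that the abstract reports.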
Document type: Conference paper
IEEE NCA 2017 - 16th IEEE International Symposium on Network Computing and Applications, Oct 2017, Boston, United States.

https://hal.inria.fr/hal-01650936
Contributor: Corentin Hardy
Submitted on: Tuesday, November 28, 2017 - 15:05:44
Last modified on: Wednesday, May 16, 2018 - 11:24:13

File

Deep_Learning_on_edge_devices_...
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01650936, version 1

Citation

Corentin Hardy, Erwan Le Merrer, Bruno Sericola. Distributed deep learning on edge-devices: feasibility via adaptive compression. IEEE NCA 2017 - 16th IEEE International Symposium on Network Computing and Applications, Oct 2017, Boston, United States. ⟨hal-01650936⟩
