Training on the Edge: The why and the how

Abstract: Edge computing is the natural progression from cloud computing: instead of collecting all data and processing it centrally, as in a cloud computing environment, computing power is distributed so that as much processing as possible happens close to the source of the data. This model is being adopted quickly for various reasons, including privacy and reduced power and bandwidth requirements on the Edge nodes. While it is common today to run inference on Edge nodes, it is much less common to do training on the Edge. The reasons range from computational limitations to cases where Edge training offers no advantage in reducing communication between Edge nodes. In this paper, we explore scenarios where training on the Edge is advantageous, as well as the use of checkpointing strategies to save memory.
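The checkpointing strategy mentioned in the abstract trades recomputation for memory: store only a subset of forward activations, and recompute the rest segment by segment during the backward pass. Below is a minimal sketch for a toy chain of scalar layers; all names and the layer form are illustrative assumptions, not the paper's implementation.

```python
def layer(x, w):
    """Toy linear layer: y = w * x."""
    return w * x

def dlayer_dx(x, w):
    """Derivative of the toy layer's output with respect to its input."""
    return w

def forward_with_checkpoints(x, weights, every=3):
    """Run the forward pass, storing only every `every`-th activation."""
    ckpts = {0: x}  # always checkpoint the input
    for i, w in enumerate(weights):
        x = layer(x, w)
        if (i + 1) % every == 0:
            ckpts[i + 1] = x
    return x, ckpts

def backward(weights, ckpts):
    """Compute d(output)/d(input), recomputing each segment's
    activations from its checkpoint instead of having stored them all."""
    n = len(weights)
    starts = sorted(ckpts)          # checkpointed layer indices
    ends = starts[1:] + [n]        # each segment spans start..end
    grad = 1.0
    for start, end in zip(reversed(starts), reversed(ends)):
        # recompute the activations inside this segment
        x = ckpts[start]
        acts = [x]
        for w in weights[start:end]:
            x = layer(x, w)
            acts.append(x)
        # backpropagate through the segment
        for i in range(end - 1, start - 1, -1):
            grad *= dlayer_dx(acts[i - start], weights[i])
    return grad
```

With `every = k`, peak activation storage drops from O(n) retained values to roughly O(n/k) checkpoints plus O(k) recomputed values per segment, at the cost of one extra forward pass over each segment.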

Cited literature: 3 references

https://hal.inria.fr/hal-02069728
Contributor: Olivier Beaumont
Submitted on: Friday, March 15, 2019 - 7:41:26 PM
Last modification on: Tuesday, April 2, 2019 - 1:45:33 AM

File

1903.03051.pdf (files produced by the author(s))

Identifiers

  • HAL Id: hal-02069728, version 1

Citation

Navjot Kukreja, Alena Shilova, Olivier Beaumont, Jan Hückelheim, Nicola Ferrier, et al.. Training on the Edge: The why and the how. PAISE2019 - 1st Workshop on Parallel AI and Systems for the Edge, May 2019, Rio de Janeiro, Brazil. ⟨hal-02069728⟩
