Resource-Constrained On-Device Learning by Dynamic Averaging

Lukas Heppe, Michael Kamp, Linara Adilova, Nico Piatkowski, Danny Heinrich, Katharina Morik: Resource-Constrained On-Device Learning by Dynamic Averaging. Proceedings of the Workshop on Parallel, Distributed, and Federated Learning (PDFL) at ECMLPKDD, 2020.

Abstract

Communication between data-generating devices is partially responsible for a growing share of the world's power consumption. Reducing communication is therefore vital from both an economic and an ecological perspective. For machine learning, on-device learning avoids sending raw data, which can reduce communication substantially. Moreover, keeping the data decentralized protects privacy-sensitive information. However, most learning algorithms require hardware with high computational power and thus high energy consumption. In contrast, ultra-low-power processors, such as FPGAs or micro-controllers, allow for energy-efficient learning of local models. Combined with communication-efficient distributed learning strategies, this reduces the overall energy consumption and enables applications that were previously impossible due to the limited energy available on local devices. The major challenge is that such low-power processors typically only have integer processing capabilities. This paper investigates an approach to communication-efficient on-device learning of integer exponential families that can be executed on low-power processors, is privacy-preserving, and effectively minimizes communication. The empirical evaluation shows that the approach can reach a model quality comparable to a centrally learned regular model with an order of magnitude less communication. A comparison of the overall energy consumption shows that this reduces the energy required to solve the machine learning task by a significant amount.
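The communication savings come from the dynamic averaging idea: devices train local models independently and only exchange parameters when the local models have drifted sufficiently far from a shared reference, at which point they are averaged and redistributed. The following is a minimal Python sketch of such a divergence-triggered averaging round, assuming integer parameter vectors and an average-divergence trigger; the function names, the squared-Euclidean divergence, and the threshold semantics are illustrative assumptions and not the paper's actual protocol or API.

```python
# Illustrative sketch of a divergence-triggered averaging round.
# Assumptions (not from the paper): squared Euclidean divergence,
# a single synchronized check of the average divergence, and rounding
# to keep averaged parameters in the integer domain.
import numpy as np

def local_divergence(theta, reference):
    """Squared Euclidean distance between a local model and the shared reference."""
    return float(np.sum((theta - reference) ** 2))

def dynamic_averaging_round(local_models, reference, delta):
    """If the average divergence exceeds delta, average all local integer models
    and use the result as the new reference; otherwise keep training locally."""
    avg_div = np.mean([local_divergence(t, reference) for t in local_models])
    if avg_div <= delta:
        return local_models, reference, False  # models still close: no communication
    averaged = np.rint(np.mean(local_models, axis=0)).astype(int)
    synced = [averaged.copy() for _ in local_models]
    return synced, averaged, True

# Toy usage: three devices whose integer parameters have drifted apart.
reference = np.zeros(4, dtype=int)
models = [np.array([2, -1, 3, 0]), np.array([1, 0, 2, 1]), np.array([3, -2, 4, 0])]
models, reference, communicated = dynamic_averaging_round(models, reference, delta=2.0)
print(communicated, reference)  # True [ 2 -1  3  0]
```

In this sketch, rounds in which the models stay close cost no communication at all, which is where the order-of-magnitude savings reported in the paper would come from; how the divergence is measured and checked in a decentralized way is exactly what the actual protocol specifies.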

BibTeX

@workshop{heppe2020resource,
title = {Resource-Constrained On-Device Learning by Dynamic Averaging},
author = {Lukas Heppe and Michael Kamp and Linara Adilova and Nico Piatkowski and Danny Heinrich and Katharina Morik},
url = {https://michaelkamp.org/wp-content/uploads/2020/10/Resource_Constrained_Federated_Learning-1.pdf},
year  = {2020},
date = {2020-09-14},
urldate = {2020-09-14},
booktitle = {Proceedings of the Workshop on Parallel, Distributed, and Federated Learning (PDFL) at ECMLPKDD},
abstract = {Communication between data-generating devices is partially responsible for a growing share of the world's power consumption. Reducing communication is therefore vital from both an economic and an ecological perspective. For machine learning, on-device learning avoids sending raw data, which can reduce communication substantially. Moreover, keeping the data decentralized protects privacy-sensitive information. However, most learning algorithms require hardware with high computational power and thus high energy consumption. In contrast, ultra-low-power processors, such as FPGAs or micro-controllers, allow for energy-efficient learning of local models. Combined with communication-efficient distributed learning strategies, this reduces the overall energy consumption and enables applications that were previously impossible due to the limited energy available on local devices. The major challenge is that such low-power processors typically only have integer processing capabilities. This paper investigates an approach to communication-efficient on-device learning of integer exponential families that can be executed on low-power processors, is privacy-preserving, and effectively minimizes communication. The empirical evaluation shows that the approach can reach a model quality comparable to a centrally learned regular model with an order of magnitude less communication. A comparison of the overall energy consumption shows that this reduces the energy required to solve the machine learning task by a significant amount.},
keywords = {black-box parallelization, distributed learning, edge computing, embedded, exponential family, FPGA, resource-efficient},
pubstate = {published},
tppubtype = {workshop}
}
