Communication-Efficient Distributed Online Learning with Kernels

Michael Kamp, Sebastian Bothe, Mario Boley, Michael Mock: Communication-Efficient Distributed Online Learning with Kernels. In: Frasconi, Paolo; Landwehr, Niels; Manco, Giuseppe; Vreeken, Jilles (Eds.): Machine Learning and Knowledge Discovery in Databases, pp. 805–819, Springer International Publishing, 2016.

Abstract

We propose an efficient distributed online learning protocol for low-latency real-time services. It extends a previously presented protocol to kernelized online learners that represent their models by a support vector expansion. While such learners often achieve higher predictive performance than their linear counterparts, communicating the support vector expansions becomes inefficient for large numbers of support vectors. The proposed extension allows for a larger class of online learning algorithms—including those alleviating the problem above through model compression. In addition, we characterize the quality of the proposed protocol by introducing a novel criterion that requires the communication to be bounded by the loss suffered.
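The full protocol is specified in the paper linked below. Purely as an illustration of the setting, the following Python sketch shows a kernelized online learner whose model is a support vector expansion, with a simple fixed-budget truncation standing in for model compression, and a naive averaging step that shows why communicating full expansions is costly. The class and function names (BudgetedKernelPerceptron, average_models), the RBF kernel choice, and the oldest-first eviction rule are illustrative assumptions, not the authors' algorithm.

import numpy as np

class BudgetedKernelPerceptron:
    """Online kernel perceptron whose model is a support vector
    expansion f(x) = sum_i alpha_i * k(x_i, x), truncated to a fixed
    budget of support vectors (a crude stand-in for model compression).
    Illustrative sketch only, not the protocol from the paper."""

    def __init__(self, gamma=1.0, budget=50):
        self.gamma = gamma    # RBF kernel bandwidth (assumed kernel choice)
        self.budget = budget  # maximum number of stored support vectors
        self.sv = []          # support vectors x_i
        self.alpha = []       # expansion coefficients alpha_i

    def _kernel(self, x, z):
        # Gaussian (RBF) kernel k(x, z) = exp(-gamma * ||x - z||^2)
        return np.exp(-self.gamma * np.sum((x - z) ** 2))

    def predict_raw(self, x):
        # Evaluate the support vector expansion at x
        return sum(a * self._kernel(s, x) for a, s in zip(self.alpha, self.sv))

    def update(self, x, y):
        # One online round: on a mistake, add x to the expansion;
        # if over budget, drop the oldest support vector (truncation).
        if y * self.predict_raw(x) <= 0:
            self.sv.append(np.asarray(x, dtype=float))
            self.alpha.append(float(y))
            if len(self.sv) > self.budget:
                self.sv.pop(0)
                self.alpha.pop(0)

def average_models(models):
    # Naive averaging of support vector expansions: the average is the
    # union of all local expansions with coefficients scaled by 1/m.
    # Its size is the sum of the local expansion sizes, which is exactly
    # why communicating uncompressed expansions becomes inefficient.
    m = len(models)
    avg = BudgetedKernelPerceptron(gamma=models[0].gamma,
                                   budget=models[0].budget)
    for model in models:
        avg.sv.extend(model.sv)
        avg.alpha.extend(a / m for a in model.alpha)
    return avg

Note how the averaged model carries every local support vector: without compression, each synchronization step multiplies the number of terms that must be communicated, which is the inefficiency the abstract refers to.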

BibTeX

@inproceedings{kamp2016communication,
title = {Communication-Efficient Distributed Online Learning with Kernels},
author = {Michael Kamp and Sebastian Bothe and Mario Boley and Michael Mock},
editor = {Paolo Frasconi and Niels Landwehr and Giuseppe Manco and Jilles Vreeken},
url = {http://michaelkamp.org/wp-content/uploads/2020/03/Paper467.pdf},
year = {2016},
date = {2016-09-16},
urldate = {2016-09-16},
booktitle = {Machine Learning and Knowledge Discovery in Databases},
pages = {805--819},
publisher = {Springer International Publishing},
abstract = {We propose an efficient distributed online learning protocol for low-latency real-time services. It extends a previously presented protocol to kernelized online learners that represent their models by a support vector expansion. While such learners often achieve higher predictive performance than their linear counterparts, communicating the support vector expansions becomes inefficient for large numbers of support vectors. The proposed extension allows for a larger class of online learning algorithms—including those alleviating the problem above through model compression. In addition, we characterize the quality of the proposed protocol by introducing a novel criterion that requires the communication to be bounded by the loss suffered.},
keywords = {communication-efficient, distributed, dynamic averaging, federated learning, kernel methods, parallelization},
pubstate = {published},
tppubtype = {inproceedings}
}
