2020
Petzka, Henning; Adilova, Linara; Kamp, Michael; Sminchisescu, Cristian: Feature-Robustness, Flatness and Generalization Error for Deep Neural Networks. Unpublished, arXiv preprint arXiv:2001.00939, 2020.
URL: http://michaelkamp.org/wp-content/uploads/2020/01/flatnessFeatureRobustnessGeneralization.pdf
Tags: deep learning, flatness, generalization, learning theory, loss surface, neural networks, robustness
2019
Kamp, Michael: Black-Box Parallelization for Machine Learning. PhD Thesis, Universitäts- und Landesbibliothek Bonn, 2019.
URL: https://d-nb.info/1200020057/34
Tags: averaging, black-box, communication-efficient, convex optimization, deep learning, distributed, dynamic averaging, federated, learning theory, machine learning, parallelization, privacy, radon machine
Abstract: The landscape of machine learning applications is changing rapidly: large centralized datasets are being replaced by high-volume, high-velocity data streams generated by a vast number of geographically distributed, loosely connected devices, such as mobile phones, smart sensors, autonomous vehicles, or industrial machines. Current learning approaches centralize the data and process it in parallel in a cluster or computing center. This has three major disadvantages: (i) it does not scale well with the number of data-generating devices, since their growth exceeds that of computing centers; (ii) the communication costs for centralizing the data are prohibitive in many applications; and (iii) it requires sharing potentially privacy-sensitive data. Pushing computation towards the data-generating devices alleviates these problems and allows their otherwise unused computing power to be employed. However, current parallel learning approaches are designed for tightly integrated systems with low latency and high bandwidth, not for loosely connected distributed devices. Therefore, I propose a new paradigm for parallelization that treats the learning algorithm as a black box, training local models on distributed devices and aggregating them into a single strong one. Since this requires exchanging only models instead of actual data, the approach is highly scalable, communication-efficient, and privacy-preserving. Following this paradigm, this thesis develops black-box parallelizations for two broad classes of learning algorithms. One approach applies to incremental learning algorithms, i.e., those that improve a model over iterations. Based on the utility of aggregations, it schedules communication dynamically, adapting it to the hardness of the learning problem. In practice, this reduces communication by orders of magnitude. It is analyzed for (i) online learning, in particular in-stream learning, where optimal regret can be guaranteed, and (ii) batch learning based on empirical risk minimization, where optimal convergence can be guaranteed. The other approach is applicable to non-incremental algorithms as well. It uses a novel aggregation method based on the Radon point that achieves provably high model quality with only a single aggregation, in polylogarithmic runtime on quasi-polynomially many processors. This relates parallel machine learning to Nick’s class of parallel decision problems and is a step towards answering a fundamental open problem about the abilities and limitations of efficient parallel learning algorithms. An empirical study on real distributed systems confirms the potential of the approaches in realistic application scenarios.
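The second approach in this abstract aggregates local models via their Radon point. As a rough illustration of that idea, here is a minimal NumPy sketch based on the standard definition of a Radon point (not code from the thesis; the function name and toy data are made up for this example):

import numpy as np

def radon_point(models):
    """Compute a Radon point of at least d+2 parameter vectors in R^d.

    A Radon point lies in the intersection of the convex hulls of the two
    parts of a Radon partition; Radon-point aggregation combines local
    models into one. Illustrative sketch only.
    """
    X = np.asarray(models, dtype=float)      # shape (n, d) with n >= d + 2
    n, d = X.shape
    assert n >= d + 2, "need at least d+2 points for a Radon point"
    # Find non-trivial lambda with  sum_i lambda_i * x_i = 0  and  sum_i lambda_i = 0.
    A = np.vstack([X.T, np.ones(n)])         # shape (d+1, n)
    _, _, Vt = np.linalg.svd(A)
    lam = Vt[-1]                             # null-space vector of A
    pos = lam > 0
    # Both sign groups of the partition yield the same point; use the positive part.
    return (lam[pos] @ X[pos]) / lam[pos].sum()

# Example: aggregate 4 local models with 2 parameters each (d = 2, d + 2 = 4).
local_models = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(radon_point(local_models))             # -> [0.5 0.5]

Roughly speaking, repeating this aggregation over several levels of a tree of Radon points is what underlies the polylogarithmic runtime on quasi-polynomially many processors mentioned in the abstract.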
Petzka, Henning; Adilova, Linara; Kamp, Michael; Sminchisescu, Cristian: A Reparameterization-Invariant Flatness Measure for Deep Neural Networks. Science meets Engineering of Deep Learning workshop at NeurIPS, 2019.
URL: https://arxiv.org/pdf/1912.00058
Tags: deep learning, flatness, generalization, learning theory, loss surface, neural networks, robustness
2018
Kamp, Michael; Adilova, Linara; Sicking, Joachim; Hüger, Fabian; Schlicht, Peter; Wirtz, Tim; Wrobel, Stefan: Efficient Decentralized Deep Learning by Dynamic Model Averaging. In: Machine Learning and Knowledge Discovery in Databases, Springer, 2018.
URL: http://michaelkamp.org/wp-content/uploads/2018/07/commEffDeepLearning_extended.pdf
Tags: decentralized, deep learning, federated learning
Abstract: We propose an efficient protocol for decentralized training of deep neural networks from distributed data sources. The proposed protocol handles different phases of model training equally well and adapts quickly to concept drift. This reduces communication by an order of magnitude compared to periodically communicating state-of-the-art approaches. Moreover, we derive a communication bound that scales well with the hardness of the serialized learning problem. The reduction in communication comes at almost no cost, as the predictive performance remains virtually unchanged; indeed, the proposed protocol retains the loss bounds of periodic averaging schemes. An extensive empirical evaluation shows a major improvement in the trade-off between model performance and communication, which could benefit numerous decentralized learning applications, such as autonomous driving, or voice recognition and image classification on mobile phones.
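Dynamic scheduling of this kind communicates only when the locally trained models have drifted sufficiently far apart. The following is a minimal sketch of one divergence-triggered averaging step; the trigger condition, the threshold name delta, and all identifiers are assumptions for illustration, not the implementation from the paper:

import numpy as np

def dynamic_averaging_round(local_models, reference, delta):
    """One synchronization check of a divergence-triggered averaging scheme.

    local_models: list of parameter vectors, one per worker.
    reference:    parameters of the last jointly averaged model.
    delta:        divergence threshold controlling communication.

    Returns the (possibly unchanged) local models, the new reference model,
    and whether a full synchronization took place. Illustrative sketch only.
    """
    W = np.asarray(local_models, dtype=float)
    divergence = np.mean(np.sum((W - reference) ** 2, axis=1))
    if divergence <= delta:
        return local_models, reference, False   # stay local, no communication
    averaged = W.mean(axis=0)                    # synchronize: average all models
    return [averaged.copy() for _ in local_models], averaged, True

# Toy usage: three workers with two parameters each.
ref = np.zeros(2)
workers = [np.array([0.1, 0.0]), np.array([0.0, 0.2]), np.array([0.3, 0.1])]
workers, ref, synced = dynamic_averaging_round(workers, ref, delta=0.02)
print(synced, ref)

In quiet phases of training such a condition rarely fires and the workers keep training locally; when the models diverge quickly, for example under concept drift, they are re-synchronized, which matches the behaviour the abstract describes.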