Publications

2023

Adilova, Linara; Abourayya, Amr; Li, Jianning: FAM: Relative Flatness Aware Minimization (Proceedings Article). In: Proceedings of the ICML Workshop on Topology, Algebra, and Geometry in Machine Learning (TAG-ML), 2023. (Tags: deep learning, flatness, generalization, machine learning, relative flatness, theory of deep learning)

Adilova, Linara; Kamp, Michael; Andrienko, Gennady: Re-interpreting Rules Interpretability (Journal Article). In: International Journal of Data Science and Analytics, 2023. (Tags: interpretable, machine learning, rule learning, XAI)

Kamp, Michael; Fischer, Jonas; Vreeken, Jilles: Federated Learning from Small Datasets (Proceedings Article). In: International Conference on Learning Representations (ICLR), 2023. (Tags: black-box, black-box parallelization, daisy, daisy-chaining, FedDC, federated learning, small, small datasets)

Mian, Osman; Kaltenpoth, David; Kamp, Michael: Nothing but Regrets - Privacy-Preserving Federated Causal Discovery (Proceedings Article). In: International Conference on Artificial Intelligence and Statistics (AISTATS), 2023. (Tags: causal discovery, causality, explainable, federated, federated causal discovery, federated learning, interpretable)

Mian, Osman; Kamp, Michael; Vreeken, Jilles: Information-Theoretic Causal Discovery and Intervention Detection over Multiple Environments (Proceedings Article). In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2023. (Tags: causal discovery, causality, federated, federated causal discovery, federated learning, intervention)

Li, Jianning; Ferreira, André; Puladi, Behrus; Alves, Victor; Kamp, Michael; Kim, Moon; Nensa, Felix; Kleesiek, Jens; Ahmadi, Seyed-Ahmad; Egger, Jan: Open-source skull reconstruction with MONAI (Journal Article). In: SoftwareX, vol. 23, pp. 101432, 2023.

Adilova, Linara; Chen, Siming; Kamp, Michael: Informed Novelty Detection in Sequential Data by Per-Cluster Modeling (Proceedings Article). In: ICML workshop on Artificial Intelligence & Human Computer Interaction, 2023.
2022

Wang, Junhong; Li, Yun; Zhou, Zhaoyu; Wang, Chengshun; Hou, Yijie; Zhang, Li; Xue, Xiangyang; Kamp, Michael; Zhang, Xiaolong; Chen, Siming: When, Where and How does it fail? A Spatial-temporal Visual Analytics Approach for Interpretable Object Detection in Autonomous Driving (Journal Article). In: IEEE Transactions on Visualization and Computer Graphics, 2022.

Mian, Osman; Kaltenpoth, David; Kamp, Michael: Regret-based Federated Causal Discovery (Proceedings Article). In: The KDD'22 Workshop on Causal Discovery, pp. 61–69, PMLR, 2022.
2021

Petzka, Henning; Kamp, Michael; Adilova, Linara; Sminchisescu, Cristian; Boley, Mario: Relative Flatness and Generalization (Proceedings Article). In: Advances in Neural Information Processing Systems, Curran Associates, Inc., 2021. (Tags: deep learning, flatness, generalization, Hessian, learning theory, relative flatness, theory of deep learning)
Abstract: Flatness of the loss curve is conjectured to be connected to the generalization ability of machine learning models, in particular neural networks. While it has been empirically observed that flatness measures consistently correlate strongly with generalization, it is still an open theoretical problem why and under which circumstances flatness is connected to generalization, in particular in light of reparameterizations that change certain flatness measures but leave generalization unchanged. We investigate the connection between flatness and generalization by relating it to the interpolation from representative data, deriving notions of representativeness, and feature robustness. The notions allow us to rigorously connect flatness and generalization and to identify conditions under which the connection holds. Moreover, they give rise to a novel, but natural relative flatness measure that correlates strongly with generalization, simplifies to ridge regression for ordinary least squares, and solves the reparameterization issue.
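To make the measure concrete, the following is a minimal sketch (in PyTorch, not the authors' reference code) of a simplified relative-flatness proxy: the squared norm of one layer's weights scaled by a Hutchinson estimate of the trace of the loss Hessian with respect to that layer. The function name, the single-layer restriction, and the number of probes are illustrative assumptions; the measure in the paper is defined over pairs of neurons of a chosen feature layer.

```python
# Minimal sketch (not the authors' code): a simplified relative-flatness proxy
# kappa ~ ||W||_F^2 * Tr(H) for one layer, where H is the Hessian of the
# training loss with respect to that layer's weights. Tr(H) is approximated
# with Hutchinson's estimator (Rademacher probes and Hessian-vector products).
import torch

def relative_flatness_proxy(loss, weight, n_probes=10):
    """loss: scalar loss computed from `weight` with the autograd graph intact."""
    grad, = torch.autograd.grad(loss, weight, create_graph=True)
    trace_estimate = 0.0
    for _ in range(n_probes):
        v = torch.randn_like(weight).sign()      # Rademacher probe
        hv, = torch.autograd.grad(grad, weight, grad_outputs=v, retain_graph=True)
        trace_estimate += (v * hv).sum().item()  # v^T H v estimates Tr(H)
    trace_estimate /= n_probes
    return weight.detach().pow(2).sum().item() * trace_estimate
```

A typical call would be relative_flatness_proxy(criterion(model(x), y), model.fc.weight), where model.fc is a stand-in for the feature layer of interest.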
Linsner, Florian; Adilova, Linara; Däubener, Sina; Kamp, Michael; Fischer, Asja: Approaches to Uncertainty Quantification in Federated Deep Learning (Workshop). In: Machine Learning and Principles and Practice of Knowledge Discovery in Databases: International Workshops of ECML PKDD 2021, vol. 2, Springer, 2021. (Tags: federated learning, uncertainty)

Li, Xiaoxiao; Jiang, Meirui; Zhang, Xiaofei; Kamp, Michael; Dou, Qi: FedBN: Federated Learning on Non-IID Features via Local Batch Normalization (Proceedings Article). In: Proceedings of the 9th International Conference on Learning Representations (ICLR), 2021. (Tags: batch normalization, black-box parallelization, deep learning, federated learning)
Abstract: The emerging paradigm of federated learning (FL) strives to enable collaborative training of deep models on the network edge without centrally aggregating raw data and hence improving data privacy. In most cases, the assumption of independent and identically distributed samples across local clients does not hold for federated learning setups. Under this setting, neural network training performance may vary significantly according to the data distribution and even hurt training convergence. Most of the previous work has focused on a difference in the distribution of labels or client shifts. Unlike those settings, we address an important problem of FL, e.g., different scanners/sensors in medical imaging, different scenery distribution in autonomous driving (highway vs. city), where local clients store examples with different distributions compared to other clients, which we denote as feature shift non-iid. In this work, we propose an effective method that uses local batch normalization to alleviate the feature shift before averaging models. The resulting scheme, called FedBN, outperforms both classical FedAvg, as well as the state-of-the-art for non-iid data (FedProx) on our extensive experiments. These empirical results are supported by a convergence analysis that shows in a simplified setting that FedBN has a faster convergence rate than FedAvg. Code is available at https://github.com/med-air/FedBN.
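As a rough illustration of the aggregation rule described in the abstract, here is a hedged sketch of FedBN-style server-side averaging that skips batch-normalization entries, so that every client keeps its own BN statistics and affine parameters. The name-based filter and the equal client weighting are simplifying assumptions; the released code linked above is the authoritative implementation.

```python
# Minimal sketch (simplifying assumptions, not the released FedBN code):
# average all parameters across clients except batch-normalization entries,
# which each client keeps local to absorb its own feature shift.
import torch

def fedbn_aggregate(client_states):
    """client_states: list of model.state_dict() dicts, one per client."""
    def is_batchnorm(name):
        # Name-based heuristic; real code would inspect module types instead.
        return ("bn" in name or "running_mean" in name
                or "running_var" in name or "num_batches_tracked" in name)

    global_state = {}
    for name in client_states[0]:
        if is_batchnorm(name):
            continue  # stays local on each client
        global_state[name] = torch.stack(
            [state[name].float() for state in client_states]
        ).mean(dim=0)
    return global_state

# Each client then loads the result with model.load_state_dict(global_state,
# strict=False), leaving its local batch-norm parameters untouched.
```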
2020

Heppe, Lukas; Kamp, Michael; Adilova, Linara; Piatkowski, Nico; Heinrich, Danny; Morik, Katharina: Resource-Constrained On-Device Learning by Dynamic Averaging (Workshop). In: Proceedings of the Workshop on Parallel, Distributed, and Federated Learning (PDFL) at ECMLPKDD, 2020. (Tags: black-box parallelization, distributed learning, edge computing, embedded, exponential family, FPGA, resource-efficient)
Abstract: The communication between data-generating devices is partially responsible for a growing portion of the world's power consumption. Thus reducing communication is vital, both from an economical and an ecological perspective. For machine learning, on-device learning avoids sending raw data, which can reduce communication substantially. Furthermore, not centralizing the data protects privacy-sensitive data. However, most learning algorithms require hardware with high computation power and thus high energy consumption. In contrast, ultra-low-power processors, like FPGAs or micro-controllers, allow for energy-efficient learning of local models. Combined with communication-efficient distributed learning strategies, this reduces the overall energy consumption and enables applications that were yet impossible due to limited energy on local devices. The major challenge is then that the low-power processors typically only have integer processing capabilities. This paper investigates an approach to communication-efficient on-device learning of integer exponential families that can be executed on low-power processors, is privacy-preserving, and effectively minimizes communication. The empirical evaluation shows that the approach can reach a model quality comparable to a centrally learned regular model with an order of magnitude less communication. Comparing the overall energy consumption, this reduces the required energy for solving the machine learning task by a significant amount.

Petzka, Henning; Adilova, Linara; Kamp, Michael; Sminchisescu, Cristian: Feature-Robustness, Flatness and Generalization Error for Deep Neural Networks (Workshop). 2020. (Tags: deep learning, flatness, generalization, learning theory, loss surface, neural networks, robustness)

Welke, Pascal; Seiffarth, Florian; Kamp, Michael; Wrobel, Stefan: HOPS: Probabilistic Subtree Mining for Small and Large Graphs (Proceedings Article). In: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1275–1284, Association for Computing Machinery, Virtual Event, CA, USA, 2020, ISBN: 9781450379984.
Abstract: Frequent subgraph mining, i.e., the identification of relevant patterns in graph databases, is a well-known data mining problem with high practical relevance, since next to summarizing the data, the resulting patterns can also be used to define powerful domain-specific similarity functions for prediction. In recent years, significant progress has been made towards subgraph mining algorithms that scale to complex graphs by focusing on tree patterns and probabilistically allowing a small amount of incompleteness in the result. Nonetheless, the complexity of the pattern matching component used for deciding subtree isomorphism on arbitrary graphs has significantly limited the scalability of existing approaches. In this paper, we adapt sampling techniques from mathematical combinatorics to the problem of probabilistic subtree mining in arbitrary databases of many small to medium-size graphs or a single large graph. By restricting on tree patterns, we provide an algorithm that approximately counts or decides subtree isomorphism for arbitrary transaction graphs in sub-linear time with one-sided error. Our empirical evaluation on a range of benchmark graph datasets shows that the novel algorithm substantially outperforms state-of-the-art approaches both in the task of approximate counting of embeddings in single large graphs and in probabilistic frequent subtree mining in large databases of small to medium sized graphs.
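The one-sided-error idea can be illustrated with a toy embedding sampler (this is not the HOPS algorithm, which uses a considerably more careful combinatorial sampling scheme and also supports approximate counting): repeatedly attempt a random greedy embedding of the tree pattern into the graph; any successful attempt is a certificate that the subtree occurs, while failures prove nothing.

```python
# Toy illustration of one-sided-error subtree testing (NOT the HOPS algorithm):
# try random greedy embeddings of a tree pattern into a host graph. A success
# certifies that the pattern occurs as a subgraph; a failure is inconclusive.
import random

def random_embedding_trial(tree, graph, root):
    """tree, graph: adjacency dicts {node: iterable of neighbours}; tree must be a tree."""
    image = {root: random.choice(list(graph))}
    used = {image[root]}
    stack = [(root, None)]
    while stack:
        node, parent = stack.pop()
        for child in tree[node]:
            if child == parent:
                continue
            candidates = [v for v in graph[image[node]] if v not in used]
            if not candidates:
                return False              # this random attempt got stuck
            image[child] = random.choice(candidates)
            used.add(image[child])
            stack.append((child, node))
    return True                           # injective embedding found: certificate

def probably_contains(tree, graph, root, trials=1000):
    return any(random_embedding_trial(tree, graph, root) for _ in range(trials))
```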
2019

Kamp, Michael: Black-Box Parallelization for Machine Learning (PhD Thesis). Universitäts- und Landesbibliothek Bonn, 2019. (Tags: averaging, black-box, communication-efficient, convex optimization, deep learning, distributed, dynamic averaging, federated, learning theory, machine learning, parallelization, privacy, radon machine)
Abstract: The landscape of machine learning applications is changing rapidly: large centralized datasets are replaced by high volume, high velocity data streams generated by a vast number of geographically distributed, loosely connected devices, such as mobile phones, smart sensors, autonomous vehicles or industrial machines. Current learning approaches centralize the data and process it in parallel in a cluster or computing center. This has three major disadvantages: (i) it does not scale well with the number of data-generating devices since their growth exceeds that of computing centers, (ii) the communication costs for centralizing the data are prohibitive in many applications, and (iii) it requires sharing potentially privacy-sensitive data. Pushing computation towards the data-generating devices alleviates these problems and allows to employ their otherwise unused computing power. However, current parallel learning approaches are designed for tightly integrated systems with low latency and high bandwidth, not for loosely connected distributed devices. Therefore, I propose a new paradigm for parallelization that treats the learning algorithm as a black box, training local models on distributed devices and aggregating them into a single strong one. Since this requires only exchanging models instead of actual data, the approach is highly scalable, communication-efficient, and privacy-preserving. Following this paradigm, this thesis develops black-box parallelizations for two broad classes of learning algorithms. One approach can be applied to incremental learning algorithms, i.e., those that improve a model in iterations. Based on the utility of aggregations it schedules communication dynamically, adapting it to the hardness of the learning problem. In practice, this leads to a reduction in communication by orders of magnitude. It is analyzed for (i) online learning, in particular in the context of in-stream learning, which allows to guarantee optimal regret and for (ii) batch learning based on empirical risk minimization where optimal convergence can be guaranteed. The other approach is applicable to non-incremental algorithms as well. It uses a novel aggregation method based on the Radon point that allows to achieve provably high model quality with only a single aggregation. This is achieved in polylogarithmic runtime on quasi-polynomially many processors. This relates parallel machine learning to Nick's class of parallel decision problems and is a step towards answering a fundamental open problem about the abilities and limitations of efficient parallel learning algorithms. An empirical study on real distributed systems confirms the potential of the approaches in realistic application scenarios.

Adilova, Linara; Natious, Livin; Chen, Siming; Thonnard, Olivier; Kamp, Michael: System Misuse Detection via Informed Behavior Clustering and Modeling (Workshop). In: 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), IEEE, 2019. (Tags: anomaly detection, cybersecurity, DiSIEM, security, user behavior modelling, visualization)

Petzka, Henning; Adilova, Linara; Kamp, Michael; Sminchisescu, Cristian: A Reparameterization-Invariant Flatness Measure for Deep Neural Networks (Workshop). In: Science meets Engineering of Deep Learning workshop at NeurIPS, 2019. (Tags: deep learning, flatness, generalization, learning theory, loss surface, neural networks, robustness)

Adilova, Linara; Rosenzweig, Julia; Kamp, Michael: Information Theoretic Perspective of Federated Learning (Workshop). In: NeurIPS Workshop on Information Theory and Machine Learning, 2019.
2018

Giesselbach, Sven; Ullrich, Katrin; Kamp, Michael; Paurat, Daniel; Gärtner, Thomas: Corresponding Projections for Orphan Screening (Workshop). In: Proceedings of the ML4H workshop at NeurIPS, 2018. (Tags: corresponding projections, transfer learning, unsupervised)

Nguyen, Phong H.; Chen, Siming; Andrienko, Natalia; Kamp, Michael; Adilova, Linara; Andrienko, Gennady; Thonnard, Olivier; Bessani, Alysson; Turkay, Cagatay: Designing Visualisation Enhancements for SIEM Systems (Workshop). In: 15th IEEE Symposium on Visualization for Cyber Security (VizSec), 2018. (Tags: DiSIEM, SIEM, visual analytics, visualization)

Kamp, Michael; Adilova, Linara; Sicking, Joachim; Hüger, Fabian; Schlicht, Peter; Wirtz, Tim; Wrobel, Stefan: Efficient Decentralized Deep Learning by Dynamic Model Averaging (Proceedings Article). In: Machine Learning and Knowledge Discovery in Databases, Springer, 2018. (Tags: decentralized, deep learning, federated learning)
Abstract: We propose an efficient protocol for decentralized training of deep neural networks from distributed data sources. The proposed protocol allows to handle different phases of model training equally well and to quickly adapt to concept drifts. This leads to a reduction of communication by an order of magnitude compared to periodically communicating state-of-the-art approaches. Moreover, we derive a communication bound that scales well with the hardness of the serialized learning problem. The reduction in communication comes at almost no cost, as the predictive performance remains virtually unchanged. Indeed, the proposed protocol retains loss bounds of periodically averaging schemes. An extensive empirical evaluation validates major improvement of the trade-off between model performance and communication which could be beneficial for numerous decentralized learning applications, such as autonomous driving, or voice recognition and image classification on mobile phones.
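The decision rule at the heart of such dynamic averaging can be sketched as follows (a simplified, centralized version for illustration; the protocol in the paper evaluates a local condition on each learner and synchronizes subsets of models only when that condition is violated): workers train locally, and a full averaging step is triggered only when the mean squared divergence of the local models from the last synchronized reference exceeds a threshold delta.

```python
# Simplified sketch of divergence-triggered (dynamic) model averaging.
# The check is centralized here for readability; parameters are flat vectors.
import numpy as np

def dynamic_averaging_round(local_models, reference, delta):
    """local_models: list of 1-D parameter arrays; reference: last synchronized model."""
    divergence = np.mean([np.sum((w - reference) ** 2) for w in local_models])
    if divergence <= delta:
        return local_models, reference, False   # quiet round: no communication
    average = np.mean(local_models, axis=0)     # synchronize: average all models
    return [average.copy() for _ in local_models], average, True
```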
2017

Ernis, Gunar; Kamp, Michael: Machine Learning für die smarte Produktion (Journal Article). In: VDMA-Nachrichten, pp. 36-37, 2017. (Tags: industry 4.0, machine learning, smart production)

Flouris, Ioannis; Giatrakos, Nikos; Deligiannakis, Antonios; Garofalakis, Minos; Kamp, Michael; Mock, Michael: Issues in Complex Event Processing: Status and Prospects in the Big Data Era (Journal Article). In: Journal of Systems and Software, 2017.

Kamp, Michael; Boley, Mario; Missura, Olana; Gärtner, Thomas: Effective Parallelisation for Machine Learning (Proceedings Article). In: Advances in Neural Information Processing Systems, pp. 6480–6491, 2017. (Tags: decentralized, distributed, machine learning, parallelization, radon)
Abstract: We present a novel parallelisation scheme that simplifies the adaptation of learning algorithms to growing amounts of data as well as growing needs for accurate and confident predictions in critical applications. In contrast to other parallelisation techniques, it can be applied to a broad class of learning algorithms without further mathematical derivations and without writing dedicated code, while at the same time maintaining theoretical performance guarantees. Moreover, our parallelisation scheme is able to reduce the runtime of many learning algorithms to polylogarithmic time on quasi-polynomially many processing units. This is a significant step towards a general answer to an open question on efficient parallelisation of machine learning algorithms in the sense of Nick's Class (NC). The cost of this parallelisation is in the form of a larger sample complexity. Our empirical study confirms the potential of our parallelisation scheme with fixed numbers of processors and instances in realistic application scenarios.
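The aggregation primitive behind this scheme is the Radon point: given at least d + 2 models in R^d, they can be split into two groups whose convex hulls intersect, and a point in that intersection serves as the aggregate. Below is a minimal sketch of one such aggregation step (a hedged illustration only; the paper iterates this aggregation over groups of d + 2 models for several rounds to obtain its guarantees).

```python
# Minimal sketch of a single Radon-point aggregation step.
import numpy as np

def radon_point(points):
    """points: array of shape (r, d) with r >= d + 2 model parameter vectors."""
    r, d = points.shape
    assert r >= d + 2, "need at least d + 2 points for a Radon partition"
    # Find a nontrivial lambda with points.T @ lam = 0 and sum(lam) = 0.
    system = np.vstack([points.T, np.ones((1, r))])   # (d + 1) x r, so a null space exists
    _, _, vh = np.linalg.svd(system)
    lam = vh[-1]                                      # direction in the null space
    positive = lam > 0
    # Both sign groups define the same intersection point of the two convex hulls.
    weights = lam[positive]
    return (weights[:, None] * points[positive]).sum(axis=0) / weights.sum()
```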
Ullrich, Katrin; Kamp, Michael; Gärtner, Thomas; Vogt, Martin; Wrobel, Stefan: Co-regularised support vector regression (Proceedings Article). In: Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 338–354, Springer, 2017. (Tags: co-regularization, transfer learning, unsupervised)
2016

Kamp, Michael; Bothe, Sebastian; Boley, Mario; Mock, Michael: Communication-Efficient Distributed Online Learning with Kernels (Proceedings Article). In: Frasconi, Paolo; Landwehr, Niels; Manco, Giuseppe; Vreeken, Jilles (Ed.): Machine Learning and Knowledge Discovery in Databases, pp. 805–819, Springer International Publishing, 2016. (Tags: communication-efficient, distributed, dynamic averaging, federated learning, kernel methods, parallelization)
Abstract: We propose an efficient distributed online learning protocol for low-latency real-time services. It extends a previously presented protocol to kernelized online learners that represent their models by a support vector expansion. While such learners often achieve higher predictive performance than their linear counterparts, communicating the support vector expansions becomes inefficient for large numbers of support vectors. The proposed extension allows for a larger class of online learning algorithms, including those alleviating the problem above through model compression. In addition, we characterize the quality of the proposed protocol by introducing a novel criterion that requires the communication to be bounded by the loss suffered.

Ullrich, Katrin; Kamp, Michael; Gärtner, Thomas; Vogt, Martin; Wrobel, Stefan: Ligand-based virtual screening with co-regularised support vector regression (Proceedings Article). In: 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW), pp. 261–268, IEEE, 2016. (Tags: biology, chemistry, corresponding projections, semi-supervised)
Abstract: We consider the problem of ligand affinity prediction as a regression task, typically with few labelled examples, many unlabelled instances, and multiple views on the data. In chemoinformatics, the prediction of binding affinities for protein ligands is an important but also challenging task. As protein-ligand bonds trigger biochemical reactions, their characterisation is a crucial step in the process of drug discovery and design. However, the practical determination of ligand affinities is very expensive, whereas unlabelled compounds are available in abundance. Additionally, many different vectorial representations for compounds (molecular fingerprints) exist that cover different sets of features. To this task we propose to apply a co-regularisation approach, which extracts information from unlabelled examples by ensuring that individual models trained on different fingerprints make similar predictions. We extend support vector regression similarly to the existing co-regularised least squares regression (CoRLSR) and obtain a co-regularised support vector regression (CoSVR). We empirically evaluate the performance of CoSVR on various protein-ligand datasets. We show that CoSVR outperforms CoRLSR as well as existing state-of-the-art approaches that do not take unlabelled molecules into account. Additionally, we provide a theoretical bound on the Rademacher complexity for CoSVR.
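To make the co-regularisation idea concrete, a generic form of the objective reads roughly as follows (a hedged paraphrase of the construction described in the abstract, with an epsilon-insensitive SVR loss per fingerprint view and a disagreement penalty on unlabelled compounds; the symbols are illustrative, not the paper's exact notation):

```latex
\min_{f_1,\dots,f_M}\;
  \sum_{v=1}^{M} \Big( \tfrac{1}{2}\lVert f_v\rVert_{\mathcal{H}_v}^{2}
    + C \sum_{i=1}^{n} \ell_\varepsilon\big(y_i - f_v(x_i^{(v)})\big) \Big)
  + \lambda \sum_{u \in U} \sum_{v < v'} \big(f_v(x_u^{(v)}) - f_{v'}(x_u^{(v')})\big)^{2},
\qquad \ell_\varepsilon(t) = \max(0, \lvert t\rvert - \varepsilon)
```

Here M is the number of fingerprint views, n indexes the labelled compounds, U is the set of unlabelled compounds, and lambda controls how strongly the per-view regressors are pulled towards agreeing on unlabelled data.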
2015

Kamp, Michael; Boley, Mario; Gärtner, Thomas: Parallelizing Randomized Convex Optimization (Workshop). In: Proceedings of the 8th NIPS Workshop on Optimization for Machine Learning, 2015.
2014

Kamp, Michael; Boley, Mario; Gärtner, Thomas: Beating Human Analysts in Nowcasting Corporate Earnings by using Publicly Available Stock Price and Correlation Features (Proceedings Article). In: Proceedings of the SIAM International Conference on Data Mining, pp. 641–649, SIAM, 2014.

Kamp, Michael; Boley, Mario; Keren, Daniel; Schuster, Assaf; Sharfman, Izchak: Communication-Efficient Distributed Online Prediction by Dynamic Model Synchronization (Proceedings Article). In: European Conference on Machine Learning and Principles and Practice of Knowledge Discovery (ECMLPKDD), Springer, 2014.

Kamp, Michael; Boley, Mario; Mock, Michael; Keren, Daniel; Schuster, Assaf; Sharfman, Izchak: Adaptive Communication Bounds for Distributed Online Learning (Workshop). In: Proceedings of the 7th NIPS Workshop on Optimization for Machine Learning, 2014.
2013

Kamp, Michael; Kopp, Christine; Mock, Michael; Boley, Mario; May, Michael: Privacy-preserving mobility monitoring using sketches of stationary sensor readings (Proceedings Article). In: Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 370–386, Springer, 2013.

Kamp, Michael; Boley, Mario; Gärtner, Thomas (Workshop). In: 2013 IEEE 13th International Conference on Data Mining Workshops, IEEE, 2013.

Boley, Mario; Kamp, Michael; Keren, Daniel; Schuster, Assaf; Sharfman, Izchak: Communication-Efficient Distributed Online Prediction using Dynamic Model Synchronizations (Workshop). In: First International Workshop on Big Dynamic Distributed Data (BD3) at the International Conference on Very Large Data Bases (VLDB), 2013.

Kamp, Michael; Manea, Andrei: STONES: Stochastic Technique for Generating Songs (Workshop). In: Proceedings of the NIPS Workshop on Constructive Machine Learning (CML), 2013.