- We are organizing the third edition of our Workshop on Parallel, Distributed, and Federated Learning (PDFL’20) at ECMLPKDD 2020 (A, top 18%). Note that we have renamed the workshop from DMLE to PDFL to cover a wider range of novel topics. We are looking forward to your contributions (CfP).
- I am co-chair of the 2nd International Workshop on Machine Learning for Cybersecurity (MLCS’20) at ECMLPKDD 2020 (A, top 18%). We are looking forward to your contributions (CfP).
- I am co-chair of the 2nd International Workshop on Data-Centric Dependability and Security (DCDS’20) at this year’s DSN (A, top 18%). We are looking forward to your contributions (CfP).
- My colleagues Linara Adilova, Julia Rosenzweig, and I presented our paper “Information-Theoretic Perspective of Federated Learning” in the workshop on Information Theory and Machine Learning at NeurIPS 2019 (A*, top 4%).
- My colleagues Henning Petzka, Linara Adilova, Cristian Sminchisescu, and I presented our paper “A Reparameterization-Invariant Flatness Measure for Deep Neural Networks” in the Science meets Engineering of Deep Learning workshop at NeurIPS 2019 (A*, top 4%).
- I received best reviewer awards at NeurIPS 2019 and ICML 2019.
- Over the last two semesters, my colleague Pascal Welke and I supervised two seminars on learning theory, in which the students studied the book “Understanding Machine Learning: From Theory to Algorithms” by Shai Shalev-Shwartz and Shai Ben-David. This resulted in two great summaries, one on the theoretical part and one on the algorithmic part. Really great work by all the students!
My main research interest is efficient parallelization of machine learning and data mining algorithms. Many of today’s parallel machine learning algorithms were developed for tightly coupled systems such as computing clusters or clouds. However, the volumes of data generated by machine-to-machine interaction, mobile phones, or autonomous vehicles surpass what can realistically be centralized, rendering traditional cloud computing approaches infeasible. To scale parallel machine learning to such volumes of data, computation needs to be pushed towards the data-generating devices.

An efficient parallelization scales a machine learning algorithm – or, better, a class of algorithms – to large numbers of parallel instances, thereby achieving a substantial speed-up, while the resulting model has a quality similar to that of a hypothetical centrally computed one. I am interested in parallelizations both of classical machine learning algorithms on batch data and of online learning and optimization algorithms; the latter are especially suited for distributed and decentralized learning from data streams. The approaches I seek to parallelize are often based on linear models or kernel methods, as well as decentralized deep learning.

I also work on the theoretical foundations of deep learning, interpretability, informed machine learning, and multi-view, semi-supervised machine learning. Application areas I often consider when looking for novel machine learning challenges include autonomous driving, cybersecurity, real-time services, financial analysis, and chemoinformatics.
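The basic idea behind this kind of parallelization can be illustrated with a minimal sketch in plain NumPy (a generic toy example with made-up names, not code from any of my papers): each simulated device runs SGD on a linear model over its own data shard, and the local models are then aggregated by averaging their parameters, so that no raw data ever has to be centralized.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.01, epochs=5):
    """Plain SGD for least-squares linear regression on one device's shard."""
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            grad = (X[i] @ w - y[i]) * X[i]  # gradient of 0.5 * (x·w - y)^2
            w = w - lr * grad
    return w

# Synthetic data, split across 4 simulated devices.
w_true = np.array([2.0, -1.0, 0.5])
shards = []
for _ in range(4):
    X = rng.normal(size=(200, 3))
    y = X @ w_true + 0.1 * rng.normal(size=200)
    shards.append((X, y))

# One communication round: train locally on each device,
# then aggregate by averaging the model parameters.
w_global = np.zeros(3)
local_models = [local_sgd(w_global.copy(), X, y) for X, y in shards]
w_global = np.mean(local_models, axis=0)

print(w_global)  # close to w_true, without centralizing any data
```

In practice, the interesting questions are how often such aggregation rounds are needed, how to trigger them adaptively from data streams, and what quality guarantees the averaged model retains compared to a centrally trained one.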
Curriculum Vitae – Highlights
I am a postdoctoral research fellow at Monash University. From 2011 to 2019, I was a data scientist at Fraunhofer IAIS, where I led Fraunhofer’s part in the EU project DiSIEM, managing a small research team. Moreover, I was a project-specific consultant and researcher, e.g., for Volkswagen, DHL, and Hussel, and I designed and gave industrial training courses. From 2014 onward, I was simultaneously a doctoral researcher at the University of Bonn, teaching graduate labs and seminars and supervising Master’s and Bachelor’s theses. Before that, I worked for 10 years as a software developer.