Home

News

  • We presented a paper on Corresponding Projections for Orphan Screening at the ML4H workshop at NeurIPS’18 (A*, top 4%); a pre-print of the paper is online.
  • Our paper on Efficient Decentralized Deep Learning by Dynamic Model Averaging was accepted at ECML PKDD’18 (A, top 18%); a pre-print of the paper is online (joint work with colleagues from Volkswagen Group Research).
  • I am co-chair of the workshop on Decentralized Machine Learning at the Edge (DMLE’18) at this year’s ECML PKDD, the largest European conference on Machine Learning and Data Mining (A, top 18%). Proceedings will be freely available online at ceur-ws.org.
  • We presented a paper on Effective Parallelisation for Machine Learning at NIPS’17 (A*, top 4%); the print version and a teaser video are online (joint work with colleagues and friends from the University of Nottingham, the Max Planck Institute for Informatics, and Google).
  • We published a paper on Co-regularised support vector regression at ECML PKDD’17 (A, top 18%); the proceedings are now officially online. A preliminary version of that paper received the Best Paper Award at the Data Mining in Biomedical Informatics and Healthcare workshop ’16 and was published in the ICDM workshop proceedings (joint work with colleagues and friends from the University of Nottingham and the Bonn-Aachen International Center for Information Technology).

Research Interests

My main research interest is efficient parallelization of machine learning and data mining algorithms. Many of today’s parallel machine learning algorithms were developed for tightly coupled systems such as computing clusters or clouds. However, the volumes of data generated by machine-to-machine interaction, mobile phones, or autonomous vehicles surpass what can realistically be centralized, rendering traditional cloud computing approaches infeasible. To scale parallel machine learning to such volumes of data, computation needs to be pushed towards the data-generating devices. An efficient parallelization scales a machine learning algorithm (or, better, a class of algorithms) to large numbers of parallel instances, achieving a substantial speed-up while the resulting model has a quality similar to that of a hypothetical centrally computed one.

I’m interested in parallelizations both of classical machine learning algorithms that learn from batch data and of online learning and optimization algorithms; the latter are especially suited for distributed and decentralized learning from data streams. The approaches I seek to parallelize are often based on linear models or kernel methods, and recently I have started to look into decentralized deep learning. Application areas I often consider when looking for novel machine learning challenges are real-time services, financial analysis, and chemoinformatics.
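
To make the quality criterion above concrete, here is a minimal sketch of the general idea in Python: several learners train local models in parallel and periodically average their parameters, so that the averaged model stays close to a hypothetical centrally computed one. This is an illustration only, assuming a simple synthetic least-squares task; the function names and the fixed averaging period are my own simplifications, not the dynamic averaging protocol from our ECML PKDD’18 paper.

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    # One gradient step for linear least-squares on the local data batch.
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def train_decentralized(partitions, dim, rounds=100, sync_every=10):
    # One model per data-generating device; models are averaged only every
    # `sync_every` rounds, which keeps communication low while the averaged
    # model approximates a centrally trained one.
    models = [np.zeros(dim) for _ in partitions]
    for t in range(rounds):
        models = [local_step(w, X, y) for w, (X, y) in zip(models, partitions)]
        if (t + 1) % sync_every == 0:
            avg = np.mean(models, axis=0)          # the only communication step
            models = [avg.copy() for _ in models]
    return np.mean(models, axis=0)

# Toy usage: data from one linear model, split across 4 simulated devices.
rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
parts = []
for _ in range(4):
    X = rng.normal(size=(50, 5))
    parts.append((X, X @ w_true + 0.1 * rng.normal(size=50)))
w = train_decentralized(parts, dim=5)
print(np.linalg.norm(w - w_true))  # small: averaged model ~ central solution
```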

Curriculum Vitae – Highlights

Since 2010, I have been a data scientist at Fraunhofer IAIS, where I now lead Fraunhofer’s part in the EU project DiSIEM, managing a small research team. Moreover, I work as a project-specific consultant and researcher, e.g., for Volkswagen, DHL, and Hussel, and I design and deliver industrial trainings. Since 2014, I have also been a doctoral researcher at the University of Bonn, teaching graduate labs and seminars and supervising Master’s and Bachelor’s theses. Before that, I worked for ten years as a software developer at S4M – Solutions for Media GmbH.