© TUHH, Martin Kunze



From September 25 to 27, 2023, the MLE Days for machine learning in engineering will take place for the third time on the campus of Hamburg University of Technology (TUHH). This year, a summer school is combined with a one-day startup challenge. Participation is free of charge.

The summer school offers insights into the world of machine learning with a focus on engineering. It features sessions on machine learning fundamentals, concrete application examples, and hands-on sessions for trying out and deepening what has been learned. Keynote talks round off the program. Three parallel tracks are offered, from which participants can choose contributions matching their interests and their experience with machine learning. The use cases range from sensor data and image processing through electrical engineering and materials science to aviation and maritime logistics. A poster session and an elevator pitch event allow participants to present their own work on machine learning topics. The best posters and pitches are selected by a jury and awarded prizes. At a networking event, participants can make contact with selected company partners and sponsors, from start-ups and medium-sized companies to large corporations. In the startup challenge, participants learn how to turn machine learning ideas into a company.

Further information:

Train Your Engineering Network


The presentation series “Train your engineering network” on diverse topics in machine learning addresses all interested persons at TUHH, from MLE partners, and from the Hamburg region in general. It aims to promote the exchange of information and knowledge between these persons, as well as their networking, in a relaxed atmosphere. In this way, the machine learning activities within MLE, at TUHH, and in the wider region become more visible, cooperation is fostered, and interested students gain insight into the field.


The organizers are Mijail Guillemard, Robert Kräuter, Gregor Vonbun-Feldbauer, and Jens-Peter M. Zemke.

Location and time:

Lectures are held online via Zoom on Mondays from 16:00, in English, during the winter semester 2023/24. General Zoom link for all lectures: Link

Content and speakers in the current semester:

1 | 16.10.23 | 16:00 - 17:00 | Bernhard Berger | Machine Learning in Optimisation with Applications to Material Science (Video)
2 | 23.10.23 | 16:00 - 17:00 | Henning Schwarz | Comparison of LSTM and Koopman-Operator approaches for Predicting Transient Ditching Loads (Video)
3 | 30.10.23 | 16:00 - 17:00 | Ana Almeida | Multivariate Time series: Data processing, Imputation and Forecasting (Slides)
4 | 06.11.23 | 16:00 - 17:00 | Alexander Itin | AI for engineering and science: selected use cases
5 | 13.11.23 | 16:00 - 17:00 | Yahya Saleh | Flow-induced bases and application to quantum molecular physics (Video)
6 | 20.11.23 | 16:00 - 17:00 | Sebastian Schibsdat & Denys Romanenko | Self-acting anomaly detection and quality estimation for semi-automated drilling with machine learning methods (Video)
7 | 27.11.23 | 16:00 - 17:00 | Moritz Braun | Generalizability and explainability of machine learning models for fatigue strength prediction of welded joints (Video)
8 | 04.12.23 | 16:00 - 17:00 | Abdul Qadir Ibrahim | Parareal with a physics informed neural network as a coarse propagator (Video)
- | 25.12.23 | - | - | Holiday - Merry Christmas!
- | 01.01.24 | - | - | Holiday - Happy New Year!
11 | 08.01.24 | 16:00 - 17:00 | Frank Röder | Hindsight Instruction Grounding in Reinforcement Learning (Video)
12 | 15.01.24 | 16:00 - 17:00 | Lars Stietz | Refinement of Simulations in Particle Physics (Video)
13 | 22.01.24 | 16:00 - 17:00 | Emin Nakilcioglu | Parameter Efficient Fine Tuning for a Domain-Specific Automatic Speech Recognition (Video)
14 | 29.01.24 | 16:00 - 17:00 | Robert Kräuter | Development of a black-box soft sensor for a fluidization process (Video)


  1. Bernhard Berger: Machine Learning in Optimisation with Applications to Material Science.
    Many real-world projects aim at finding optimal solutions within a problem-specific search space. The optimisation task can be hard in itself, but often the objective function is not even known. In such cases, possible solutions must be tested experimentally for their suitability. In many domains, such as material science, these tests are expensive and time-consuming. Machine learning can bridge this gap by giving hints on the performance of a proposed solution. In this talk, I will delve into the problem of surrogate functions, how they can be learned, and how their prediction quality can be used to steer the optimisation process. I will demonstrate this approach using EvoAl, a DSL-based optimisation framework.
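The surrogate idea described above can be sketched in a few lines. This is a minimal illustration, not EvoAl: the "expensive experiment" is mocked by a cheap analytic function, and a simple polynomial least-squares fit stands in for a learned surrogate model.

```python
import numpy as np

# Hypothetical expensive experiment: in reality each evaluation might take
# hours in a materials lab; here it is mocked by a cheap analytic function.
def expensive_experiment(x):
    return (x - 1.3) ** 2 + 0.5 * np.sin(5 * x)

# Evaluate a small initial design of candidates with the real experiment.
X = np.linspace(-2.0, 4.0, 8)
y = expensive_experiment(X)

# Fit a cheap surrogate (polynomial least squares) to the few expensive samples.
coeffs = np.polyfit(X, y, deg=4)
surrogate = np.poly1d(coeffs)

# Rank many untested candidates on the surrogate instead of the real experiment;
# only the most promising candidate would be sent to the lab next.
candidates = np.linspace(-2.0, 4.0, 400)
best = candidates[np.argmin(surrogate(candidates))]
print(f"surrogate suggests x = {best:.2f}")
```

In a full framework this loop repeats: the new expensive measurement is added to the training set, the surrogate is refit, and its prediction uncertainty can steer where to sample next.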

  2. Henning Schwarz: Comparison of LSTM and Koopman-Operator approaches for Predicting Transient Ditching Loads.
    This research is concerned with building machine learning (ML) models to predict dynamic ditching loads on aircraft fuselages. The employed learning procedure is structured into two parts, the reconstruction of the spatial loads using a convolutional autoencoder (CAE) and the transient evolution of these loads in a subsequent part. Both parts are simultaneously (jointly) learned in a global network. To predict transient load evolution, the CAE is combined with either different long short-term memory (LSTM) networks or a Koopman-operator based method. To this end, both approaches advance the solution in time based on information from two previous and the present time step. The training data is compiled by applying an extension of the momentum method of von Kármán and Wagner to simulate the loads on a generic DLR-D150 fuselage model at various approach conditions. Results indicate that both baseline methods, i.e., the LSTM and the Koopman-based approach, are able to perform accurate ditching load predictions. Predictive differences occur when looking at the different options to capture the temporal evolution of loads and will be outlined in greater detail.
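The shared time-stepping structure mentioned in the abstract can be written compactly. The notation here is mine, not the speakers': \(z_n\) denotes the CAE latent code at time step \(n\), and \(\Phi\) is the learned update map.

```latex
z_{n+1} \;=\; \Phi\!\left(z_{n},\, z_{n-1},\, z_{n-2}\right),
```

where \(\Phi\) is realized either by an LSTM network or by a Koopman-operator based model that is (approximately) linear in a lifted space of observables.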

  3. Ana Almeida: Multivariate Time series: Data processing, Imputation and Forecasting.
    Data is a valuable tool for decision-makers, helping them make informed decisions. We can find multivariate time series in several contexts, such as finances, smart cities, and health. This type of data can bring additional challenges. This presentation will discuss the key concepts and techniques involved in working with multivariate time series data. Specifically, we will focus on the steps of data processing, imputation, and forecasting.
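The processing steps named above (imputation, then forecasting) can be illustrated on toy data. This sketch uses deliberately naive choices, forward-fill imputation and a persistence forecast, as placeholders for the more sophisticated methods discussed in the talk; the sensor values are invented.

```python
import numpy as np

# Toy multivariate time series (rows = time steps, columns = sensors),
# with missing values encoded as NaN -- purely illustrative data.
series = np.array([
    [1.0,    10.0],
    [2.0,    np.nan],
    [np.nan, 14.0],
    [4.0,    16.0],
])

# Simple per-variable imputation: forward-fill, falling back to the
# column mean when the very first value is missing.
imputed = series.copy()
for col in range(imputed.shape[1]):
    col_mean = np.nanmean(imputed[:, col])
    for t in range(imputed.shape[0]):
        if np.isnan(imputed[t, col]):
            imputed[t, col] = imputed[t - 1, col] if t > 0 else col_mean

# A naive one-step forecast: persistence (the next step repeats the last).
forecast = imputed[-1]
print(imputed)
print("one-step forecast:", forecast)
```

Real pipelines would replace both placeholders, e.g. with model-based imputation and a trained forecaster, but the data-flow (clean, impute, then predict) is the same.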

  4. Alexander Itin: AI for engineering and science: selected use cases.
    A selection of works connecting AI, light-matter interactions, and dynamical systems theory will be presented, as well as related problems where AI could help us in the future. Light-matter interactions are considered in photonic crystals and metamaterials, “real” crystals irradiated by lasers, and artificial “crystals of light”. Can we repeatedly drop a laser from the top of the Bremen tower (and why)? Can we design a particle accelerator on the tip of a pen? Can we make interstellar travel, at least to nearby stars? These are some of the main questions I hope to consider. If time allows, I will share my experience of working with Bosch Research and of studying at the DESY Startup School recently (where we designed a startup that shall be not #LikeABosch, but even better!). Optional questions are: Can AI predict a failure of a coffee machine, a particle accelerator, or the International Space Station? What about predicting a catastrophic earthquake, or the collapse of a society?

  5. Yahya Saleh: Flow-induced bases and application to quantum molecular physics.
    In analogy to the use of normalizing flows to augment the expressivity of base probability distributions, I propose to augment the expressivity of bases of Hilbert spaces via composition with normalizing flows. I show that the resulting sequences are also bases of the Hilbert space under necessary and sufficient conditions on the flow. This lays a foundation for a theory of spectral learning, a nonlinear extension of spectral methods for solving differential equations. As an application I solve the vibrational molecular Schrödinger equation. The proposed numerical scheme results in several orders of magnitude increased accuracy over the use of standard spectral methods.
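A sketch of the construction, in my own notation rather than the speaker's: given an orthonormal basis \(\{\varphi_n\}\) of \(L^2\) and a diffeomorphism \(f\) (the normalizing flow), the flow-composed functions with a Jacobian weight

```latex
\psi_n(x) \;=\; \varphi_n\!\left(f(x)\right)\,\bigl|\det \mathrm{D}f(x)\bigr|^{1/2},
\qquad
\langle \psi_m, \psi_n \rangle_{L^2}
= \int \varphi_m\!\left(f(x)\right)\,\overline{\varphi_n\!\left(f(x)\right)}\,
  \bigl|\det \mathrm{D}f(x)\bigr|\,\mathrm{d}x
= \langle \varphi_m, \varphi_n \rangle_{L^2}
```

remain orthonormal by the change-of-variables formula; the conditions on \(f\) in the talk govern when such sequences are genuinely bases.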

  6. Sebastian Schibsdat & Denys Romanenko: Self-acting anomaly detection and quality estimation for semi-automated drilling with machine learning methods.
    Due to the high number of rivet holes per aircraft produced, automated monitoring of the drilling process promises a significant reduction in manual inspection. Advances in the sensor technology of new machine tools greatly expand the available data basis, so that self-learning methods can be applied to holistic process monitoring.
    In this presentation, the authors present approaches to anomaly detection and quality control in the drilling process. Supervised, semi-supervised, and unsupervised methods were used for anomaly detection and compared with classical quality control charts. In addition to engineered feature extraction, a new method was used to extract features with a CNN. For predicting part quality, different classification and regression methods were compared, which achieved differing prediction quality.
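The classical control-chart baseline mentioned above can be sketched generically. The data and thresholds here are illustrative only; the actual work operates on drilling-process signals and CNN-extracted features.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative process feature (e.g. a per-hole torque statistic)
# with one injected anomaly at index 120 -- synthetic data.
feature = rng.normal(loc=5.0, scale=0.2, size=200)
feature[120] = 7.5  # simulated anomalous hole

# Shewhart-style control chart: estimate limits from an in-control
# reference window, then flag points outside mean +/- 3 sigma.
reference = feature[:100]
mu, sigma = reference.mean(), reference.std()
anomalies = np.flatnonzero(np.abs(feature - mu) > 3 * sigma)

print("flagged indices:", anomalies)
```

Unsupervised ML detectors generalize this idea by learning the "in-control" region in a high-dimensional feature space instead of per-feature limits.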

  7. Moritz Braun: Generalizability and explainability of machine learning models for fatigue strength prediction of welded joints.
    Fatigue is the main cause of structural failure of large engineering structures. Welds, with their geometry leading to high local stresses, are especially vulnerable. Traditional fatigue assessment methods, which factor in material properties, load levels, and idealized weld geometries, can be inaccurate. To address this, data-driven approaches, using machine learning (ML) algorithms and 3D laser scanners for weld geometry, have been successful in predicting fatigue life for butt-welded joints; however, it remains uncertain whether these methods are adaptable to different welding techniques and welds with imperfections. This presentation addresses the generalizability of machine learning approaches for fatigue strength assessment of welded joints by evaluating data that differs from the training dataset in various ways. The new data contains results for a different welding procedure and for welded joints with imperfections and weld defects. By comparing prediction accuracies between the original data and the new data, the study aims to determine the adaptability of the data-driven approach to new, divergent data. The focus is on assessing how anomalous weld geometries impact prediction accuracy, ultimately establishing the limitations of applying this method to varying data. To this end, explainable artificial intelligence is applied.

  8. Abdul Qadir Ibrahim: Parareal with a physics informed neural network as a coarse propagator.
    Parallel-in-time algorithms provide an additional layer of concurrency for the numerical integration of models based on time-dependent differential equations. Methods like Parareal, which parallelize across multiple time steps, rely on a computationally cheap and coarse integrator to propagate information forward in time, while a parallelizable expensive fine propagator provides accuracy. Typically, the coarse method is a numerical integrator using lower resolution, reduced order or a simplified model. Our paper proposes to use a physics-informed neural network (PINN) instead. We demonstrate for the Black-Scholes equation, a partial differential equation from computational finance, that Parareal with a PINN coarse propagator provides better speedup than a numerical coarse propagator. Training and evaluating a neural network are both tasks whose computing patterns are well suited for GPUs. By contrast, mesh-based algorithms with their low computational intensity struggle to perform well. We show that moving the coarse propagator PINN to a GPU while running the numerical fine propagator on the CPU further improves Parareal’s single-node performance. This suggests that integrating machine learning techniques into parallel-in-time integration methods and exploiting their differences in computing patterns might offer a way to better utilize heterogeneous architectures.
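For reference, the standard Parareal correction iteration reads

```latex
U_{n+1}^{k+1} \;=\; \mathcal{G}\!\left(U_{n}^{k+1}\right)
\;+\; \mathcal{F}\!\left(U_{n}^{k}\right)
\;-\; \mathcal{G}\!\left(U_{n}^{k}\right),
```

where \(\mathcal{G}\) is the cheap, serial coarse propagator (here replaced by a PINN) and \(\mathcal{F}\) the expensive fine propagator evaluated in parallel across time slices; the iteration index \(k\) counts Parareal sweeps.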

  9. n/a

  10. n/a

  11. Frank Röder: Hindsight Instruction Grounding in Reinforcement Learning.
    This presentation addresses the challenge of sample inefficiency in robotic reinforcement learning with sparse rewards and natural language goal representations. We introduce a mechanism for hindsight instruction replay, leveraging expert feedback, and a seq2seq model for generating linguistic hindsight instructions. Remarkably, our findings demonstrate that self-supervised language generation, where the agent autonomously generates linguistic instructions, significantly enhances learning performance. These results underscore the promising potential of hindsight instruction grounding in reinforcement learning for robotics.

  12. Lars Stietz: Refinement of Simulations in Particle Physics.
    In the realm of particle physics, large amounts of data are produced in particle collision experiments such as the CERN Large Hadron Collider (LHC) to explore the subatomic structure of matter. Simulations of the particle collisions are needed to analyse the data recorded at the LHC. These simulations rely on Monte Carlo techniques to handle the high dimensionality of the data. Fast simulation methods (FastSim) have been developed to cope with the significant increase of data that will be produced in the coming years, providing simulated data 10 times faster than the conventional simulation methods (FullSim) at the cost of reduced accuracy. The currently achieved accuracy of FastSim prevents it from replacing FullSim.
    We propose a machine learning approach to refine high-level observables reconstructed from FastSim with a regression network inspired by the ResNet approach. We combine the mean squared error (MSE) loss and the maximum mean discrepancy (MMD) loss. The MSE (MMD) compares pairs (ensembles) of data samples. We examine the strengths and weaknesses of each individual loss function and combine them as a Lagrangian optimization problem.
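The two loss terms can be sketched on synthetic 1-D data. This is a simplified illustration: a fixed scalar weighting stands in for the Lagrangian formulation in the talk, and a single-bandwidth Gaussian kernel is assumed for the MMD.

```python
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    # Pairwise Gaussian (RBF) kernel between two 1-D sample sets.
    d = a[:, None] - b[None, :]
    return np.exp(-(d ** 2) / (2 * bandwidth ** 2))

def mmd2(x, y, bandwidth=1.0):
    # Biased estimate of the squared maximum mean discrepancy:
    # compares the two ensembles as distributions, not sample-by-sample.
    kxx = gaussian_kernel(x, x, bandwidth).mean()
    kyy = gaussian_kernel(y, y, bandwidth).mean()
    kxy = gaussian_kernel(x, y, bandwidth).mean()
    return kxx + kyy - 2 * kxy

def combined_loss(pred, target, lam=0.5):
    # MSE compares paired samples; MMD compares the whole ensembles.
    mse = np.mean((pred - target) ** 2)
    return (1 - lam) * mse + lam * mmd2(pred, target)

rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, 500)
print("loss(identical):", combined_loss(target, target))
print("loss(shifted):  ", combined_loss(target + 1.0, target))
```

The pairing matters: a prediction that matches the target distribution but scrambles the pairing has low MMD yet high MSE, which is why the two terms complement each other.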

  13. Emin Nakilcioglu: Parameter Efficient Fine Tuning for a Domain-Specific Automatic Speech Recognition.
    With the introduction of early pre-trained language models such as Google’s BERT and various early GPT models, we have seen an ever-increasing excitement and interest in foundation models. To leverage existing pre-trained foundation models and adapt them to specific tasks or domains, these models need to be fine-tuned using domain-specific data. However, fine-tuning can be quite resource-intensive and costly as millions of parameters will be modified as part of training.
    Parameter-efficient fine-tuning (PEFT) is a technique designed to fine-tune models while minimizing the need for extensive resources and cost. It achieves this efficiency by freezing some of the layers of the pre-trained model and only fine-tuning the last few layers that are specific to the downstream task. With the help of PEFT, we can achieve a balance between retaining valuable knowledge from the pre-trained model and adapting it effectively to the downstream task with fewer parameters.
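The freezing strategy described above can be sketched framework-agnostically. The layer names and parameter counts below are invented for illustration; in practice one would set `requires_grad` flags on the parameters of a real model.

```python
# Conceptual sketch of layer freezing (hypothetical model, invented sizes).
layers = [
    {"name": "embedding", "params": 30_000_000, "trainable": False},
    {"name": "encoder_1", "params": 10_000_000, "trainable": False},
    {"name": "encoder_2", "params": 10_000_000, "trainable": False},
    {"name": "encoder_3", "params": 10_000_000, "trainable": False},
    {"name": "task_head", "params":    500_000, "trainable": False},
]

def freeze_all_but_last(layers, n_trainable=2):
    # Freeze the pre-trained backbone; fine-tune only the last few layers.
    for layer in layers:
        layer["trainable"] = False
    for layer in layers[-n_trainable:]:
        layer["trainable"] = True
    return layers

freeze_all_but_last(layers)
total = sum(l["params"] for l in layers)
trainable = sum(l["params"] for l in layers if l["trainable"])
print(f"trainable fraction: {trainable / total:.1%}")
```

Only a small fraction of the parameters receives gradient updates, which is where the resource savings come from; methods like adapters or low-rank updates shrink that fraction further.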

  14. Robert Kräuter: Development of a black-box soft sensor for a fluidization process.
    Solids water content is an important particle property in many applications of process engineering. Its influence on the quality of pharmaceutical formulations makes an in-line measurement of the water content especially desirable in fluidization processes. However, currently available measurement techniques are difficult to calibrate and scarcely applicable in real fluidized beds. A promising strategy for in-line monitoring of the water content is thus soft sensing, a method that expresses the targeted quantity as a correlation of other, more reliable measurements. In this talk, we present the development of such a soft sensor using various black-box models. Our focus lies on strategies to reduce overfitting through feature engineering and hyperparameter tuning. These models are designed for processing real experimental data from a turbulent process, addressing challenges in data filtering, undersampling, outlier detection, and uncertainty propagation.
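The soft-sensing principle, expressing a hard-to-measure target through reliable auxiliary measurements, can be sketched with synthetic data. Everything below is invented for illustration: the "measurements" are random features, and plain linear least squares stands in for the black-box models of the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for reliable in-line measurements (e.g. gas temperatures,
# humidity); real process data would be filtered and outlier-screened first.
n = 300
X = rng.normal(size=(n, 3))
true_w = np.array([0.8, -0.5, 0.3])
# Hard-to-measure target (water content), correlated with the measurements.
y = X @ true_w + 0.05 * rng.normal(size=n)

# Black-box soft sensor: simplest possible choice, linear least squares.
# A held-out split guards against the overfitting discussed above.
X_train, X_test = X[:200], X[200:]
y_train, y_test = y[:200], y[200:]
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

rmse = np.sqrt(np.mean((X_test @ w - y_test) ** 2))
print(f"held-out RMSE: {rmse:.3f}")
```

The held-out error is the quantity that feature engineering and hyperparameter tuning aim to minimize without fitting the sensor to noise in the training data.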

Past semesters:

Earlier activities

Earlier activities can be found in the archive.