
Train Your Engineering Network

Goal:

The talk series “Train your engineering network” on a wide range of machine learning topics is aimed primarily at the research staff of the TUHH and, more generally, in the Hamburg region. It is intended to foster the exchange of information among these people and their networking in an informal atmosphere. In this way, the machine learning activities within the TUHH and its environment become more visible, collaborations are encouraged, and interested students also gain insight.

Contact:

The organizers are Mijail Guillemard, Gregor Vonbun-Feldbauer, and Jens-Peter M. Zemke.

Location and time:

In the current summer semester 2022, the talks take place online on Mondays starting at 16:00, in German or English depending on the announcement (see the talk title).

Content and speakers in the current semester:

| # | Date | Time | Speaker | Topic |
|---|------|------|---------|-------|
| 1 | 25.04.22 | 16:00 - 17:00 | Florian Schneider | Transformer Models for Cross-Modal Text-Image Retrieval (Video) |
| 2 | 02.05.22 | 16:00 - 17:00 | Lina Fesefeldt | Second Order Information in Neural Network Training |
| 3 | 09.05.22 | - | - | - |
| 4 | 16.05.22 | 16:00 - 17:00 | Daniel Stolpmann | Distributed Reinforcement Learning on a High Performance Cluster Using Ray |
| - | 23.05.22 | - | - | Holiday |
| 5 | 30.05.22 | 16:00 - 17:00 | Cornelia Hofsäß | Pulmonary Embolus Detection with Dual-Energy CT Data Augmentation |
| - | 06.06.22 | - | - | Holiday |
| 6 | 13.06.22 | - | - | - |
| 7 | 20.06.22 | 16:00 - 17:00 | Mijail Guillemard | Tangential bundles, curvature and persistent homology for material defect detection |
| 8 | 27.06.22 | 16:00 - 17:00 | N.N. | N.N. |
| 9 | 04.07.22 | 16:00 - 17:00 | Trishita Banerjee | Machine Learning in Security |
| 10 | 11.07.22 | 16:00 - 17:00 | Denys Romanenko | Holistic process monitoring with machine learning classification methods using internal machine sensors for semi-automatic drilling |

Abstracts:

  1. Florian Schneider: Transformer Models for Cross-Modal Text-Image Retrieval.
    When the Transformer architecture was first introduced, its target research domain was Natural Language Processing, where the famous BERT model pushed the boundaries in several downstream tasks. Motivated by these breakthroughs, other research fields like Computer Vision started to leverage Transformers with great success.
    In state-of-the-art cross-modal text-image retrieval, where images are searched via textual queries, the acquired knowledge from both fields is combined to significantly improve the quality of the retrieved images. Furthermore, employing Transformer models enables computing not only global text-image similarity but also fine-grained word-region alignments, which allows in-image search, useful for many real-world applications (a minimal retrieval sketch is given after the abstracts).

  2. Lina Fesefeldt: Second Order Information in Neural Network Training.
    When choosing an optimizer for your neural network, you will probably consider methods based only on first-order derivatives, such as Stochastic Gradient Descent or Adam. Recent research, however, suggests that second-order optimizers are also applicable to neural network training. This is made possible by implicit Hessian-vector products (a minimal sketch is given after the abstracts).
    This talk tackles the problem of minimizing your cost function from a mathematical point of view. While discussing the use of second-order optimizers in neural network training, we will stumble across some interesting properties of neural networks and their cost functions.

  3. N/A

  4. Daniel Stolpmann: Distributed Reinforcement Learning on a High Performance Cluster Using Ray.
    Ray is a Python framework for developing distributed applications. While not limited to it, it has a strong focus on Machine Learning and supports it with a variety of included libraries. For example, Ray RLlib provides implementations of many common Reinforcement Learning algorithms, which reduces development time and allows the user to quickly compare different approaches to best solve the problem at hand. As Reinforcement Learning algorithms can be very sensitive to the choice of hyperparameters, Ray Tune can be used to tune them via optimization-based methods. While this typically requires a large number of simulation runs, Ray can speed up the process by running them in parallel on multiple processors or machines.
    This talk gives an introduction to Ray, its libraries, and how they can be used to seamlessly scale Reinforcement Learning applications from a laptop to the High Performance Cluster (HPC) of the TUHH (a minimal RLlib/Tune sketch is given after the abstracts).

  5. Cornelia Hofsäß: Pulmonary Embolus Detection with Dual-Energy CT Data Augmentation.
    3D segmentation U-Nets are trained for pulmonary embolus (PE) detection on three different data sets. We investigate the impact of the training data set on the generalization capabilities and use dual-energy CT data augmentation to increase performance.

  6. N/A

  7. Mijail Guillemard: Tangential bundles, curvature and persistent homology for material defect detection.
    An application of persistent homology to the detection of material defects is presented. We combine tangent bundles and curvature with persistent homology in the context of machine learning (a small persistent-homology sketch is given after the abstracts).

  8. N.N.: N.N.
    N.N.

  9. Trishita Banerjee: Machine Learning in Security.
    TBA.

  10. Denys Romanenko: Holistic process monitoring with machine learning classification methods using internal machine sensors for semi-automatic drilling.
    Since one third of the rivet holes during aircraft assembly are produced with semi-automatic drilling units, reliable and efficient methods for process state prediction using Machine Learning (ML) classification methods were developed in this work for this application. Process states were varied holistically in the experiments, and motor current and machine vibration data were gathered. These data were used as input to identify the optimal combination of five data feature preparation and nine ML methods for process state prediction. K-nearest-neighbour, decision tree and artificial neural network models provided reliable predictions of the process states: workpiece material, rotational speed, feed, peck-feed amplitude and lubrication state. Data preprocessing through sequential feature selection and principal component analysis proved favourable for these applications. The prediction of the workpiece clamping distance revealed frequent misclassifications and was thus not reliable (a pipeline sketch is given after the abstracts).
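
Illustrative code sketches for some of the talks above:

For Florian Schneider's talk, the sketch below shows only the basic retrieval step: ranking a handful of images against a textual query by global text-image similarity with a pretrained vision-language Transformer. The Hugging Face transformers library, the CLIP checkpoint "openai/clip-vit-base-patch32", and the image file names are assumptions made for illustration, not the models or data from the talk; fine-grained word-region alignment is not covered.

```python
# Illustrative sketch: rank images for a text query by global text-image
# similarity with a pretrained CLIP model (assumed checkpoint, toy image list).
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

query = "a dog catching a frisbee"
image_paths = ["img1.jpg", "img2.jpg", "img3.jpg"]  # hypothetical files
images = [Image.open(p) for p in image_paths]

inputs = processor(text=[query], images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_text has shape (num_texts, num_images): one similarity score per image.
scores = outputs.logits_per_text[0]
for idx in scores.argsort(descending=True):
    print(image_paths[idx], float(scores[idx]))
```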
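
For Lina Fesefeldt's talk, the sketch below illustrates the implicit Hessian-vector products mentioned in the abstract: the product Hv is obtained by differentiating twice, without ever forming the Hessian. The toy network, the data, and the use of PyTorch are placeholders chosen for illustration.

```python
# Sketch: Hessian-vector product H v via double backpropagation,
# without materializing the Hessian. Model and data are toy placeholders.
import torch

model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.Tanh(), torch.nn.Linear(8, 1))
x, y = torch.randn(16, 4), torch.randn(16, 1)
loss = torch.nn.functional.mse_loss(model(x), y)

params = list(model.parameters())
grads = torch.autograd.grad(loss, params, create_graph=True)  # first-order gradients, graph kept

v = [torch.randn_like(p) for p in params]                    # arbitrary direction vector
grad_dot_v = sum((g * vi).sum() for g, vi in zip(grads, v))  # scalar g^T v
hvp = torch.autograd.grad(grad_dot_v, params)                # d(g^T v)/dp = H v

print([h.shape for h in hvp])
```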
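
For Daniel Stolpmann's talk, the sketch below shows the pattern described in the abstract: training an RLlib algorithm through Ray Tune with a small hyperparameter sweep. The PPO algorithm, the CartPole-v1 environment, the worker count, and the stopping criterion are illustrative assumptions; on a cluster one would typically attach to a running head node instead of starting Ray locally.

```python
# Sketch: distributed RL with Ray RLlib driven by Ray Tune (illustrative settings).
import ray
from ray import tune

ray.init()  # local Ray; on a cluster, e.g. ray.init(address="auto") to attach to the head node

tune.run(
    "PPO",                                     # an RLlib algorithm, chosen as an example
    config={
        "env": "CartPole-v1",                  # placeholder environment
        "num_workers": 4,                      # parallel rollout workers
        "framework": "torch",
        "lr": tune.grid_search([1e-4, 1e-3]),  # tiny hyperparameter sweep via Ray Tune
    },
    stop={"training_iteration": 20},
)
```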
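
For Mijail Guillemard's talk, the sketch below covers only the persistent-homology ingredient: computing persistence diagrams of a point cloud, here a noisy circle, with the ripser package. The data and the choice of library are assumptions for illustration; tangent bundles, curvature, and the defect-detection pipeline are not reproduced.

```python
# Sketch: persistence diagrams of a noisy circle with ripser (illustrative only).
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
points = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((200, 2))

diagrams = ripser(points, maxdim=1)["dgms"]  # H0 and H1 persistence diagrams
# A long-lived H1 (birth, death) pair reflects the loop in the data; such pairs
# could serve as topological features for a downstream defect classifier.
for dim, dgm in enumerate(diagrams):
    print(f"H{dim}: {len(dgm)} features")
```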
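
For Denys Romanenko's talk, the sketch below mirrors the preprocessing and classification steps named in the abstract (sequential feature selection, principal component analysis, k-nearest-neighbour classification) using scikit-learn. The synthetic data, feature counts, and hyperparameters are placeholders; the actual sensor features and evaluation come from the study, not from this sketch.

```python
# Sketch: sequential feature selection + PCA + k-NN classification with scikit-learn,
# using synthetic stand-ins for the motor-current / vibration features.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=40, n_informative=10,
                           n_classes=3, random_state=0)  # placeholder data

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SequentialFeatureSelector(KNeighborsClassifier(), n_features_to_select=15)),
    ("pca", PCA(n_components=8)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
])

scores = cross_val_score(pipe, X, y, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```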

Past semesters: