NEWS! NEWS! NEWS! This website is under construction. Next up: Papers section.
Currently, I am working at TU Dortmund University as a researcher in the Image Analysis group, where I am working toward my Doctor of Engineering (Dr.-Ing.) in the field of machine learning (AI). Feel free to contact me anytime via the contact form linked below or through social media.
I apply machine learning to analyze remote sensing data, specializing in anomaly detection and in uncovering multi-modal correlations. My work leverages transformer architectures to build foundation models that interpret complex patterns across diverse remote-sensing datasets.
For example, I recently published research on the detection of landing sites and techno-signatures in lunar images. This work used state-of-the-art anomaly detection methods, such as PatchCore and AnoViT, to identify these signatures within large-scale lunar surface datasets (GitHub).
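The core idea behind PatchCore is a memory bank: patch-level features are extracted from anomaly-free reference images, and a test patch is scored by its distance to the nearest stored feature. Below is a minimal sketch of that idea in PyTorch; the backbone, layer choice, and shapes are illustrative assumptions, not the configuration used in the paper (real PatchCore also coreset-subsamples the bank and normalizes inputs with ImageNet statistics).

```python
import torch
from torchvision.models import resnet50, ResNet50_Weights

# PatchCore-style scoring sketch: build a memory bank of patch features
# from nominal (anomaly-free) tiles, then score test patches by their
# nearest-neighbour distance to the bank. Illustrative assumptions only.

backbone = resnet50(weights=ResNet50_Weights.DEFAULT).eval()

def patch_features(images: torch.Tensor) -> torch.Tensor:
    """Extract mid-level feature-map patches, returned as (N*H*W, C)."""
    with torch.no_grad():
        x = backbone.conv1(images)
        x = backbone.bn1(x)
        x = backbone.relu(x)
        x = backbone.maxpool(x)
        x = backbone.layer1(x)
        x = backbone.layer2(x)            # mid-level features, (N, C, H, W)
    n, c, h, w = x.shape
    return x.permute(0, 2, 3, 1).reshape(-1, c)

def anomaly_scores(test_imgs: torch.Tensor, memory_bank: torch.Tensor) -> torch.Tensor:
    """Per-image score = max over patches of distance to nearest bank entry."""
    feats = patch_features(test_imgs)                 # (P, C)
    dists = torch.cdist(feats, memory_bank)           # (P, M) pairwise distances
    per_patch = dists.min(dim=1).values               # nearest-neighbour distance
    return per_patch.reshape(test_imgs.shape[0], -1).max(dim=1).values

# Usage with dummy data standing in for lunar surface tiles:
nominal_imgs = torch.randn(8, 3, 224, 224)            # anomaly-free references
test_imgs = torch.randn(2, 3, 224, 224)
memory_bank = patch_features(nominal_imgs)
print(anomaly_scores(test_imgs, memory_bank))         # higher = more anomalous
```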
I have also developed a multi-modal model that converts grayscale images, normal maps, digital elevation maps, and albedo maps into one another. This work has demonstrated the potential for discovering valuable correlations between these diverse data types (to be presented at LPSC in March).
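As a rough illustration of how such an any-to-any setup can be wired (all module names, sizes, and the target-token mechanism below are hypothetical assumptions, not the published architecture): each modality gets its own patch embedding into a shared token space, a single transformer trunk processes the tokens together with a learned token naming the requested output modality, and a per-modality head decodes back to pixels.

```python
import torch
import torch.nn as nn

# Hypothetical any-to-any modality translator: one shared transformer trunk,
# one patch embedding and one decoding head per modality. Names and sizes
# are illustrative assumptions, not the published model.

MODALITIES = ["grayscale", "dem", "normals", "albedo"]
CHANNELS = {"grayscale": 1, "dem": 1, "normals": 3, "albedo": 1}
PATCH, DIM = 16, 256

class AnyToAny(nn.Module):
    def __init__(self):
        super().__init__()
        # Per-modality patch embeddings into a shared token space.
        self.embed = nn.ModuleDict({
            m: nn.Conv2d(CHANNELS[m], DIM, kernel_size=PATCH, stride=PATCH)
            for m in MODALITIES
        })
        # Learned token telling the trunk which modality to produce.
        self.target_token = nn.ParameterDict({
            m: nn.Parameter(torch.zeros(1, 1, DIM)) for m in MODALITIES
        })
        layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=8, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=6)
        # Per-modality decoders back to pixel space (unpatchify).
        self.decode = nn.ModuleDict({
            m: nn.ConvTranspose2d(DIM, CHANNELS[m], kernel_size=PATCH, stride=PATCH)
            for m in MODALITIES
        })

    def forward(self, x: torch.Tensor, src: str, tgt: str) -> torch.Tensor:
        tokens = self.embed[src](x)                      # (B, DIM, H/16, W/16)
        b, d, h, w = tokens.shape
        seq = tokens.flatten(2).transpose(1, 2)          # (B, HW, DIM)
        tag = self.target_token[tgt].expand(b, -1, -1)   # requested output modality
        seq = self.trunk(torch.cat([tag, seq], dim=1))[:, 1:]
        grid = seq.transpose(1, 2).reshape(b, d, h, w)
        return self.decode[tgt](grid)

# Example: predict a DEM from a grayscale tile.
model = AnyToAny()
gray = torch.randn(2, 1, 128, 128)
dem = model(gray, src="grayscale", tgt="dem")            # (2, 1, 128, 128)
```

In a setup like this, training would sample (source, target) pairs of co-registered tiles and regress the target modality, which is what allows a single trunk to serve every conversion direction.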
(For further projects, see the Papers / Projects page.)
TU Dortmund University, Image Analysis Group, Dortmund, Germany
Multimodal learning is an emerging research topic across multiple disciplines but has rarely been applied to planetary science. In this contribution, we identify that reflectance parameter estimation and image-based 3D reconstruction of lunar images can be formulated as a multimodal learning problem. We propose a single, unified transformer architecture trained to learn shared representations between multiple sources like grayscale images, digital elevation models, surface normals, and albedo maps. The architecture supports flexible translation from any input modality to any target modality. Predicting DEMs and albedo maps from grayscale images simultaneously solves the task of 3D reconstruction of planetary surfaces and disentangles photometric parameters and height information. Our results demonstrate that our foundation model learns physically plausible relations across these four modalities. Adding more input modalities in the future will enable tasks such as photometric normalization and co-registration.
Keywords: multimodal, lunar surface, digital elevation model (DEM), foundation model, 3D reconstruction, deep learning, any-to-any, height and gradient
"Unlocking the potential of remote sensing data to cultivate a deeper understanding of our ever-changing world."