A unified any-to-any generation model that translates between heterogeneous lunar image modalities using a single transformer architecture.
Overview
The lunar surface is observed through a wide variety of remote sensing instruments — reflectance maps, albedo images, thermal emission, topography (DTMs), slope maps, and more. Each modality captures different physical properties, yet not all are simultaneously available for any given region. This project presents a single, unified transformer capable of translating any available input modality (or combination thereof) into any target modality, enabling virtual data synthesis for data-scarce scenarios.
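The any-to-any design above can be illustrated with a toy sketch: per-modality encoders and decoders wrapped around one shared core, so any subset of available modalities can be fused and decoded into any target. This is a minimal NumPy stand-in, not the project's actual architecture — the shared core here is a single weight matrix where the real model uses a transformer, the fusion is a plain average rather than attention, and all names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical modality names, drawn from the modalities the project lists.
MODALITIES = ["reflectance", "albedo", "thermal", "dtm", "slope"]
LATENT = 32  # width of the shared latent space (toy value)


class AnyToAnyTranslator:
    """Toy any-to-any translator: per-modality encoders/decoders around
    one shared core that is reused for every modality pair."""

    def __init__(self, patch_dim: int):
        self.enc = {m: rng.standard_normal((patch_dim, LATENT)) * 0.1
                    for m in MODALITIES}
        self.dec = {m: rng.standard_normal((LATENT, patch_dim)) * 0.1
                    for m in MODALITIES}
        # Shared core weights: the stand-in for the single transformer.
        self.core = rng.standard_normal((LATENT, LATENT)) * 0.1

    def translate(self, patches: dict, target: str) -> np.ndarray:
        # Encode every available input modality into the shared space and
        # average them (a stand-in for attention-based fusion)...
        z = np.mean([patches[m] @ self.enc[m] for m in patches], axis=0)
        # ...run the shared core, then decode into the requested modality.
        z = np.tanh(z @ self.core)
        return z @ self.dec[target]


model = AnyToAnyTranslator(patch_dim=64)
# Two modalities happen to be available for this region; predict a third.
inputs = {"dtm": rng.standard_normal((8, 64)),
          "thermal": rng.standard_normal((8, 64))}
albedo_pred = model.translate(inputs, target="albedo")
print(albedo_pred.shape)  # (8, 64)
```

The point of the sketch is the weight sharing: only the thin encoder/decoder projections are modality-specific, so adding a modality does not require a new pairwise model.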
Key Contributions
Supported Modalities
Enhancing the search for anomalies on the Moon by evaluating state-of-the-art machine learning architectures against actual Apollo landing sites.
Overview
Uncovering anomalies on the lunar surface is a critical step in exploring the Moon's geological history. Finding these unique data points often requires slow, bias-prone manual inspection by domain experts. This research automates the search for technosignatures by using confirmed Apollo landing sites as ground truth to benchmark three state-of-the-art deep learning algorithms.
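The evaluation idea — score every patch for "unusualness" and check whether known landing sites rank highest — can be sketched with a simple baseline. The project does not name its three algorithms here, so the scorer below is a hypothetical stand-in (PCA reconstruction error fit on a reference region assumed anomaly-free), and the injected outlier is synthetic ground truth for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def anomaly_scores(survey: np.ndarray, background: np.ndarray,
                   n_components: int = 4) -> np.ndarray:
    """Score survey patches by reconstruction error under a low-rank
    model of ordinary terrain: patches the background model cannot
    reconstruct score high."""
    mean = background.mean(axis=0)
    # Principal directions of the (assumed anomaly-free) background.
    _, _, vt = np.linalg.svd(background - mean, full_matrices=False)
    basis = vt[:n_components]
    X = survey - mean
    recon = X @ basis.T @ basis
    return np.linalg.norm(X - recon, axis=1)

# Synthetic benchmark: a clean reference region, plus a survey region
# with one injected "hardware-like" outlier standing in for a confirmed
# landing site (the ground-truth label is its index).
background = rng.standard_normal((200, 64))
survey = rng.standard_normal((100, 64))
survey[17] += 5.0  # injected anomaly at a known index

scores = anomaly_scores(survey, background)
ranked = np.argsort(scores)[::-1]  # most anomalous first
print(int(ranked[0]))  # 17
```

With confirmed sites as labels, the same ranking can be turned into detection metrics (e.g. how highly each known site ranks), which is what makes an apples-to-apples benchmark of different detectors possible.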
Key Contributions
Technologies
More projects coming soon.