🤖 AI Summary
This work addresses the challenge of efficiently compressing and representing Earth observation data, which exhibit significant heterogeneity across sensor types and spectral channel configurations. To handle this heterogeneity, the authors propose EO-VAE, a unified tokenizer based on a variational autoencoder augmented with a dynamic hypernetwork. EO-VAE is presented as the first model able to flexibly encode and reconstruct imagery with arbitrary combinations of spectral channels within a single framework, eliminating the need for modality-specific training regimes; sharing one encoder across sensors also improves representation efficiency over per-modality tokenizers. Evaluated on the TerraMesh dataset, EO-VAE achieves higher reconstruction fidelity than the existing TerraMind tokenizers, establishing a more general and effective foundation for generative modeling in remote sensing.
📝 Abstract
State-of-the-art generative image and video models rely heavily on tokenizers that compress high-dimensional inputs into more efficient latent representations. While this paradigm has revolutionized RGB generation, Earth observation (EO) data presents unique challenges due to diverse sensor specifications and variable spectral channels. We propose EO-VAE, a multi-sensor variational autoencoder designed to serve as a foundational tokenizer for the EO domain. Unlike prior approaches that train separate tokenizers for each modality, EO-VAE utilizes a single model to encode and reconstruct flexible channel combinations via dynamic hypernetworks. Our experiments on the TerraMesh dataset demonstrate that EO-VAE achieves superior reconstruction fidelity compared to the TerraMind tokenizers, establishing a robust baseline for latent generative modeling in remote sensing.
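The core idea, an encoder whose input projection is generated by a hypernetwork conditioned on the channel configuration, can be illustrated with a minimal sketch. The paper does not specify its architecture details, so everything below (the wavelength-based channel descriptors, the dimensions, the linear hypernetwork) is a hypothetical toy, not EO-VAE's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8  # fixed latent width shared by all sensors (assumed value)
EMBED_DIM = 4   # size of the per-channel descriptor (assumed value)

def channel_descriptor(wavelength_nm: float) -> np.ndarray:
    """Describe a spectral channel by its centre wavelength (toy Fourier features)."""
    freqs = np.array([1.0, 2.0, 4.0, 8.0]) / 1000.0
    return np.sin(wavelength_nm * freqs)

# Hypernetwork: here just a linear map from a channel descriptor to that
# channel's 1x1 projection weights. A real model would use a learned MLP.
W_hyper = rng.standard_normal((EMBED_DIM, LATENT_DIM)) * 0.1

def encode(image: np.ndarray, wavelengths: list) -> np.ndarray:
    """Project an image with an arbitrary channel set into a fixed latent width.

    image: (C, H, W); wavelengths: length-C list of centre wavelengths in nm.
    The hypernetwork emits one weight vector per channel, so a single encoder
    handles any number or combination of spectral bands.
    """
    # (C, LATENT_DIM): one generated weight row per input channel
    weights = np.stack([channel_descriptor(wl) @ W_hyper for wl in wavelengths])
    # Per-pixel weighted sum over channels -> (LATENT_DIM, H, W)
    return np.einsum('chw,cd->dhw', image, weights)

# The same encoder accepts a 4-band and a 12-band input without retraining.
rgb_nir = encode(rng.standard_normal((4, 16, 16)), [490.0, 560.0, 665.0, 842.0])
s2_full = encode(rng.standard_normal((12, 16, 16)), list(np.linspace(443, 2190, 12)))
assert rgb_nir.shape == s2_full.shape == (LATENT_DIM, 16, 16)
```

The point of the sketch is the shape contract: inputs with different channel counts map to the same latent width, which is what lets one tokenizer replace per-modality tokenizers. Reconstruction would mirror this with a hypernetwork-generated output projection.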