HyperPointFormer: Multimodal Fusion in 3D Space with Dual-Branch Cross-Attention Transformers

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing remote sensing multimodal classification methods typically rasterize LiDAR or photogrammetric 3D point clouds into 2D representations, leading to geometric information loss, an inability to directly model spatial structure, and difficulty in generating native 3D semantic predictions. To address this, we propose the first end-to-end 3D point-level spectral–geometric multimodal fusion framework tailored to urban scenes. Our approach abandons 2D projection and instead introduces a dual-branch Transformer architecture with fully 3D cross-modal cross-attention, enabling multi-scale, point-wise joint learning of geometric and spectral features. It produces spatially consistent, purely 3D semantic segmentation outputs while supporting distortion-free projection onto 2D maps. Evaluated on three major benchmarks (DFC2018, ISPRS Vaihingen 3D, and DFC2019), our method achieves accuracy that is state-of-the-art or comparable to the best 2D-based approaches, significantly enhancing 3D perception and prediction consistency.
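
The summary's point about "distortion-free projection onto 2D maps" is easy to picture as a small rasterization step over per-point predictions. The helper below is a minimal sketch, not code from the paper: the function name, the cell size, and the highest-point-per-cell rule are all illustrative assumptions.

```python
import numpy as np

def project_labels_to_2d(xyz: np.ndarray, labels: np.ndarray,
                         cell_size: float = 0.5) -> np.ndarray:
    """Rasterize per-point 3D class predictions onto a 2D label map.

    Hypothetical helper (not from the paper's code): each grid cell takes
    the label of its highest point, a simple common choice in urban scenes.
    xyz: (N, 3) point coordinates; labels: (N,) integer class ids.
    """
    ij = np.floor((xyz[:, :2] - xyz[:, :2].min(axis=0)) / cell_size).astype(int)
    h, w = ij.max(axis=0) + 1
    label_map = np.full((h, w), -1, dtype=int)   # -1 marks empty cells
    top_z = np.full((h, w), -np.inf)
    for (i, j), z, lab in zip(ij, xyz[:, 2], labels):
        if z > top_z[i, j]:                      # keep the highest point per cell
            top_z[i, j], label_map[i, j] = z, lab
    return label_map

# Toy usage: 5 random points with 3 classes
pts = np.random.rand(5, 3) * 10
lbl = np.random.randint(0, 3, 5)
print(project_labels_to_2d(pts, lbl, cell_size=2.0))
```

Note that the reverse direction (recovering 3D predictions from a 2D map) is not possible, which is the asymmetry the abstract below highlights.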

📝 Abstract
Multimodal remote sensing data, including spectral imagery and lidar or photogrammetric point clouds, is crucial for achieving satisfactory land-use/land-cover classification results in urban scenes. So far, most studies have been conducted in a 2D context. When 3D information is available in the dataset, it is typically integrated with the 2D data by rasterizing the 3D data into 2D formats. Although this method yields satisfactory classification results, it falls short of fully exploiting the potential of 3D data, since it restricts the model's ability to learn 3D spatial features directly from raw point clouds. It also limits the generation of 3D predictions, as the dimensionality of the input data has been reduced. In this study, we propose a fully 3D-based method that fuses all modalities within the 3D point cloud and employs a dedicated dual-branch Transformer model to simultaneously learn geometric and spectral features. To enhance the fusion process, we introduce a cross-attention-based mechanism that operates entirely on 3D points, effectively integrating features from the various modalities across multiple scales. The purpose of cross-attention is to allow one modality to assess the importance of another by weighing the relevant features. We evaluated our method against both 3D and 2D methods on the 2018 IEEE GRSS Data Fusion Contest (DFC2018) dataset. Our findings indicate that 3D fusion delivers results competitive with 2D methods while offering more flexibility by providing 3D predictions. These predictions can be projected onto 2D maps, a capability that is not feasible in reverse. We additionally evaluated our method on the ISPRS Vaihingen 3D and 2019 IEEE GRSS Data Fusion Contest (DFC2019) datasets. Our code will be published here: https://github.com/aldinorizaldy/hyperpointformer.
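
The abstract describes cross-attention as letting one modality weigh the relevant features of another. Below is a minimal PyTorch sketch of such a cross-modal attention block; the class and variable names are illustrative assumptions, and this is not the released HyperPointFormer implementation.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """One modality (the query) weighs the features of another (key/value).

    A minimal sketch with illustrative names, not the authors' code.
    """
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x_query: torch.Tensor, x_context: torch.Tensor) -> torch.Tensor:
        # x_query:   (B, N, dim) per-point features of one modality (e.g., geometry)
        # x_context: (B, N, dim) per-point features of the other (e.g., spectra)
        fused, _ = self.attn(x_query, x_context, x_context)
        return self.norm(x_query + fused)   # residual keeps the query stream intact

# Toy usage: batch of 2 clouds, 1024 points, 64-dim features per modality
geo, spec = torch.randn(2, 1024, 64), torch.randn(2, 1024, 64)
fused_geo = CrossModalAttention(64)(geo, spec)   # geometry attends to spectra
```

The residual connection is a standard Transformer choice: the attended context refines, rather than replaces, the querying modality's features.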
Problem

Research questions and friction points this paper is trying to address.

Fusing multimodal remote sensing data in 3D space for better classification
Overcoming limitations of 2D rasterization in exploiting 3D spatial features
Enhancing cross-modality feature integration using 3D-based cross-attention transformers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fuses modalities within 3D point clouds
Uses dual-branch Transformer for feature learning
Introduces cross-attention for 3D feature integration (a dual-branch sketch follows below)
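
To make the dual-branch idea concrete, here is a hedged sketch of how two modality branches might exchange features stage by stage, ending in per-point class logits that stay in 3D. The class names, number of stages, and the 20-class head (matching DFC2018's class count) are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class BranchStage(nn.Module):
    """One stage of a single modality branch: self-attention on its own
    features, then cross-attention into the other branch. Names here are
    illustrative, not taken from the released code."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, own: torch.Tensor, other: torch.Tensor) -> torch.Tensor:
        h, _ = self.self_attn(own, own, own)        # refine own modality
        own = self.norm1(own + h)
        h, _ = self.cross_attn(own, other, other)   # own queries the other modality
        return self.norm2(own + h)

class DualBranchFusion(nn.Module):
    """Two parallel branches (geometry, spectra) that exchange features at
    every stage; a shared head yields per-point class logits in 3D."""
    def __init__(self, dim: int = 64, stages: int = 2, num_classes: int = 20):
        super().__init__()
        self.geo_stages = nn.ModuleList(BranchStage(dim) for _ in range(stages))
        self.spec_stages = nn.ModuleList(BranchStage(dim) for _ in range(stages))
        self.head = nn.Linear(2 * dim, num_classes)

    def forward(self, geo: torch.Tensor, spec: torch.Tensor) -> torch.Tensor:
        for g_stage, s_stage in zip(self.geo_stages, self.spec_stages):
            # symmetric update: both branches see the other's pre-update features
            geo, spec = g_stage(geo, spec), s_stage(spec, geo)
        return self.head(torch.cat([geo, spec], dim=-1))   # (B, N, num_classes)

# Toy usage: batch of 2 clouds, 1024 points, 64-dim features per modality
logits = DualBranchFusion()(torch.randn(2, 1024, 64), torch.randn(2, 1024, 64))
```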
👥 Authors
Aldino Rizaldy
Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Helmholtz Institute Freiberg for Resource Technology (HIF), 09599 Freiberg, Germany, and also with the Freie Universität Berlin, Department of Remote Sensing and Geoinformation, 12249 Berlin, Germany
Richard Gloaguen
Head of the Exploration Technology Department, Helmholtz-Zentrum Dresden-Rossendorf
hyperspectral imaging, machine learning, computer vision, RPAS, remote sensing
F. E. Fassnacht
Freie Universität Berlin, Department of Remote Sensing and Geoinformation, 12249 Berlin, Germany
Pedram Ghamisi
Group Leader and Professor, HZDR & Lancaster University
Earth Observation, Deep Learning, AI4EO, Responsible AI, Remote Sensing