Mamba Learns in Context: Structure-Aware Domain Generalization for Multi-Task Point Cloud Understanding

📅 2026-03-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of existing Transformer and Mamba models in multi-task domain generalization, which often suffer from structural distortion, unstable sequence modeling, and degraded cross-domain performance. To overcome these challenges, we propose SADG, a Mamba-based in-context learning framework that preserves cross-domain structural consistency in point clouds through Structure-Aware Serialization (SAS) and Hierarchical Domain-aware Modeling (HDM). Additionally, SADG introduces a parameter-free Spectral Graph Alignment (SGA) mechanism at test time, enabling fine-tuning-free feature adaptation. Our approach pioneers the SAS strategy and a lightweight SGA module to support viewpoint-invariant sequential modeling. We also present MP3DObject, the first real-scanned multi-task domain generalization benchmark for point clouds. Extensive experiments demonstrate that SADG significantly outperforms current methods in reconstruction, denoising, and registration tasks, achieving superior structural fidelity and cross-domain generalization.

Technology Category

Application Category

📝 Abstract
While recent Transformer and Mamba architectures have advanced point cloud representation learning, they are typically developed for single-task or single-domain settings. Directly applying them to multi-task domain generalization (DG) leads to degraded performance. Transformers effectively model global dependencies but suffer from quadratic attention cost and lack explicit structural ordering, whereas Mamba offers linear-time recurrence yet often depends on coordinate-driven serialization, which is sensitive to viewpoint changes and missing regions, causing structural drift and unstable sequential modeling. In this paper, we propose Structure-Aware Domain Generalization (SADG), a Mamba-based In-Context Learning framework that preserves structural hierarchy across domains and tasks. We design structure-aware serialization (SAS) that generates transformation-invariant sequences using centroid-based topology and geodesic curvature continuity. We further devise hierarchical domain-aware modeling (HDM) that stabilizes cross-domain reasoning by consolidating intra-domain structure and fusing inter-domain relations. At test time, we introduce a lightweight spectral graph alignment (SGA) that shifts target features toward source prototypes in the spectral domain without updating model parameters, ensuring structure-preserving test-time feature shifting. In addition, we introduce MP3DObject, a real-scan object dataset for multi-task DG evaluation. Comprehensive experiments demonstrate that the proposed approach improves structural fidelity and consistently outperforms state-of-the-art methods across multiple tasks including reconstruction, denoising, and registration.
Problem

Research questions and friction points this paper is trying to address.

multi-task domain generalization
point cloud understanding
structural drift
sequence modeling instability
structure-aware representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mamba
Structure-Aware Serialization
Domain Generalization
In-Context Learning
Spectral Graph Alignment
🔎 Similar Papers
No similar papers found.