$x^2$-Fusion: Cross-Modality and Cross-Dimension Flow Estimation in Event Edge Space

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of jointly estimating 2D optical flow and 3D scene flow from heterogeneous multimodal inputs (images, LiDAR, and event data) whose disparate feature spaces hinder unified representation. To overcome this, the authors construct a homogeneous representation space anchored by event-derived edge fields, into which image and LiDAR features are aligned. They further introduce a reliability-aware adaptive fusion mechanism and a cross-dimension contrastive learning strategy to enable effective cross-modal alignment and coupled optimization of 2D and 3D motion. The method achieves state-of-the-art accuracy on both synthetic and real-world datasets, with its largest gains over existing approaches in degraded scenarios such as low-light or high-speed conditions.
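As a concrete illustration of the reliability-aware fusion idea, here is a minimal PyTorch sketch: per-modality features, assumed to be already projected into the shared edge space, each receive a learned per-pixel reliability score, and a softmax across modalities turns those scores into fusion weights. The `ReliabilityFusion` module, its conv-based score heads, and all shapes are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ReliabilityFusion(nn.Module):
    """Hypothetical reliability-weighted fusion of aligned modality features."""

    def __init__(self, channels: int, num_modalities: int = 3):
        super().__init__()
        # One small conv head per modality predicts a reliability logit map.
        self.score_heads = nn.ModuleList(
            nn.Conv2d(channels, 1, kernel_size=3, padding=1)
            for _ in range(num_modalities)
        )

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        # feats: per-modality tensors of shape (B, C, H, W) in the shared space.
        logits = torch.stack(
            [head(f) for head, f in zip(self.score_heads, feats)], dim=0
        )                                       # (M, B, 1, H, W)
        weights = torch.softmax(logits, dim=0)  # normalize across modalities
        # Degraded modalities (e.g. images in low light) can be down-weighted.
        return (weights * torch.stack(feats, dim=0)).sum(dim=0)  # (B, C, H, W)

# Example: fuse image, LiDAR, and event features of matching shape.
fusion = ReliabilityFusion(channels=64)
feats = [torch.randn(2, 64, 32, 32) for _ in range(3)]
print(fusion(feats).shape)  # torch.Size([2, 64, 32, 32])
```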

📝 Abstract
Estimating dense 2D optical flow and 3D scene flow is essential for dynamic scene understanding. Recent work combines images, LiDAR, and event data to jointly predict 2D and 3D motion, yet most approaches operate in separate heterogeneous feature spaces. Without a shared latent space that all modalities can align to, these systems rely on multiple modality-specific blocks, leaving cross-sensor mismatches unresolved and making fusion unnecessarily complex. Event cameras naturally provide a spatiotemporal edge signal, which we can treat as an intrinsic edge field to anchor a unified latent representation, termed the Event Edge Space. Building on this idea, we introduce $x^2$-Fusion, which reframes multimodal fusion as representation unification: event-derived spatiotemporal edges define an edge-centric homogeneous space, and image and LiDAR features are explicitly aligned in this shared representation. Within this space, we perform reliability-aware adaptive fusion that estimates each modality's reliability and emphasizes stable cues under degradation. We further employ cross-dimension contrastive learning to tightly couple 2D optical flow with 3D scene flow. Extensive experiments on both synthetic and real benchmarks show that $x^2$-Fusion achieves state-of-the-art accuracy under standard conditions and delivers substantial improvements in challenging scenarios.
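The cross-dimension contrastive objective can be pictured as a standard symmetric InfoNCE loss between embeddings pooled from the 2D optical-flow branch and the 3D scene-flow branch: matching 2D/3D pairs from the same scene are pulled together, pairs from different scenes in the batch are pushed apart. This CLIP-style formulation is an assumed stand-in for the paper's actual loss; `cross_dimension_infonce` and the temperature value are hypothetical.

```python
import torch
import torch.nn.functional as F

def cross_dimension_infonce(z2d: torch.Tensor,
                            z3d: torch.Tensor,
                            temperature: float = 0.07) -> torch.Tensor:
    # z2d, z3d: (B, D) embeddings pooled from the 2D and 3D flow branches.
    z2d = F.normalize(z2d, dim=1)
    z3d = F.normalize(z3d, dim=1)
    logits = z2d @ z3d.t() / temperature  # (B, B) cosine-similarity matrix
    targets = torch.arange(z2d.size(0), device=z2d.device)  # diagonal = positives
    # Symmetric loss: align both the 2D->3D and 3D->2D directions.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```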
Problem

Research questions and friction points this paper is trying to address.

optical flow
scene flow
multimodal fusion
event camera
cross-modality alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Event Edge Space
Cross-Modality Fusion
Cross-Dimension Flow Estimation
Reliability-Aware Fusion
Contrastive Learning