🤖 AI Summary
This work addresses the challenge of collaborative perception among heterogeneous agents, where modality isolation, which stems from the absence of co-occurring modalities in the training data, exacerbates cross-modal domain gaps and impedes effective cooperation. To overcome this limitation, the authors propose CodeAlign, a framework that achieves efficient modality alignment without requiring spatially co-located supervision. CodeAlign introduces a feature-code-feature (FCF) translation mechanism coupled with codebook regularization to explicitly enforce cross-modal representation consistency, thereby constructing a compact yet expressive shared semantic space. Notably, the method eliminates reliance on spatially overlapping observations and attains state-of-the-art performance on both the OPV2V and DAIR-V2X benchmarks, using only 8% of the training parameters of prior alignment methods while reducing communication overhead by a factor of 1024.
📝 Abstract
Collaborative perception leverages data exchange among multiple agents to enhance overall perception capabilities. However, heterogeneity across agents introduces domain gaps that hinder collaboration, and this is further exacerbated by an underexplored issue: modality isolation. It arises when agents with different modalities never co-occur in any training data frame, enlarging cross-modal domain gaps. Existing alignment methods rely on supervision from spatially overlapping observations and thus fail to handle modality isolation. To address this challenge, we propose CodeAlign, the first efficient, co-occurrence-free alignment framework that smoothly aligns modalities via cross-modal feature-code-feature (FCF) translation. The key idea is to explicitly identify representation consistency through codebooks and to directly learn mappings between modality-specific feature spaces, thereby eliminating the need for spatial correspondence. Codebooks regularize feature spaces into code spaces, providing compact yet expressive representations. With a code space prepared for each modality, CodeAlign learns FCF translations that map features to the corresponding codes of other modalities, which are then decoded back into features in the target code space, enabling effective alignment. Experiments show that, when integrating three modalities, CodeAlign requires only 8% of the training parameters of prior alignment methods, reduces communication load by 1024x, and achieves state-of-the-art perception performance on both the OPV2V and DAIR-V2X datasets. Code will be released at https://github.com/cxliu0314/CodeAlign.
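The feature-code-feature translation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a nearest-neighbor (vector-quantization-style) codebook lookup and stands in for the learned cross-modal code mapping with a fixed permutation; all names, dimensions, and codebook contents are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # feature dimension (assumed for illustration)
K = 32  # codebook size (assumed for illustration)

# Each modality has its own codebook that regularizes its feature space
# into a discrete code space (here: random codes, purely illustrative).
codebook_lidar = rng.normal(size=(K, D))
codebook_camera = rng.normal(size=(K, D))

def quantize(feat, codebook):
    """Map a continuous feature to the index of its nearest code."""
    dists = np.linalg.norm(codebook - feat, axis=1)
    return int(np.argmin(dists))

# Stand-in for the learned mapping between the two code spaces;
# in CodeAlign this correspondence is trained, not fixed.
code_map = rng.permutation(K)

def fcf_translate(feat_src, cb_src, cb_tgt, mapping):
    """Feature -> source code -> target code -> feature in the target space."""
    idx_src = quantize(feat_src, cb_src)       # encode into source code space
    idx_tgt = int(mapping[idx_src])            # translate code across modalities
    return cb_tgt[idx_tgt]                     # decode back into a target feature

lidar_feat = rng.normal(size=D)
aligned = fcf_translate(lidar_feat, codebook_lidar, codebook_camera, code_map)
```

Note that only the integer code index needs to cross the modality boundary rather than the full D-dimensional feature, which is consistent with the large communication savings the abstract reports, though the exact 1024x figure depends on the paper's feature and code sizes.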