Move What Matters: Parameter-Efficient Domain Adaptation via Optimal Transport Flow for Collaborative Perception

📅 2026-02-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the performance degradation and training instability of existing parameter-efficient fine-tuning methods in cross-domain deployment of multi-agent V2X cooperative perception, which stem from inter-frame redundancy and deep semantic degradation in heterogeneous perception streams. To tackle these challenges, we propose FlowAdapt, a novel framework that introduces optimal transport theory to this domain for the first time. FlowAdapt employs Wasserstein greedy sampling to eliminate redundant samples and incorporates a progressive knowledge transfer module that injects compressed shallow representations into deeper layers to mitigate semantic degradation. Requiring only 1% trainable parameters, our method significantly enhances cross-domain generalization and sample efficiency across three benchmarks, achieving an effective balance between semantic fidelity and computational efficiency.
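The Wasserstein greedy sampling idea described above can be sketched as a greedy covering procedure: repeatedly select the frame farthest (in Wasserstein distance) from the already-selected set until every frame lies within a bounded covering radius of some representative. This is a minimal illustrative sketch, not the paper's implementation; the function names `wasserstein_1d` and `greedy_cover` are hypothetical, and the 1-D distance over flattened features stands in for whatever distributional distance the paper actually uses.

```python
import numpy as np

def wasserstein_1d(a, b):
    # W1 distance between two equal-size 1-D empirical distributions
    # equals the mean absolute difference of the sorted samples.
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

def greedy_cover(frames, radius):
    """Greedy k-center selection: keep adding the frame farthest from
    the selected set until every frame is within `radius` of some
    selected frame (the bounded covering radius)."""
    selected = [0]  # seed with the first frame
    dist = np.array([wasserstein_1d(frames[0], f) for f in frames])
    while dist.max() > radius:
        i = int(dist.argmax())  # farthest uncovered frame
        selected.append(i)
        d_new = np.array([wasserstein_1d(frames[i], f) for f in frames])
        dist = np.minimum(dist, d_new)  # distance to nearest selected frame
    return selected
```

On redundant streams, near-duplicate frames collapse onto a few representatives, which is the sample-efficiency effect the summary describes.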


📝 Abstract
Fast domain adaptation remains a fundamental challenge for deploying multi-agent systems across diverse environments in Vehicle-to-Everything (V2X) collaborative perception. Despite the success of Parameter-Efficient Fine-Tuning (PEFT) in natural language processing and conventional vision tasks, directly applying PEFT to multi-agent settings leads to significant performance degradation and training instability. In this work, we conduct a detailed analysis and identify two key factors: (i) inter-frame redundancy in heterogeneous sensory streams, and (ii) erosion of fine-grained semantics in deep-layer representations under PEFT adaptation. To address these issues, we propose FlowAdapt, a parameter-efficient framework grounded in optimal transport theory, which minimizes information transport costs across both data distributions and network hierarchies. Specifically, we introduce a Wasserstein Greedy Sampling strategy to selectively filter redundant samples via a bounded covering radius. Furthermore, a Progressive Knowledge Transfer module is designed to progressively inject compressed early-stage representations into later stages through learnable pathways, alleviating semantic degradation in late-stage adaptation. Extensive experiments on three benchmarks demonstrate that FlowAdapt achieves state-of-the-art performance with only 1% of trainable parameters, effectively bridging domain gaps with superior sample efficiency and generalization.
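The progressive knowledge transfer mechanism in the abstract, which injects compressed early-stage representations into later stages through learnable pathways, can be sketched as a bottleneck projection added to the deep features. This is a hypothetical numpy sketch under stated assumptions: the class name `ProgressiveTransfer`, the bottleneck size, and the zero-initialized scalar gate are illustrative choices, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class ProgressiveTransfer:
    """Hypothetical sketch of a learnable injection pathway: compress
    early-stage features through a bottleneck, project them into the
    late-stage feature space, and add them via a scalar gate that is
    zero-initialized so adaptation starts from the frozen backbone."""
    def __init__(self, early_dim, late_dim, bottleneck=16):
        self.W_down = rng.standard_normal((early_dim, bottleneck)) * 0.02
        self.W_up = rng.standard_normal((bottleneck, late_dim)) * 0.02
        self.gate = 0.0  # learned scalar; zero init keeps the original path intact

    def forward(self, early_feat, late_feat):
        # Compress shallow semantics, re-expand, and inject into the deep stage.
        injected = np.maximum(early_feat @ self.W_down, 0.0) @ self.W_up
        return late_feat + self.gate * injected
```

The zero-initialized gate is a common stabilization trick for such side pathways: at the start of fine-tuning the module is an identity on the deep features, so the injection grows only as far as training finds it useful.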
Problem

Research questions and friction points this paper is trying to address.

domain adaptation
collaborative perception
parameter-efficient fine-tuning
multi-agent systems
semantic degradation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameter-Efficient Domain Adaptation
Optimal Transport
Collaborative Perception
Wasserstein Sampling
Progressive Knowledge Transfer
Zesheng Jia
School of Future Science and Engineering, Soochow University
Jin Wang
School of Future Science and Engineering, Soochow University
Siao Liu
School of Future Science and Engineering, Soochow University
Lingzhi Li
Tongji University
Fire resistance · Strengthening and retrofit · High performance concrete materials
Ziyao Huang
Institute of Computing Technology, CAS
Computer Vision
Yunjiang Xu
City University of Hong Kong
Jianping Wang
Fellow of IEEE, Fellow of AAIA, Chair Professor, City University of Hong Kong
Autonomous Driving · Edge Computing · Cloud Computing · Networking