🤖 AI Summary
Action recognition models suffer from static bias, degrading cross-domain generalization. To address this, we propose a lightweight domain adaptation method that fuses RGB frame features with edge frame features to suppress static bias without significant computational overhead. Specifically, we design an edge-aware feature extraction module that leverages edge maps as auxiliary domain signals to guide RGB feature alignment, and develop a parameter-efficient domain adaptation network to achieve discriminative feature disentanglement and cross-domain alignment. Extensive experiments on multiple cross-domain benchmarks—including UCF-HMDB and Kinetics→Something-Something—demonstrate that our method significantly mitigates static bias, yielding a +4.2% average accuracy gain. Compared to state-of-the-art domain adaptation approaches, it reduces model parameters by 37% and inference latency by 29%, while enhancing robustness and generalization performance.
📝 Abstract
Modern action recognition models suffer from static bias, leading to reduced generalization performance. In this paper, we propose MoExDA, a lightweight domain adaptation between RGB and edge information using edge frames in addition to RGB frames to counter the static bias issue. Experiments demonstrate that the proposed method effectively suppresses static bias with a lower computational cost, allowing for more robust action recognition than previous approaches.