AI Summary
Video-language models (VLMs) struggle to adapt to few-shot action detection due to misalignment between their scene-level pretraining granularity and the fine-grained, person-centric demands of the task, compounded by overfitting under limited supervision.
Method: We propose an efficient fine-tuning framework comprising: (i) group-wise relative augmentation, a learnable, diverse feature-space augmentation that enhances generalization; (ii) hybrid adaptation via LoRA-based parameter-efficient tuning and FiLM-based internal feature modulation, enabling fine-grained adaptation while keeping the backbone frozen; and (iii) a dynamic group-weighted loss that mitigates label sparsity and class imbalance.
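The hybrid adaptation in (ii) can be illustrated with a minimal sketch: a frozen linear layer receives a low-rank LoRA update, and its output is then rescaled channel-wise by FiLM parameters. The shapes, rank, and initializations below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 16, 4  # feature dim and LoRA rank (illustrative values)
W = rng.standard_normal((d, d))          # frozen backbone weight
A = 0.01 * rng.standard_normal((r, d))   # trainable LoRA factor
B = np.zeros((d, r))                     # zero-init so the delta starts at 0

gamma = np.ones(d)    # FiLM scale (trainable)
beta = np.zeros(d)    # FiLM shift (trainable)

def adapted_layer(x):
    """Frozen linear layer plus low-rank LoRA update, then FiLM modulation.

    Only A, B, gamma, and beta would be trained; W stays frozen.
    """
    h = x @ (W + B @ A).T     # effective weight = frozen W + low-rank delta
    return gamma * h + beta   # feature-wise linear modulation (FiLM)

x = rng.standard_normal((2, d))
out = adapted_layer(x)
print(out.shape)  # (2, 16)
```

With `B` zero-initialized and `gamma`/`beta` at identity, the adapted layer initially reproduces the frozen layer exactly, which is the standard way to make such adapters safe to train from a pretrained checkpoint.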
Results: Evaluated on the multi-label, multi-person AVA and MOMA benchmarks, our method achieves significant gains over state-of-the-art few-shot approaches while using only a small number of annotated samples, demonstrating superior adaptability and robustness.
Abstract
Adapting large Video-Language Models (VLMs) for action detection using only a few examples poses challenges such as overfitting and the granularity mismatch between scene-level pre-training and the required person-centric understanding. We propose an efficient adaptation strategy combining parameter-efficient tuning (LoRA) with a novel learnable internal feature augmentation. Applied within the frozen VLM backbone via FiLM, these augmentations generate diverse feature variations directly relevant to the task. Additionally, we introduce a group-weighted loss function that dynamically modulates the training contribution of each augmented sample based on its prediction divergence from the group average, promoting robust learning by prioritizing informative yet reasonable augmentations. We demonstrate our method's effectiveness on complex multi-label, multi-person action detection datasets (AVA, MOMA), achieving strong mAP performance and showcasing significant data efficiency when adapting VLMs from limited examples.