Group Relative Augmentation for Data Efficient Action Detection

📅 2025-07-28
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Video-language models (VLMs) struggle to adapt to few-shot action detection due to misalignment between their scene-level pretraining granularity and the fine-grained, person-centric demands of the task, compounded by overfitting under limited supervision. Method: We propose an efficient fine-tuning framework comprising: (i) group-wise relative augmentation, a learnable, diverse feature-space augmentation enhancing generalization; (ii) hybrid adaptation via LoRA-based parameter-efficient tuning and FiLM-based internal feature modulation, enabling fine-grained adaptation while freezing the backbone; and (iii) a dynamic group-weighted loss to mitigate label sparsity and class imbalance. Results: Evaluated on the multi-label, multi-person AVA and MOMA benchmarks, our method achieves significant gains over state-of-the-art few-shot approaches using only minimal annotated samples, demonstrating superior adaptability and robustness.
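The summary describes FiLM-based feature modulation combined with a group-wise relative augmentation in feature space. The paper's exact formulation is not given here, so the following is only a minimal sketch: `film_modulate` is the standard FiLM operation (per-channel scale and shift), and `group_relative_augment` is a hypothetical illustration of one plausible reading, perturbing each sample by a random mixture of its group's offsets from the group mean. All function names and the `strength` parameter are assumptions, not the authors' API.

```python
import numpy as np

def film_modulate(features, gamma, beta):
    """Standard FiLM: per-channel affine modulation of internal features."""
    return gamma * features + beta

def group_relative_augment(group_feats, strength=0.1, rng=None):
    """Hypothetical sketch of a group-wise *relative* augmentation:
    each sample is shifted by a randomly weighted combination of the
    group's deviations from its own mean, so augmentations stay within
    the feature variation the group itself exhibits.

    group_feats: (N, D) array of features for one group of samples.
    """
    rng = np.random.default_rng(rng)
    mean = group_feats.mean(axis=0, keepdims=True)
    offsets = group_feats - mean                      # (N, D) relative offsets
    weights = rng.normal(scale=strength,
                         size=(group_feats.shape[0], group_feats.shape[0]))
    return group_feats + weights @ offsets            # same (N, D) shape
```

With `gamma = 1` and `beta = 0`, FiLM is the identity; in the paper the FiLM parameters are the learnable part applied inside the frozen backbone.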

๐Ÿ“ Abstract
Adapting large Video-Language Models (VLMs) for action detection using only a few examples poses challenges like overfitting and the granularity mismatch between scene-level pre-training and required person-centric understanding. We propose an efficient adaptation strategy combining parameter-efficient tuning (LoRA) with a novel learnable internal feature augmentation. Applied within the frozen VLM backbone using FiLM, these augmentations generate diverse feature variations directly relevant to the task. Additionally, we introduce a group-weighted loss function that dynamically modulates the training contribution of each augmented sample based on its prediction divergence relative to the group average. This promotes robust learning by prioritizing informative yet reasonable augmentations. We demonstrate our method's effectiveness on complex multi-label, multi-person action detection datasets (AVA, MOMA), achieving strong mAP performance and showcasing significant data efficiency for adapting VLMs from limited examples.
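The abstract states that the group-weighted loss modulates each augmented sample's contribution by its prediction divergence from the group average. The exact weighting scheme is not specified here, so this is a hedged sketch of one natural instantiation: L2 divergence from the mean prediction, mapped through a softened exponential so that augmentations far from the group consensus are down-weighted. The function name, the L2 choice, and the temperature `tau` are assumptions.

```python
import numpy as np

def group_weighted_loss(per_sample_losses, predictions, tau=1.0):
    """Sketch of a group-weighted loss: each augmented sample's loss is
    re-weighted by how far its prediction lies from the group-average
    prediction, so extreme (unreasonable) augmentations contribute less.

    per_sample_losses: (N,) loss per augmented sample.
    predictions:       (N, C) predicted scores per sample.
    """
    group_mean = predictions.mean(axis=0, keepdims=True)
    divergence = np.linalg.norm(predictions - group_mean, axis=1)  # (N,)
    weights = np.exp(-divergence / tau)   # consensus-near samples weigh more
    weights /= weights.sum()              # normalize to a convex combination
    return float((weights * per_sample_losses).sum())
```

When all augmented predictions agree, the weights are uniform and this reduces to the plain mean loss; the re-weighting only activates when augmentations disagree.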
Problem

Research questions and friction points this paper is trying to address.

Adapting VLMs for action detection with few examples
Overcoming granularity mismatch in person-centric understanding
Enhancing data efficiency in multi-person action detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameter-efficient tuning with LoRA
Learnable internal feature augmentation
Group-weighted loss for robust learning
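The LoRA component listed above follows the standard low-rank adaptation recipe: the pretrained weight stays frozen and only a rank-r factorized update is trained. A minimal sketch of that forward pass (function name and the `alpha` default are illustrative, not the paper's code):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """LoRA forward pass: frozen weight W plus trainable low-rank
    update B @ A, scaled by alpha / r (r = rank).

    x: (batch, d_in), W: (d_out, d_in), A: (r, d_in), B: (d_out, r).
    """
    r = A.shape[0]
    return x @ (W + (alpha / r) * (B @ A)).T
```

Initializing `B` to zeros (the usual LoRA convention) makes the adapted layer start out identical to the frozen pretrained layer.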