🤖 AI Summary
This work addresses the limitations of existing few-shot action recognition approaches, which rely on a suboptimal feature-caption-feature pipeline and perform metric learning solely in the visual space. To overcome these constraints, we propose FSAR-LLaVA, the first end-to-end framework that leverages a multimodal large language model (MLLM) as a knowledge base for this task. Our method extracts spatiotemporally and semantically enriched features via the MLLM decoder and introduces a multimodal feature enhancement module, a composite task-oriented prototype construction mechanism, and a training-free multimodal prototype matching metric. Coupled with task-adaptive prompt engineering, our approach effectively bridges the distribution gap between the meta-training and meta-testing phases. Extensive experiments demonstrate significant performance gains across multiple few-shot action recognition benchmarks, achieved with only a small number of trainable parameters.
📝 Abstract
Multimodal Large Language Models (MLLMs) have propelled the field of few-shot action recognition (FSAR). However, preliminary explorations in this area primarily generate captions, forming a suboptimal feature → caption → feature pipeline, and adopt metric learning solely within the visual space. In this paper, we propose FSAR-LLaVA, the first end-to-end method that leverages MLLMs (such as Video-LLaVA) as a multimodal knowledge base to directly enhance FSAR. First, at the feature level, we use the MLLM's multimodal decoder to extract spatiotemporally and semantically enriched representations, which our Multimodal Feature Enhancement Module decouples and enhances into distinct visual and textual features, fully exploiting the MLLM's semantic knowledge for FSAR. Next, we exploit the versatility of MLLMs to craft input prompts that flexibly adapt to diverse scenarios, and use their aligned outputs to drive our Composite Task-Oriented Prototype Construction, effectively bridging the distribution gap between the meta-train and meta-test sets. Finally, to let multimodal features jointly guide metric learning, we introduce a training-free Multimodal Prototype Matching Metric that adaptively selects the most decisive cues and efficiently leverages the decoupled feature representations produced by the MLLM. Extensive experiments demonstrate superior performance across various tasks with minimal trainable parameters.
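To give a rough sense of what a training-free multimodal prototype matching step could look like, the sketch below scores query clips against visual and textual class prototypes with cosine similarity and keeps the more decisive cue per query-class pair. This is a minimal illustration only: the abstract does not specify the exact formulation, so the tensor shapes, the mean-pooled prototypes, and the element-wise maximum used for "selecting the most decisive cue" are all assumptions, not the paper's actual metric.

```python
import torch
import torch.nn.functional as F

def build_prototypes(support_feats: torch.Tensor) -> torch.Tensor:
    """Mean-pool support features per class (assumed construction).

    support_feats: (N_classes, K_shots, D) decoupled features of one modality.
    Returns:       (N_classes, D) class prototypes.
    """
    return support_feats.mean(dim=1)

def multimodal_prototype_matching(
    query_vis: torch.Tensor,   # (Q, D) visual query features
    query_txt: torch.Tensor,   # (Q, D) textual query features
    proto_vis: torch.Tensor,   # (N, D) visual class prototypes
    proto_txt: torch.Tensor,   # (N, D) textual class prototypes
) -> torch.Tensor:
    """Training-free matching sketch: cosine similarity per modality,
    then an element-wise maximum as one plausible parameter-free way
    to keep whichever modality gives the stronger evidence."""
    sim_vis = F.normalize(query_vis, dim=-1) @ F.normalize(proto_vis, dim=-1).T
    sim_txt = F.normalize(query_txt, dim=-1) @ F.normalize(proto_txt, dim=-1).T
    return torch.maximum(sim_vis, sim_txt)  # (Q, N) class scores

# Toy usage: a 5-way 1-shot episode with 256-d features.
proto_v = build_prototypes(torch.randn(5, 1, 256))
proto_t = build_prototypes(torch.randn(5, 1, 256))
scores = multimodal_prototype_matching(torch.randn(10, 256), torch.randn(10, 256), proto_v, proto_t)
pred = scores.argmax(dim=-1)  # predicted class per query
```

Because the matching itself introduces no learnable parameters, any such metric can be swapped in at meta-test time without retraining, which is consistent with the "training-free" claim above.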