🤖 AI Summary
This work addresses the challenge of poor generalization in multimodal large language models (MLLMs) within specialized video domains—such as industrial and surgical settings—where annotated data are scarce. To overcome this limitation, the authors propose a label-efficient framework that enables effective in-context learning using only minimal expert annotations alongside abundant unlabeled videos. The approach introduces three key innovations: density-uncertainty weighted sampling to filter visual outliers, a confidence-aware pseudo-label filtering mechanism, and a retrieval-and-prompting strategy powered by a hybrid demonstration pool. Evaluated across nine diverse video benchmarks and four mainstream MLLMs, the method consistently outperforms existing techniques, achieving robust cross-domain adaptation with extremely low annotation costs.
📝 Abstract
Generalizing Multimodal Large Language Models (MLLMs) to novel video domains is essential for real-world deployment but remains challenging due to the scarcity of labeled data. While In-Context Learning (ICL) offers a training-free adaptation path, standard methods rely on large annotated pools, which are often impractical in specialized environments such as industrial or surgical settings, where labeling requires scarce expert effort. To bridge this gap, we introduce VIOLA (Video In-cOntext Learning with minimal Annotation), a label-efficient framework that combines minimal expert supervision with abundant unlabeled data. First, to maximize the efficiency of a strict annotation budget, we propose density-uncertainty-weighted sampling. Unlike standard diversity or uncertainty strategies that risk selecting visual outliers, our method leverages density estimation to identify samples that are simultaneously diverse, representative, and informative. Second, to utilize the remaining unlabeled data without propagating label noise, we construct a hybrid demonstration pool and introduce confidence-aware retrieval and confidence-aware prompting. These mechanisms explicitly model label reliability: demonstrations are retrieved based on a composite score of similarity and confidence, and the MLLM is prompted to adaptively distinguish verified ground truths from noisy pseudo-labels. Extensive experiments across nine diverse benchmarks using four MLLMs demonstrate that our framework significantly outperforms various baselines in low-resource settings, achieving robust adaptation at minimal annotation cost.
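To make the two scoring mechanisms concrete, below is a minimal, hypothetical sketch (not the authors' released code) of how a density-uncertainty-weighted acquisition score and a confidence-aware composite retrieval score could look. The k-NN density estimate, the product/linear combinations, and all function names and parameters (e.g. `alpha`) are illustrative assumptions, since the abstract does not specify the exact formulas.

```python
import numpy as np

def density_uncertainty_scores(embeddings, uncertainties, k=10):
    """Hypothetical acquisition score: favor samples that are both
    uncertain and lie in dense (representative) regions of embedding
    space, so visual outliers are down-weighted. The k-NN density
    proxy and the product combination are assumptions."""
    # Pairwise Euclidean distances between video embeddings (N, N).
    dists = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    # Density proxy: inverse mean distance to the k nearest neighbors
    # (column 0 is the zero self-distance, so it is skipped).
    knn = np.sort(dists, axis=1)[:, 1:k + 1]
    density = 1.0 / (knn.mean(axis=1) + 1e-8)
    density = density / density.max()  # normalize to [0, 1]
    # Dense AND uncertain samples receive the annotation budget;
    # uncertain outliers do not.
    return density * uncertainties

def retrieval_scores(query_emb, pool_embs, pool_conf, alpha=0.5):
    """Hypothetical confidence-aware retrieval: mix query-demonstration
    cosine similarity with label confidence (1.0 for expert-verified
    labels, model confidence for pseudo-labels)."""
    sims = pool_embs @ query_emb / (
        np.linalg.norm(pool_embs, axis=1) * np.linalg.norm(query_emb) + 1e-8)
    return alpha * sims + (1.0 - alpha) * pool_conf
```

In such a setup, the top-scoring demonstrations from the hybrid pool would be placed in the prompt, each tagged as expert-verified or pseudo-labeled so the MLLM can weigh them accordingly (the confidence-aware prompting described above).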