🤖 AI Summary
This work proposes the Conformal Cross-Modal Acquisition (CCMA) framework to reduce annotation costs and improve data efficiency in active learning. CCMA integrates conformal prediction with cross-modal knowledge transfer by using a pretrained vision-language model as the teacher in a teacher–student architecture. The teacher provides the vision-only student model with semantically aligned, conformally calibrated uncertainty estimates, which are combined with a diversity-aware strategy to select the most informative samples for labeling. By jointly exploiting calibrated uncertainty and sample diversity, CCMA moves beyond conventional active-learning methods that rely on uncertainty or diversity heuristics alone. Extensive experiments show that CCMA consistently outperforms existing methods across multiple benchmarks, yielding more efficient and reliable active learning.
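The selection recipe the summary describes — calibrated uncertainty from conformal prediction combined with a diversity-aware pick — can be sketched in general terms. The following is a minimal illustration, not the paper's actual implementation: it uses split conformal calibration (nonconformity = one minus the probability of the true class), prediction-set size as the uncertainty signal, and greedy k-center selection for diversity. All function names, the shortlist heuristic (`pool_frac`), and the choice of k-center are illustrative assumptions.

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    # Split conformal calibration on a held-out labeled set:
    # nonconformity score = 1 - predicted probability of the true class.
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level, method="higher")

def set_size_uncertainty(probs, qhat):
    # Conformal prediction-set size per sample: a class c is in the set
    # when 1 - p_c <= qhat. A larger set means higher calibrated uncertainty.
    return (1.0 - probs <= qhat).sum(axis=1)

def greedy_kcenter(features, k, seed_idx=0):
    # Diversity-aware selection: greedily pick points that maximize the
    # distance to the already-chosen set (classic k-center heuristic).
    chosen = [seed_idx]
    dists = np.linalg.norm(features - features[seed_idx], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(features - features[nxt], axis=1))
    return chosen

def select_batch(pool_probs, pool_features, qhat, budget, pool_frac=0.5):
    # Shortlist the most-uncertain unlabeled samples, then pick a
    # diverse subset of size `budget` from the shortlist.
    unc = set_size_uncertainty(pool_probs, qhat)
    m = max(budget, int(len(pool_probs) * pool_frac))
    shortlist = np.argsort(-unc)[:m]
    picked = greedy_kcenter(pool_features[shortlist], budget)
    return shortlist[picked]
```

In a CCMA-like setting, `cal_probs` and `pool_probs` would come from the VLM teacher's zero-shot predictions, while `pool_features` could be the student's visual embeddings; the returned indices are the samples sent for annotation.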
📝 Abstract
Foundation models for vision have transformed visual recognition with powerful pretrained representations and strong zero-shot capabilities, yet their potential for data-efficient learning remains largely untapped. Active Learning (AL) aims to minimize annotation costs by strategically selecting the most informative samples for labeling, but existing methods largely overlook the rich multimodal knowledge embedded in modern vision-language models (VLMs). We introduce Conformal Cross-Modal Acquisition (CCMA), a novel AL framework that bridges vision and language modalities through a teacher–student architecture. CCMA employs a pretrained VLM as a teacher to provide semantically grounded uncertainty estimates, conformally calibrated to guide sample selection for a vision-only student model. By integrating multimodal conformal scoring with diversity-aware selection strategies, CCMA achieves superior data efficiency across multiple benchmarks. Our approach consistently outperforms state-of-the-art AL baselines, demonstrating clear advantages over methods relying solely on uncertainty or diversity metrics.