🤖 AI Summary
This work addresses the heavy reliance of CLIP-style models on extensive human annotation for fine-grained or complex categories. To reduce dependence on costly manual supervision, we propose a paradigm in which a weak model supervises a strong model for classification. Our core method, Class Prototype Learning (CPL), leverages pseudo-labels generated by a lightweight weak model to construct robust, transferable class prototypes, which are then refined via contrastive learning to strengthen vision-language alignment. Crucially, CPL is the first approach to extend weak-to-strong generalization to multimodal vision-language settings. Extensive experiments under constrained regimes, including few-shot classification and low-resource pretraining, show that CPL consistently outperforms strong baselines, achieving an average accuracy gain of 3.67% across benchmarks. The improvement is particularly pronounced under annotation scarcity, validating CPL's efficacy in low-supervision scenarios.
📝 Abstract
Aligning large-scale commercial models with user intent is crucial to preventing harmful outputs. Current methods rely on human supervision but become impractical as model complexity increases: once models surpass human knowledge, providing accurate feedback becomes difficult and inefficient. A recently proposed solution is to use a weaker model to supervise a stronger one, leveraging the weaker model's ability to perform evaluations and thereby reducing the burden on human supervisors. Previous work has demonstrated the effectiveness of weak-to-strong generalization for language-only models; we extend this concept to the multi-modal setting of vision-language models. In our study, we explore weak-to-strong generalization for CLIP-based classification. We propose class prototype learning (CPL), a method that enhances the classification capability of CLIP by learning more representative prototypes for each category. Our findings indicate that, despite using a simple loss function under weak supervision, CPL yields robust improvements in targeted scenarios, particularly when pretraining is limited. Extensive experiments demonstrate that our approach is effective in these settings, achieving a 3.67% improvement over strong baseline methods.
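To make the prototype idea concrete, here is a minimal sketch of nearest-prototype classification from weak-model pseudo-labels. This is not the authors' implementation (which additionally refines prototypes with a contrastive loss); all function names and array shapes are hypothetical, and random features stand in for CLIP image embeddings.

```python
import numpy as np

def build_class_prototypes(embeddings, pseudo_labels, num_classes):
    """Average L2-normalized embeddings per pseudo-label class to form prototypes."""
    dim = embeddings.shape[1]
    # Normalize embeddings to unit length, mirroring CLIP's cosine-similarity space.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    prototypes = np.zeros((num_classes, dim))
    for c in range(num_classes):
        members = normed[pseudo_labels == c]
        if len(members) > 0:
            prototypes[c] = members.mean(axis=0)
    # Re-normalize prototypes so classification reduces to a dot product.
    prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8
    return prototypes

def classify(embeddings, prototypes):
    """Assign each sample to the prototype with highest cosine similarity."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return (normed @ prototypes.T).argmax(axis=1)

# Toy demo: random vectors in place of CLIP features, random pseudo-labels
# in place of a weak model's predictions.
rng = np.random.default_rng(0)
feats = rng.normal(size=(20, 8))
pseudo_labels = rng.integers(0, 3, size=20)
protos = build_class_prototypes(feats, pseudo_labels, num_classes=3)
preds = classify(feats, protos)
print(protos.shape, preds.shape)
```

In the full method, these averaged prototypes serve as the starting point that contrastive refinement then sharpens against the text embeddings of the category names.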