AmorLIP: Efficient Language-Image Pretraining via Amortization

📅 2025-05-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
CLIP pretraining relies on extremely large batch sizes—often requiring hundreds to thousands of GPUs—to stabilize contrastive learning, resulting in prohibitive computational costs; existing mitigation strategies typically compromise downstream performance, training efficiency, or scalability. To address this, the authors propose AmorLIP: a framework that introduces a lightweight amortization network and incorporates a spectral factorization of energy-based models into the contrastive objective, yielding a stable and efficient amortized loss. Coupled with practical training-stability techniques, the method substantially reduces computational overhead without requiring large batch sizes. Evaluated across 38 zero-shot classification and retrieval benchmarks, AmorLIP consistently outperforms standard CLIP baselines, achieving up to a 12.24% relative improvement while advancing training efficiency, generalization, and large-scale deployability.

📝 Abstract
Contrastive Language-Image Pretraining (CLIP) has demonstrated strong zero-shot performance across diverse downstream text-image tasks. Existing CLIP methods typically optimize a contrastive objective using negative samples drawn from each minibatch. To achieve robust representation learning, these methods require extremely large batch sizes and escalate computational demands to hundreds or even thousands of GPUs. Prior approaches to mitigate this issue often compromise downstream performance, prolong training duration, or face scalability challenges with very large datasets. To overcome these limitations, we propose AmorLIP, an efficient CLIP pretraining framework that amortizes expensive computations involved in contrastive learning through lightweight neural networks, which substantially improves training efficiency and performance. Leveraging insights from a spectral factorization of energy-based models, we introduce novel amortization objectives along with practical techniques to improve training stability. Extensive experiments across 38 downstream tasks demonstrate the superior zero-shot classification and retrieval capabilities of AmorLIP, consistently outperforming standard CLIP baselines with substantial relative improvements of up to 12.24%.
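The core idea in the abstract—replacing the batch-dependent part of the contrastive objective with a lightweight learned predictor—can be illustrated with a toy numpy sketch. This is not the authors' implementation: the embedding sizes, the least-squares "amortization head" `c_phi`, and the linear fit are all hypothetical stand-ins for the paper's spectral-factorization-based objectives. The point is only that the InfoNCE log-partition term, which normally requires all in-batch negatives, can be approximated per-sample by a small auxiliary model.

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 0.07  # CLIP-style temperature

# Toy unit-normalized embeddings standing in for image/text encoder outputs.
n, d = 256, 32
img = rng.normal(size=(n, d)); img /= np.linalg.norm(img, axis=1, keepdims=True)
txt = rng.normal(size=(n, d)); txt /= np.linalg.norm(txt, axis=1, keepdims=True)

logits = img @ txt.T / tau

# Standard in-batch InfoNCE: the log-partition log Z(x) sums over ALL n
# negatives, which is why CLIP wants very large batches.
log_Z = np.log(np.exp(logits).sum(axis=1))
infonce = (-np.diag(logits) + log_Z).mean()

# Amortized variant (illustrative): fit a lightweight head c_phi(x) ~ log Z(x)
# so the per-sample loss no longer touches the full batch. Here "training" is
# just least squares on the image features plus a bias column.
X = np.hstack([img, np.ones((n, 1))])
W, *_ = np.linalg.lstsq(X, log_Z, rcond=None)
c_phi = X @ W  # amortized log-partition estimate, one scalar per sample
amortized = (-np.diag(logits) + c_phi).mean()

print(f"exact InfoNCE loss:      {infonce:.4f}")
print(f"amortized approximation: {amortized:.4f}")
```

Once such a head is trained, each sample's loss depends only on its own positive pair and the head's prediction, which is what decouples the objective from the batch size.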
Problem

Research questions and friction points this paper is trying to address.

Reduces high computational demands in CLIP training
Improves efficiency without compromising downstream performance
Enhances zero-shot classification and retrieval capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Amortizes contrastive learning with lightweight networks
Introduces novel amortization objectives for stability
Improves efficiency and performance in CLIP pretraining