NeuCLIP: Efficient Large-Scale CLIP Training with Neural Normalizer Optimization

📅 2025-11-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Accurately estimating the normalization term (partition function) in contrastive loss remains a critical bottleneck for large-scale CLIP training. To address this, we propose NeuCLIP—the first method to model the per-sample log-normalization term as the output of a compact, learnable neural network. Leveraging convex analysis, we reformulate the contrastive loss and integrate variational inference to transform high-dimensional normalization optimization into end-to-end neural network learning—eliminating traditional block-coordinate updates and their associated optimization bias while enhancing training stability. We design a lightweight auxiliary network jointly optimized with the main model and introduce dedicated acceleration strategies. Experiments on datasets ranging from millions to billions of samples demonstrate that NeuCLIP significantly improves normalization estimation accuracy under identical computational budgets, achieving state-of-the-art performance in both image–text retrieval and zero-shot transfer tasks.

📝 Abstract
Accurately estimating the normalization term (also known as the partition function) in the contrastive loss is a central challenge for training Contrastive Language-Image Pre-training (CLIP) models. Conventional methods rely on large batches for approximation, demanding substantial computational resources. To mitigate this issue, prior works introduced per-sample normalizer estimators, which are updated at each epoch in a blockwise coordinate manner to keep track of updated encoders. However, this scheme incurs optimization error that scales with the ratio of dataset size to batch size, limiting effectiveness for large datasets or small batches. To overcome this limitation, we propose NeuCLIP, a novel and elegant optimization framework based on two key ideas: (i) reformulating the contrastive loss for each sample via convex analysis into a minimization problem with an auxiliary variable representing its log-normalizer; and (ii) transforming the resulting minimization over $n$ auxiliary variables (where $n$ is the dataset size) via variational analysis into the minimization over a compact neural network that predicts the log-normalizers. We design an alternating optimization algorithm that jointly trains the CLIP model and the auxiliary network. By employing a tailored architecture and acceleration techniques for the auxiliary network, NeuCLIP achieves more accurate normalizer estimation, leading to improved performance compared with previous methods. Extensive experiments on large-scale CLIP training, spanning datasets from millions to billions of samples, demonstrate that NeuCLIP outperforms previous methods.
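The convex reformulation in idea (i) can be sanity-checked numerically. A minimal sketch (an illustration, not the paper's code; the surrogate and the gradient loop are assumptions) uses the standard variational identity log Σ_j e^{x_j} = min_u [ u + e^{-u} Σ_j e^{x_j} − 1 ], which makes each sample's log-normalizer an explicit auxiliary variable:

```python
import numpy as np

def logsumexp(x):
    # numerically stable log-sum-exp
    m = x.max()
    return m + np.log(np.exp(x - m).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=8)  # stand-in similarity scores for one sample

# Convex surrogate phi(u) = u + exp(-u) * sum_j exp(x_j) - 1.
# Setting d(phi)/du = 0 gives u* = logsumexp(x), and phi(u*) = logsumexp(x),
# so minimizing over the auxiliary variable u recovers the log-normalizer.
S = np.exp(x).sum()
u = 0.0
for _ in range(300):
    u -= 0.1 * (1.0 - np.exp(-u) * S)  # gradient step on phi

print(u, logsumexp(x))  # the two values agree closely
```

Applied per sample, this identity turns the contrastive loss into a joint minimization over the model parameters and one scalar u_i per sample, which idea (ii) then compresses into a compact network predicting the u_i.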
Problem

Research questions and friction points this paper is trying to address.

Accurately estimating the normalization term in the CLIP contrastive loss
Overcoming the computational cost of large-batch approximations
Reducing optimization error for large datasets or small batches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reformulates contrastive loss via convex analysis
Transforms auxiliary variables using variational analysis
Employs neural network for log-normalizer prediction
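A key property the reformulation buys can be illustrated with a synthetic sketch (random scores rather than real CLIP similarities; not the paper's implementation): once the log-normalizer u is an explicit variable, the remaining term Σ_j exp(x_j − u) is a plain sum over negatives, so a small sampled batch yields an unbiased estimate of it, whereas plugging the same small batch directly into log Σ_j exp(x_j) is biased downward by Jensen's inequality — one reason conventional CLIP training leans on very large batches:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000                 # large pool of negatives for one anchor
x = rng.normal(size=n)     # synthetic similarity scores
b = 8                      # tiny sampled negative batch
u = 0.0                    # auxiliary log-normalizer, held fixed here

full_sum = np.exp(x - u).sum()         # exact sum_j exp(x_j - u)
full_lse = np.log(np.exp(x).sum())     # exact log-normalizer

lin_est, log_est = [], []
for _ in range(2000):
    idx = rng.integers(0, n, size=b)
    # Rescaled minibatch sum: an unbiased estimator of full_sum.
    lin_est.append((n / b) * np.exp(x[idx] - u).sum())
    # Log of the same rescaled sum: a biased estimator of full_lse.
    log_est.append(np.log((n / b) * np.exp(x[idx]).sum()))

print(np.mean(lin_est) / full_sum)   # close to 1 (unbiased)
print(np.mean(log_est) - full_lse)   # noticeably negative (Jensen bias)
```

This decomposition is what lets the auxiliary-variable (and hence auxiliary-network) formulation work with small batches, where the plain log-sum-exp loss would accumulate the bias shown above.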