Post-training quantization of vision encoders needs prefixing registers

📅 2025-10-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Post-training quantization of vision encoders (e.g., CLIP) often suffers severe accuracy degradation, particularly at 8-bit precision, due to massive-scale outliers in intermediate-layer activations. The paper's key observation is that outlier behavior in vision encoders differs fundamentally from that in language models. To address this without fine-tuning, the authors propose RegCache, a lightweight, inference-only quantization method. RegCache injects semantically meaningless prefix (register) tokens at intermediate layers to absorb outliers, then deletes redundant ones at later layers (token deletion), preventing other tokens from developing outliers. Crucially, it preserves the original model architecture and introduces no trainable parameters. Extensive experiments on both text-supervised and self-supervised vision encoders demonstrate that RegCache boosts 8-bit quantized Top-1 accuracy by 4.2–6.8 percentage points on average, closely matching full-precision performance.

📝 Abstract
Transformer-based vision encoders -- such as CLIP -- are central to multimodal intelligence, powering applications from autonomous web agents to robotic control. Since these applications often demand real-time processing of massive visual data, reducing the inference cost of vision encoders is critical. Post-training quantization offers a practical path, but remains challenging even at 8-bit precision due to massive-scale activations (i.e., outliers). In this work, we propose RegCache, a training-free algorithm to mitigate outliers in vision encoders, enabling quantization with significantly smaller accuracy drops. The proposed RegCache introduces outlier-prone yet semantically meaningless prefix tokens to the target vision encoder, which prevents other tokens from having outliers. Notably, we observe that outliers in vision encoders behave differently from those in language models, motivating two technical innovations: middle-layer prefixing and token deletion. Experiments show that our method consistently improves the accuracy of quantized models across both text-supervised and self-supervised vision encoders.
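Why do a few massive-scale activations break 8-bit quantization? With symmetric per-tensor quantization, one outlier inflates the scale for the entire tensor, crushing the resolution left for well-behaved tokens. The numpy sketch below illustrates this motivation only, not the paper's actual pipeline: if the outlier mass sits in a dedicated, semantically meaningless prefix token that is excluded from the quantized content tokens, the remaining activations quantize with a much tighter scale. All tensor sizes and the outlier magnitude are illustrative assumptions.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor int8 fake-quantization: round to a grid
    whose scale is set by the tensor's max magnitude."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -128, 127)
    return q * scale  # dequantized values

def mse(a, b):
    return float(np.mean((a - b) ** 2))

rng = np.random.default_rng(0)

# 16 tokens x 64 channels of well-behaved activations ...
acts = rng.normal(0.0, 1.0, size=(16, 64))
# ... plus a massive-scale outlier in one token, as observed in
# vision-encoder intermediate layers (magnitude is illustrative).
acts_outlier = acts.copy()
acts_outlier[3, 10] = 80.0

# Case 1: quantize directly. The outlier inflates the per-tensor
# scale, so every normal token loses precision.
err_direct = mse(acts_outlier, quantize_int8(acts_outlier))

# Case 2 (RegCache-flavored, simplified): the outlier is absorbed by
# a prefix token handled outside this tensor, so only the clean
# content tokens are quantized with a tight scale.
err_prefixed = mse(acts, quantize_int8(acts))

print(f"MSE, outlier in tensor : {err_direct:.5f}")
print(f"MSE, outlier absorbed  : {err_prefixed:.5f}")
```

The gap is typically two to three orders of magnitude here, which mirrors why suppressing outliers in content tokens is the lever that makes 8-bit activation quantization viable.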
Problem

Research questions and friction points this paper is trying to address.

Mitigates outliers in vision encoders for post-training quantization
Enables quantization with minimal accuracy loss in vision models
Addresses outlier behavior differences between vision and language encoders
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prefix tokens mitigate outliers in vision encoders
Middle-layer prefixing handles vision-specific outlier behavior
Token deletion enhances quantization accuracy in encoders
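The two innovations above can be sketched as hooks in an encoder's forward pass: registers are prepended at an intermediate layer (middle-layer prefixing) and dropped again before the final layers (token deletion), so the content-token sequence length is unchanged at the output. The toy single-head attention block, the layer indices, and the register count below are all illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attn_layer(x, w_qkv, w_out):
    """Toy single-head self-attention block with a residual
    connection (no norms or MLP, for brevity)."""
    q, k, v = (x @ w for w in w_qkv)
    scores = softmax(q @ k.T / np.sqrt(x.shape[1]))
    return x + scores @ v @ w_out

rng = np.random.default_rng(0)
d, n_layers = 32, 6
weights = [([rng.normal(0, 0.1, (d, d)) for _ in range(3)],
            rng.normal(0, 0.1, (d, d))) for _ in range(n_layers)]

# Hypothetical schedule: prefix registers before layer 2, delete
# them before layer 5 (indices chosen for illustration only).
PREFIX_AT, DELETE_AT, N_REG = 2, 5, 2
registers = rng.normal(0, 0.1, (N_REG, d))  # register token states

def encode(tokens):
    x, n = tokens, tokens.shape[0]
    for i, (w_qkv, w_out) in enumerate(weights):
        if i == PREFIX_AT:          # middle-layer prefixing
            x = np.vstack([registers, x])
        if i == DELETE_AT:          # token deletion
            x = x[-n:]
        x = attn_layer(x, w_qkv, w_out)
    return x[-n:]                   # only content tokens survive

out = encode(rng.normal(0, 1, (16, d)))
print(out.shape)  # content-token count is unchanged by the schedule
```

Because the registers exist only between the two scheduled layers, they can soak up outlier activity where it emerges without altering the model's interface or adding trainable parameters.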