🤖 AI Summary
Extreme low-bit quantization severely degrades fine-grained vision-language alignment and inter-region relational structures in open-vocabulary object detection, causing significant performance drops. To address this, the work proposes curriculum relational quantization-aware training (CR-QAT), a framework that integrates curriculum quantization-aware training (CQAT) with text-anchored relational knowledge distillation (TRKD). By quantizing the model progressively and constructing text-anchored multi-dimensional pairwise similarity matrices, CR-QAT jointly optimizes model compression and semantic alignment. On the LVIS and COCO zero-shot benchmarks under 4-bit quantization, the method achieves relative average precision (AP) improvements of up to 38.9% and 40.9%, respectively, substantially outperforming existing quantization approaches.
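The curriculum idea can be illustrated with a minimal NumPy sketch: a uniform fake-quantizer is applied to only the first `stage` blocks of a toy network, so quantization error is introduced group by group rather than all at once. All names, shapes, and the toy linear-ReLU blocks are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fake_quantize(w, n_bits=4):
    """Uniform symmetric fake quantization: round to an integer grid, then
    dequantize back to floats (standard QAT-style simulated quantization)."""
    qmax = 2 ** (n_bits - 1) - 1                      # e.g. 7 for signed 4-bit
    max_abs = np.abs(w).max()
    scale = max_abs / qmax if max_abs > 0 else 1.0
    return np.clip(np.round(w / scale), -qmax - 1, qmax) * scale

def curriculum_forward(blocks, x, stage):
    """Quantize only the first `stage` blocks; later blocks stay full
    precision, isolating quantization error to the already-converted part."""
    out = x
    for i, w in enumerate(blocks):
        w_eff = fake_quantize(w) if i < stage else w  # progressive schedule
        out = np.maximum(out @ w_eff, 0.0)            # toy linear + ReLU block
    return out
```

Training would advance `stage` from 0 (fully full-precision) to `len(blocks)` (fully quantized) over the course of optimization; how the real method partitions the detector and schedules the stages is specified in the paper, not here.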
📝 Abstract
Open-vocabulary object detection (OVOD) enables novel category detection via vision-language alignment, but massive model sizes hinder deployment on resource-constrained devices. While quantization offers practical compression, we reveal that naive extreme low-bit (e.g., 4-bit) quantization severely degrades fine-grained vision-language alignment and distorts inter-region relational structures. To address this, we propose curriculum relational quantization-aware training (CR-QAT), an integrated framework combining stage-by-stage optimization with relational knowledge distillation. Within CR-QAT, curriculum QAT (CQAT) mitigates error accumulation by partitioning the model for progressive quantization, ensuring stable optimization via error isolation. Concurrently, text-anchored relational KD (TRKD) is applied to task-relevant modules. By constructing text-anchored pairwise similarity matrices, TRKD comprehensively transfers the teacher's multi-dimensional relational knowledge. Experiments on LVIS and COCO zero-shot benchmarks demonstrate that CR-QAT consistently outperforms existing QAT baselines under aggressive low-bit settings, achieving relative AP improvements of up to 38.9% and 40.9%, respectively.
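The text-anchored relational distillation can be sketched as follows: each region embedding is first described by its cosine-similarity profile over the text (class-name) embeddings, and pairwise relations between regions are then computed in that text-anchored space; the distillation loss matches the student's relation matrix to the teacher's. This is a minimal NumPy sketch of the general idea only; the function names, the MSE objective, and the single similarity dimension are assumptions, not the paper's exact formulation.

```python
import numpy as np

def l2norm(x, axis=-1, eps=1e-8):
    """Row-wise L2 normalization for cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def text_anchored_relations(region_emb, text_emb):
    """Region-text cosine similarities (R x T) give each region a text-anchored
    profile; the pairwise matrix (R x R) relates regions in that space."""
    profiles = l2norm(region_emb) @ l2norm(text_emb).T
    return l2norm(profiles) @ l2norm(profiles).T

def trkd_loss(student_regions, teacher_regions, text_emb):
    """Match the student's text-anchored relation matrix to the teacher's
    (MSE here; the actual objective may differ)."""
    rs = text_anchored_relations(student_regions, text_emb)
    rt = text_anchored_relations(teacher_regions, text_emb)
    return float(np.mean((rs - rt) ** 2))
```

Anchoring the relations on text embeddings, rather than on raw visual features, ties the distilled structure directly to the vision-language alignment that low-bit quantization is shown to degrade.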