CR-QAT: Curriculum Relational Quantization-Aware Training for Open-Vocabulary Object Detection

📅 2026-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Extreme low-bit quantization severely degrades fine-grained vision-language alignment and regional relational structures in open-vocabulary object detection, leading to significant performance drops. To address this, this work proposes a Curriculum-based Relation-aware Quantization-Aware Training framework (CR-QAT), which integrates Curriculum Quantization-Aware Training (CQAT) with Text-anchored Relation Knowledge Distillation (TRKD). By progressively applying quantization and constructing a text-guided multi-dimensional pairwise similarity matrix, CR-QAT jointly optimizes model compression and semantic alignment. Evaluated on the LVIS and COCO zero-shot benchmarks, the proposed method achieves relative average precision (AP) improvements of up to 38.9% and 40.9%, respectively, under 4-bit quantization, substantially outperforming existing quantization approaches.
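The curriculum idea described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the model is treated as a list of hypothetical weight groups (`backbone`, `neck`, `head` are placeholder names), and each curriculum stage fake-quantizes one more group while the rest stay full-precision, so quantization error is introduced progressively rather than all at once.

```python
import numpy as np

def fake_quantize(w, bits=4):
    """Uniform symmetric fake quantization: quantize weights, then dequantize."""
    qmax = 2 ** (bits - 1) - 1                     # e.g. 7 levels per sign at 4-bit
    scale = np.abs(w).max() / qmax + 1e-12
    return np.round(w / scale).clip(-qmax, qmax) * scale

# Hypothetical model: named weight groups forming the curriculum stages.
model = {
    "backbone": np.random.randn(8, 8),
    "neck":     np.random.randn(8, 8),
    "head":     np.random.randn(8, 8),
}

# Curriculum: each stage quantizes one additional group; already-quantized
# groups remain quantized, isolating their error while later groups adapt.
curriculum = ["backbone", "neck", "head"]
for stage, _ in enumerate(curriculum, start=1):
    for name in curriculum[:stage]:
        model[name] = fake_quantize(model[name], bits=4)
    # ... in real QAT, one or more fine-tuning epochs would run here per stage ...
```

In the actual method the per-stage fine-tuning (omitted here) is what lets the remaining full-precision modules compensate for the error introduced so far.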

📝 Abstract
Open-vocabulary object detection (OVOD) enables novel category detection via vision-language alignment, but massive model sizes hinder deployment on resource-constrained devices. While quantization offers practical compression, we reveal that naive extreme low-bit (e.g., 4-bit) quantization severely degrades fine-grained vision-language alignment and distorts inter-region relational structures. To address this, we propose curriculum relational quantization-aware training (CR-QAT), an integrated framework combining stage-by-stage optimization with relational knowledge distillation. Within CR-QAT, curriculum QAT (CQAT) mitigates error accumulation by partitioning the model for progressive quantization, ensuring stable optimization via error isolation. Concurrently, text-centric relational KD (TRKD) is applied to task-relevant modules. By constructing text-anchored pairwise similarity matrices, TRKD comprehensively transfers the teacher's multi-dimensional relational knowledge. Experiments on LVIS and COCO zero-shot benchmarks demonstrate that CR-QAT consistently outperforms existing QAT baselines under aggressive low-bit settings, achieving relative AP improvements of up to 38.9% and 40.9%, respectively.
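The text-anchored relational distillation described in the abstract can be sketched as follows. This is a hedged illustration under assumed shapes and names (`region_feats`, `text_embeds`, and the Frobenius-style matching loss are my placeholders; the paper's exact similarity construction and loss may differ): region features from teacher and student are projected onto shared text embeddings, and the student is penalized for distorting the pairwise relational structure those text anchors induce.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Normalize rows to unit length for cosine similarity."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-12)

def text_anchored_similarity(region_feats, text_embeds):
    """Region-text cosine similarity matrix (rows: regions, cols: prompts)."""
    return l2_normalize(region_feats) @ l2_normalize(text_embeds).T

def trkd_loss(teacher_regions, student_regions, text_embeds):
    """Match the teacher's text-anchored pairwise relational structure.

    Region-to-region relations are induced through the text anchors, so the
    loss targets relational alignment rather than raw feature values.
    """
    s_t = text_anchored_similarity(teacher_regions, text_embeds)
    s_s = text_anchored_similarity(student_regions, text_embeds)
    r_t = s_t @ s_t.T          # teacher pairwise relations over regions
    r_s = s_s @ s_s.T          # student pairwise relations over regions
    return float(np.mean((r_t - r_s) ** 2))
```

Because the loss compares relation matrices rather than individual features, a student that preserves inter-region structure incurs zero penalty even if its absolute feature values differ from the teacher's by a rotation.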
Problem

Research questions and friction points this paper is trying to address.

open-vocabulary object detection
low-bit quantization
vision-language alignment
relational structure distortion
model compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Curriculum Quantization-Aware Training
Relational Knowledge Distillation
Open-Vocabulary Object Detection
Low-Bit Quantization
Vision-Language Alignment