Quantization Robustness to Input Degradations for Object Detection

📅 2025-08-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the robustness of YOLO-family object detectors under post-training quantization (PTQ) when exposed to realistic input degradations—including noise, blur, low contrast, and JPEG compression. We systematically evaluate performance degradation across FP32, FP16, dynamic UINT8, and static INT8 quantization formats. To enhance stability under distortion, we propose a degradation-aware calibration strategy: synthetic degraded images are incorporated during the static INT8 calibration phase. Experiments are conducted on COCO using TensorRT/ONNX deployment, with mAP50–95 as the primary metric. Results show that static INT8 yields 1.5–3.3× inference acceleration but incurs a 3–7% mAP drop on clean data. Degradation-aware calibration does not universally improve robustness; however, it significantly enhances stability for larger models under specific degradation types—revealing nontrivial interactions between model scale and degradation characteristics.
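The degradation types studied here (noise, blur, low contrast, JPEG compression) can be simulated with simple array operations. A minimal NumPy sketch for three of them; parameter values (`sigma`, `factor`, `k`) are illustrative, not the paper's experimental settings:

```python
import numpy as np

def add_gaussian_noise(img, sigma=10.0, rng=None):
    """Additive Gaussian noise on a uint8 HxWxC image."""
    rng = rng or np.random.default_rng(0)
    noisy = img.astype(np.float32) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def reduce_contrast(img, factor=0.5):
    """Shrink pixel deviations from the global mean toward gray."""
    mean = img.astype(np.float32).mean()
    out = mean + factor * (img.astype(np.float32) - mean)
    return np.clip(out, 0, 255).astype(np.uint8)

def box_blur(img, k=3):
    """Naive k x k box blur, applied per channel with edge padding."""
    pad = k // 2
    padded = np.pad(img.astype(np.float32),
                    ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros(img.shape, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return np.clip(out / (k * k), 0, 255).astype(np.uint8)
```

In practice JPEG compression would be applied through an image codec (e.g. re-encoding at a low quality setting) rather than array math, but the same clip-to-uint8 pattern applies.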

📝 Abstract
Post-training quantization (PTQ) is crucial for deploying efficient object detection models, like YOLO, on resource-constrained devices. However, the impact of reduced precision on model robustness to real-world input degradations such as noise, blur, and compression artifacts is a significant concern. This paper presents a comprehensive empirical study evaluating the robustness of YOLO models (nano to extra-large scales) across multiple precision formats: FP32, FP16 (TensorRT), Dynamic UINT8 (ONNX), and Static INT8 (TensorRT). We introduce and evaluate a degradation-aware calibration strategy for Static INT8 PTQ, where the TensorRT calibration process is exposed to a mix of clean and synthetically degraded images. Models were benchmarked on the COCO dataset under seven distinct degradation conditions (including various types and levels of noise, blur, low contrast, and JPEG compression) and a mixed-degradation scenario. Results indicate that while Static INT8 TensorRT engines offer substantial speedups (~1.5-3.3x) with a moderate accuracy drop (~3-7% mAP50-95) on clean data, the proposed degradation-aware calibration did not yield consistent, broad improvements in robustness over standard clean-data calibration across most models and degradations. A notable exception was observed for larger model scales under specific noise conditions, suggesting model capacity may influence the efficacy of this calibration approach. These findings highlight the challenges in enhancing PTQ robustness and provide insights for deploying quantized detectors in uncontrolled environments. All code and evaluation tables are available at https://github.com/AllanK24/QRID.
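The core of the degradation-aware calibration strategy is simply changing what the INT8 calibrator sees: a mix of clean and synthetically degraded images instead of clean images only. A minimal sketch of assembling such a calibration set; the function name, the 50/50 split, and the random pairing of image and degradation are illustrative assumptions, not the authors' exact recipe:

```python
import random

def build_calibration_set(clean_images, degrade_fns, degraded_fraction=0.5, seed=0):
    """Return a calibration set mixing clean and degraded images.

    clean_images: iterable of images (any representation the degrade_fns accept)
    degrade_fns: list of callables, each mapping an image to a degraded copy
    degraded_fraction: probability that a given image is degraded
    """
    rng = random.Random(seed)
    calib = []
    for img in clean_images:
        if rng.random() < degraded_fraction:
            # Pick one degradation at random for this image.
            calib.append(rng.choice(degrade_fns)(img))
        else:
            calib.append(img)
    return calib
```

The resulting list would then be fed to the deployment toolkit's standard calibrator (in TensorRT's Python API, static INT8 calibration is typically driven by a subclass of `trt.IInt8EntropyCalibrator2` that yields these batches), so the engine-building pipeline itself is unchanged.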
Problem

Research questions and friction points this paper is trying to address.

Evaluating quantization robustness to input degradations like noise and blur
Assessing impact of reduced precision on YOLO models across scales
Testing degradation-aware calibration strategy for improved model robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Degradation-aware calibration strategy for PTQ
Evaluated multiple precision formats on YOLO
Exposed TensorRT calibration to degraded images