MCAQ-YOLO: Morphological Complexity-Aware Quantization for Efficient Object Detection with Curriculum Learning

📅 2025-11-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the accuracy loss that uniform quantization incurs on visual data with heterogeneous spatial complexity, this paper proposes a morphology-aware adaptive quantization framework. It introduces five morphological metrics (fractal dimension, texture entropy, gradient variance, edge density, and contour complexity) to quantify local structural complexity, and combines them with a curriculum learning strategy to enable dynamic bit-width allocation and progressive quantization training. Evaluated on a safety equipment detection task, the method achieves 85.6% mAP@0.5 at an average bit-width of 4.2 bits, outperforming uniform 4-bit quantization by 3.5 percentage points while attaining a 7.6× model compression ratio; inference latency increases by only 1.8 ms per image. The framework improves quantization accuracy, computational efficiency, and robustness without compromising practical deployability.
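To make the five metrics concrete, here is a minimal NumPy sketch of three of them (texture entropy, gradient variance, edge density) on a grayscale patch with intensities in [0, 1]. The function names, bin count, and edge threshold are illustrative choices, not the paper's implementation:

```python
import numpy as np

def texture_entropy(patch: np.ndarray, bins: int = 32) -> float:
    """Shannon entropy (bits) of the patch's intensity histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def gradient_variance(patch: np.ndarray) -> float:
    """Variance of the gradient magnitude, a roughness proxy."""
    gy, gx = np.gradient(patch.astype(np.float64))
    return float(np.hypot(gx, gy).var())

def edge_density(patch: np.ndarray, thresh: float = 0.1) -> float:
    """Fraction of pixels whose gradient magnitude exceeds a threshold."""
    gy, gx = np.gradient(patch.astype(np.float64))
    return float((np.hypot(gx, gy) > thresh).mean())
```

A flat patch scores zero on all three, while a heavily textured patch scores high; an adaptive allocator can then spend more bits where the scores are high.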

📝 Abstract
Most neural network quantization methods apply uniform bit precision across spatial regions, ignoring the heterogeneous structural and textural complexity of visual data. This paper introduces MCAQ-YOLO, a morphological complexity-aware quantization framework for object detection. The framework employs five morphological metrics (fractal dimension, texture entropy, gradient variance, edge density, and contour complexity) to characterize local visual morphology and guide spatially adaptive bit allocation. By correlating these metrics with quantization sensitivity, MCAQ-YOLO dynamically adjusts bit precision according to spatial complexity. In addition, a curriculum-based quantization-aware training scheme progressively increases quantization difficulty to stabilize optimization and accelerate convergence. Experimental results demonstrate a strong correlation between morphological complexity and quantization sensitivity and show that MCAQ-YOLO achieves superior detection accuracy and convergence efficiency compared with uniform quantization. On a safety equipment dataset, MCAQ-YOLO attains 85.6% mAP@0.5 with an average of 4.2 bits and a 7.6× compression ratio, yielding 3.5 percentage points higher mAP than uniform 4-bit quantization while introducing only 1.8 ms of additional runtime overhead per image. Cross-dataset validation on COCO and Pascal VOC further confirms consistent performance gains, indicating that morphology-driven spatial quantization can enhance efficiency and robustness for computationally constrained, safety-critical visual recognition tasks.
Problem

Research questions and friction points this paper is trying to address.

Uniform quantization ignores varying visual complexity across spatial regions
Existing methods lack spatial adaptation to the morphological characteristics of the data
Quantization sensitivity has not been systematically linked to local structural complexity metrics
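The link between complexity and quantization sensitivity can be illustrated with a toy experiment: uniformly quantize patches of varying structure and correlate the resulting error against a simple complexity proxy. The uniform quantizer and the gradient-variance proxy below are illustrative stand-ins, not the paper's procedure:

```python
import numpy as np

def uniform_quantize(x: np.ndarray, bits: int) -> np.ndarray:
    """Affine uniform quantization of x to 2**bits levels (illustrative)."""
    levels = 2 ** bits - 1
    lo, hi = float(x.min()), float(x.max())
    if hi == lo:
        return x.copy()
    q = np.round((x - lo) / (hi - lo) * levels)
    return q / levels * (hi - lo) + lo

def quant_sensitivity(patch: np.ndarray, bits: int = 4) -> float:
    """MSE introduced by quantizing the patch at the given bit-width."""
    return float(((patch - uniform_quantize(patch, bits)) ** 2).mean())

def complexity_proxy(patch: np.ndarray) -> float:
    """Gradient-magnitude variance as a stand-in complexity score."""
    gy, gx = np.gradient(patch.astype(np.float64))
    return float(np.hypot(gx, gy).var())

# Toy check: scale one random patch to vary its structure, then
# correlate complexity with 4-bit quantization error.
rng = np.random.default_rng(0)
base = rng.random((8, 8))
patches = [base * s for s in np.linspace(0.1, 1.0, 10)]
comp = [complexity_proxy(p) for p in patches]
sens = [quant_sensitivity(p) for p in patches]
r = float(np.corrcoef(comp, sens)[0, 1])
```

In this synthetic setting both quantities scale with the patch's dynamic range, so the correlation is essentially perfect; the paper's claim is that a comparable (if weaker) relationship holds for real feature statistics.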
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses morphological metrics to guide adaptive bit allocation
Employs curriculum learning to progressively increase quantization difficulty
Dynamically adjusts bit precision based on spatial complexity
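A minimal sketch of how these ideas could fit together, assuming a quantile-based allocator (the 30/50/20 split is chosen so the mean lands at the paper's reported 4.2 bits) and a linear precision curriculum; neither is the authors' exact scheme:

```python
import numpy as np

def allocate_bits(scores, bit_choices=(2, 4, 8), fracs=(0.3, 0.5, 0.2)):
    """Rank regions by complexity score: the lowest-complexity 30% get
    2 bits, the middle 50% get 4, the top 20% get 8, so the average
    bit-width is 0.3*2 + 0.5*4 + 0.2*8 = 4.2 (hypothetical split)."""
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(scores)          # ascending complexity
    bits = np.empty(len(scores), dtype=int)
    bounds = np.cumsum([int(round(f * len(scores))) for f in fracs])
    bounds[-1] = len(scores)            # absorb rounding in the top bucket
    start = 0
    for b, end in zip(bit_choices, bounds):
        bits[order[start:end]] = b
        start = end
    return bits

def curriculum_bits(epoch: int, total_epochs: int,
                    start_bits: int = 8, end_bits: int = 4) -> int:
    """Linearly anneal precision from easy (high bits) to hard (low bits),
    i.e. progressively increase quantization difficulty during training."""
    t = min(epoch / max(total_epochs - 1, 1), 1.0)
    return int(round(start_bits + t * (end_bits - start_bits)))
```

During quantization-aware training one would call `curriculum_bits` each epoch to set the global difficulty, then `allocate_bits` per image to distribute precision across regions.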
Yoonjae Seo
Department of Architectural Engineering, Sejong University, Seoul, South Korea
E. Elbasani
Department of Architectural Engineering, Sejong University, Seoul, South Korea
Jaehong Lee
Professor of Deep Learning Architecture Research Center, Sejong University
Computational Mechanics