AHCPTQ: Accurate and Hardware-Compatible Post-Training Quantization for Segment Anything Model

📅 2025-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the severe accuracy degradation in post-training quantization (PTQ) of the Segment Anything Model (SAM), caused by the heavy-tailed, skewed distribution of post-GELU activations and large inter-channel variance in linear-layer activations, this paper proposes AHCPTQ, a hardware-friendly, high-accuracy PTQ method. First, it introduces Hybrid Log-Uniform Quantization (HLUQ), a quantization scheme explicitly designed for heavy-tailed activation distributions. Second, it proposes Channel-Aware Grouping (CAG), a clustering strategy that groups channels with similar statistical distributions so they can share quantization parameters, balancing accuracy against hardware efficiency. Under the W4A4 configuration on SAM-L, AHCPTQ achieves 36.6% instance-segmentation mAP, only 1.2% below full precision, while an FPGA implementation delivers a 7.89× speedup and an 8.64× energy-efficiency improvement over the floating-point baseline, significantly advancing SAM's edge deployment.

📝 Abstract
The Segment Anything Model (SAM) has demonstrated strong versatility across various visual tasks. However, its large storage requirements and high computational cost pose challenges for practical deployment. Post-training quantization (PTQ) has emerged as an effective strategy for efficient deployment, but we identify two key challenges in SAM that hinder the effectiveness of existing PTQ methods: the heavy-tailed and skewed distribution of post-GELU activations, and significant inter-channel variation in linear projection activations. To address these challenges, we propose AHCPTQ, an accurate and hardware-efficient PTQ method for SAM. AHCPTQ introduces hardware-compatible Hybrid Log-Uniform Quantization (HLUQ) to manage post-GELU activations, employing log2 quantization for dense small values and uniform quantization for sparse large values to enhance quantization resolution. Additionally, AHCPTQ incorporates Channel-Aware Grouping (CAG) to mitigate inter-channel variation by progressively clustering activation channels with similar distributions, enabling them to share quantization parameters and improving hardware efficiency. The combination of HLUQ and CAG not only enhances quantization effectiveness but also ensures compatibility with efficient hardware execution. For instance, under the W4A4 configuration on the SAM-L model, AHCPTQ achieves 36.6% mAP on instance segmentation with the DINO detector, while achieving a 7.89x speedup and 8.64x energy efficiency over its floating-point counterpart in FPGA implementation.
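The abstract describes HLUQ as applying log2 quantization to the dense small post-GELU values and uniform quantization to the sparse large ones. The sketch below illustrates that hybrid idea in NumPy; the `breakpoint`, level-budget split, and rounding details are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def hluq_quantize(x, breakpoint, n_bits=4, log_levels=8):
    """Illustrative sketch of Hybrid Log-Uniform Quantization (HLUQ).

    Values with magnitude below `breakpoint` (dense, near zero) are
    quantized on a log2 grid, concentrating resolution where most
    post-GELU activations lie; larger sparse values fall on a uniform
    grid. All parameter names here are hypothetical.
    """
    x = np.asarray(x, dtype=np.float64)
    out = np.empty_like(x)
    eps = 1e-12

    small = np.abs(x) < breakpoint
    # Log2 region: round the exponent of |x| relative to the breakpoint.
    exp = np.round(np.log2(np.abs(x[small]) / breakpoint + eps))
    exp = np.clip(exp, -(log_levels - 1), 0)
    out[small] = np.sign(x[small]) * breakpoint * np.exp2(exp)

    # Uniform region: spend the remaining levels between breakpoint and max.
    large = ~small
    if large.any():
        hi = np.abs(x[large]).max()
        n_uni = max(2 ** n_bits - log_levels, 1)
        step = max((hi - breakpoint) / n_uni, eps)
        q = np.round((np.abs(x[large]) - breakpoint) / step)
        out[large] = np.sign(x[large]) * (breakpoint + q * step)
    return out
```

Splitting the level budget this way is what gives the log region its fine resolution near zero without sacrificing range for the sparse tail.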
Problem

Research questions and friction points this paper is trying to address.

Addresses large storage and computational costs of SAM.
Overcomes heavy-tailed, skewed post-GELU activation distributions.
Reduces inter-channel variation in linear projection activations.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid Log-Uniform Quantization for GELU activations
Channel-Aware Grouping reduces inter-channel variation
Hardware-efficient PTQ enhances speed and energy efficiency
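CAG is described as progressively clustering activation channels with similar distributions so each group shares one set of quantization parameters. A minimal sketch of that idea, assuming a simple per-channel statistic (max absolute value from calibration data) and plain 1-D k-means as the clustering rule — the paper's actual criterion and procedure may differ:

```python
import numpy as np

def channel_aware_grouping(acts, n_groups=8, n_iter=10):
    """Illustrative sketch of Channel-Aware Grouping (CAG).

    acts: (tokens, channels) calibration activations.
    Channels are clustered by their observed range with 1-D k-means;
    each resulting group shares a single quantization scale, which is
    cheaper in hardware than one scale per channel.
    """
    stat = np.abs(acts).max(axis=0)          # per-channel range statistic
    # Initialize centroids evenly across the observed range.
    cent = np.linspace(stat.min(), stat.max(), n_groups)
    for _ in range(n_iter):
        assign = np.argmin(np.abs(stat[:, None] - cent[None, :]), axis=1)
        for g in range(n_groups):
            if np.any(assign == g):
                cent[g] = stat[assign == g].mean()
    # One shared scale per group (covers the group's largest channel).
    scales = np.array([stat[assign == g].max() if np.any(assign == g)
                       else cent[g] for g in range(n_groups)])
    return assign, scales
```

Sharing one scale per group keeps the dequantization datapath simple while still absorbing most of the inter-channel variance that breaks a single per-tensor scale.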