HierarchicalPrune: Position-Aware Compression for Large-Scale Diffusion Models

📅 2025-08-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the inference inefficiency of large-scale text-to-image diffusion models (8–11B parameters) on resource-constrained devices, this paper proposes a functional-hierarchy-based compression framework. The method leverages an empirically discovered semantic–textural functional hierarchy within diffusion modules, enabling hierarchical position pruning, position-aware weight preservation, and block-level sensitivity-guided knowledge distillation; it further integrates INT4 weight quantization. Experiments demonstrate that the approach reduces the model's memory footprint by 77.5%–80.4% (from 15.8 GB to 3.2 GB) and inference latency by 27.9%–38.0%, while incurring only marginal degradation in generation quality: GenEval and HPSv2 scores drop by just 2.6% and 7.0%, respectively. Generated images remain nearly indistinguishable from the original model's output and substantially surpass those of existing compression techniques in visual fidelity.


📝 Abstract
State-of-the-art text-to-image diffusion models (DMs) achieve remarkable quality, yet their massive parameter scale (8–11B) poses significant challenges for inference on resource-constrained devices. In this paper, we present HierarchicalPrune, a novel compression framework grounded in a key observation: DM blocks exhibit distinct functional hierarchies, where early blocks establish semantic structures while later blocks handle texture refinements. HierarchicalPrune synergistically combines three techniques: (1) Hierarchical Position Pruning, which identifies and removes less essential later blocks based on position hierarchy; (2) Positional Weight Preservation, which systematically protects the early portions of the model that are essential for semantic structural integrity; and (3) Sensitivity-Guided Distillation, which adjusts knowledge-transfer intensity based on our discovery of block-wise sensitivity variations. As a result, our framework brings billion-scale diffusion models into a range more suitable for on-device inference while preserving the quality of the output images. Specifically, when combined with INT4 weight quantization, HierarchicalPrune achieves a 77.5–80.4% memory-footprint reduction (e.g., from 15.8 GB to 3.2 GB) and a 27.9–38.0% latency reduction, measured on server- and consumer-grade GPUs, with drops of only 2.6% in GenEval score and 7.0% in HPSv2 score relative to the original model. Last but not least, our comprehensive user study with 85 participants demonstrates that HierarchicalPrune maintains perceptual quality comparable to the original model while significantly outperforming prior works.
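The position-aware pruning idea described in the abstract can be sketched as a toy example. Everything below is a hypothetical illustration, not the paper's implementation: blocks are modeled as simple callables, "importance" is measured as the output change when a block is skipped, and the function names and the `protect_frac` parameter are assumptions. The key pattern is that early (semantic) blocks are protected outright, and only the least important later (texture) blocks are candidates for removal.

```python
# Hypothetical sketch of hierarchical position pruning (toy illustration,
# not the paper's code). Blocks are plain callables applied in sequence.

def block_importance(blocks, x, idx):
    """Toy importance metric: how much the final output changes
    when block idx is skipped (absolute difference)."""
    def run(skip=None):
        h = x
        for i, f in enumerate(blocks):
            if i != skip:
                h = f(h)
        return h
    return abs(run() - run(skip=idx))

def hierarchical_prune(blocks, x, protect_frac=0.5, n_prune=1):
    """Protect the first protect_frac of blocks (semantic hierarchy),
    then drop the n_prune least important blocks among the rest."""
    cutoff = int(len(blocks) * protect_frac)
    scores = {i: block_importance(blocks, x, i) for i in range(cutoff, len(blocks))}
    to_drop = sorted(scores, key=scores.get)[:n_prune]
    return [f for i, f in enumerate(blocks) if i not in to_drop]
```

In a real DM the blocks would be transformer sub-modules and the importance metric would be computed over calibration prompts, but the position-aware protection of early blocks is the same structural idea.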
Problem

Research questions and friction points this paper is trying to address.

Reducing parameter scale of large diffusion models
Preserving output quality during model compression
Enabling on-device inference for resource-constrained devices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical Position Pruning removes less essential blocks
Positional Weight Preservation protects early semantic structures
Sensitivity-Guided Distillation adjusts knowledge-transfer intensity
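The third innovation can be illustrated with a small sketch. The loss form and every name below are assumptions chosen for illustration, not the paper's actual objective: per-block feature-matching losses are weighted in proportion to each block's measured sensitivity, so the blocks found to be more fragile receive stronger knowledge transfer from the teacher.

```python
# Hypothetical sketch of sensitivity-guided distillation (assumed loss
# form, not taken from the paper). Features are plain lists of floats.

def sensitivity_weights(sensitivities):
    """Normalize per-block sensitivity scores into distillation weights."""
    total = sum(sensitivities)
    return [s / total for s in sensitivities]

def distillation_loss(teacher_feats, student_feats, sensitivities):
    """Sensitivity-weighted sum of per-block mean-squared feature errors:
    blocks with higher sensitivity contribute more to the loss, pushing
    the student to match the teacher more closely at those positions."""
    weights = sensitivity_weights(sensitivities)
    loss = 0.0
    for w, t, s in zip(weights, teacher_feats, student_feats):
        mse = sum((ti - si) ** 2 for ti, si in zip(t, s)) / len(t)
        loss += w * mse
    return loss
```

A uniform weighting would recover ordinary feature distillation; the sensitivity term is what makes the knowledge-transfer intensity block-dependent.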