HEART-VIT: Hessian-Guided Efficient Dynamic Attention and Token Pruning in Vision Transformer

📅 2025-12-23
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Vision Transformers (ViTs) face dual challenges in edge deployment: quadratic attention overhead and computational redundancy, while existing pruning methods struggle to balance accuracy, generalization, and hardware adaptability. This paper proposes the first unified, second-order curvature-driven, input-adaptive framework for joint dynamic pruning of attention heads and image tokens. We introduce a novel Hessian-guided dual-granularity sensitivity analysis, revealing complementary rolesβ€”token pruning dominates FLOPs reduction, whereas head pruning refines residual redundancy. Leveraging efficient Hessian-vector products to estimate curvature-weighted sensitivity, we integrate dynamic gating with loss-budget constraints for end-to-end joint pruning. On ImageNet, our method achieves up to 49.4% FLOPs reduction, 36% latency decrease, and 46% throughput improvement; after fine-tuning, accuracy matches or exceeds the baseline (e.g., +4.7% recovery under 40% token pruning). Significant energy-efficiency gains are validated on edge platforms including NVIDIA AGX Orin.
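The summary mentions dynamic gating under explicit loss-budget constraints, but the paper's gating mechanism is not detailed here. A minimal sketch of budget-constrained selection, assuming additive per-component loss estimates; all component names and scores below are hypothetical illustrations, not values from the paper:

```python
def select_prunable(saliencies, loss_budget):
    """Greedily prune components (heads or tokens) in order of increasing
    estimated loss increase, stopping once the cumulative increase would
    exceed the budget. `saliencies` maps component id -> estimated delta-loss."""
    pruned, spent = [], 0.0
    for comp, dl in sorted(saliencies.items(), key=lambda kv: kv[1]):
        cost = max(dl, 0.0)  # components that reduce loss cost nothing
        if spent + cost > loss_budget:
            break
        pruned.append(comp)
        spent += cost
    return pruned

# Hypothetical head/token ids with illustrative saliency scores.
scores = {"head_3": 0.01, "tok_17": 0.002, "head_0": 0.5, "tok_4": 0.03}
pruned = select_prunable(scores, loss_budget=0.05)
# -> ['tok_17', 'head_3', 'tok_4']; head_0 is kept because pruning it
#    would push the cumulative loss increase past the budget.
```

An input-adaptive scheme would recompute `saliencies` per input batch rather than once offline, which is what distinguishes dynamic gating from static pruning.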

πŸ“ Abstract
Vision Transformers (ViTs) deliver state-of-the-art accuracy, but their quadratic attention cost and redundant computations severely hinder deployment on latency- and resource-constrained platforms. Existing pruning approaches treat either tokens or heads in isolation, relying on heuristics or first-order signals, which often sacrifice accuracy or fail to generalize across inputs. We introduce HEART-ViT, a Hessian-guided efficient dynamic attention and token pruning framework for vision transformers, which to the best of our knowledge is the first unified, second-order, input-adaptive framework for ViT optimization. HEART-ViT estimates curvature-weighted sensitivities of both tokens and attention heads using efficient Hessian-vector products, enabling principled pruning decisions under explicit loss budgets. This dual-view sensitivity reveals an important structural insight: token pruning dominates computational savings, while head pruning provides fine-grained redundancy removal, and their combination achieves a superior trade-off. On ImageNet-100 and ImageNet-1K with ViT-B/16 and DeiT-B/16, HEART-ViT achieves up to 49.4 percent FLOPs reduction, 36 percent lower latency, and 46 percent higher throughput, while consistently matching or even surpassing baseline accuracy after fine-tuning, for example a 4.7 percent accuracy recovery at 40 percent token pruning. Beyond theoretical benchmarks, we deploy HEART-ViT on different edge devices such as AGX Orin, demonstrating that our reductions in FLOPs and latency translate directly into real-world gains in inference speed and energy efficiency. HEART-ViT bridges the gap between theory and practice, delivering the first unified, curvature-driven pruning framework that is both accuracy-preserving and edge-efficient.
Problem

Research questions and friction points this paper is trying to address.

Vision Transformers' quadratic attention cost and redundant computations hinder deployment on edge platforms.
Existing methods prune tokens or heads in isolation, lacking a unified, input-adaptive, second-order criterion.
Computational savings must be achieved while maintaining or improving accuracy on resource-constrained platforms.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hessian-guided dynamic token and attention head pruning
Unified second-order framework for ViT optimization
Efficient Hessian-vector products for curvature-weighted sensitivity estimation
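The core second-order machinery can be illustrated without the paper's implementation details. Below is a minimal sketch of a Hessian-vector product and a curvature-weighted pruning saliency, using a toy quadratic loss with a known Hessian for verification; the finite-difference HVP and the specific saliency form (second-order Taylor estimate of the loss change from zeroing a component, in the spirit of OBD/OBS) are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def hvp(grad_fn, w, v, eps=1e-5):
    """Hessian-vector product without forming H explicitly:
    Hv ~ (grad(w + eps*v) - grad(w - eps*v)) / (2*eps)."""
    return (grad_fn(w + eps * v) - grad_fn(w - eps * v)) / (2 * eps)

def saliency(grad_fn, w, i):
    """Second-order Taylor estimate of the loss increase from zeroing
    component i: dL ~ g.v + 0.5 * v^T H v, with perturbation v = -w_i e_i."""
    v = np.zeros_like(w)
    v[i] = -w[i]
    g = grad_fn(w)
    return float(g @ v + 0.5 * (v @ hvp(grad_fn, w, v)))

# Toy quadratic loss L(w) = 0.5 * w^T A w, so grad = A w and Hessian = A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda w: A @ w
w = np.array([1.0, -2.0])
scores = [saliency(grad, w, i) for i in range(len(w))]
# For a quadratic loss the Taylor estimate is exact:
# scores[0] ~ 0.5 (pruning w[0] raises the loss),
# scores[1] ~ -2.0 (pruning w[1] actually lowers it).
```

In practice `grad_fn` would be autodiff over the network loss (e.g. double backprop), and `v` would zero an entire attention head or token embedding rather than a single scalar; only the HVP, not the full Hessian, is ever materialized.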