HiPP-Prune: Hierarchical Preference-Conditioned Structured Pruning for Vision-Language Models

📅 2026-03-06
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work addresses the challenge that compressing vision-language models (VLMs) often degrades task performance and exacerbates visual hallucinations, making it difficult to preserve cross-modal alignment while enabling efficient deployment. To this end, the paper proposes a hierarchical preference-conditioned structured pruning framework that formulates pruning as a conditional resource allocation problem under multiple objectives. The approach generates a global pruning blueprint guided by a user-specified preference vector and integrates visual sensitivity signals to protect critical multimodal fusion layers. It further introduces a hierarchical decision mechanism coupled with multi-objective reinforcement learning based on Group Relative Policy Optimization (GRPO), enabling a queryable trade-off between utility and robustness. Experiments on LLaVA demonstrate that the method produces diverse, non-dominated pruning solutions that significantly outperform baselines at the same sparsity level, effectively balancing performance, hallucination suppression, and compression ratio.
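The "preference-conditioned" part of the summary can be sketched as a simple scalarization: each candidate pruning plan is scored on several normalized objectives, and a user-supplied preference vector weights them into one return. This is a minimal illustrative sketch; the objective names, normalization, and linear weighting are assumptions, not the paper's exact formulation.

```python
import numpy as np

def preference_return(metrics, preference):
    """Scalarize multi-objective pruning metrics with a user preference vector.

    metrics:    per-plan scores normalized to [0, 1], higher is better
                (e.g. task utility, hallucination robustness such as POPE
                accuracy, and compression ratio) -- illustrative names.
    preference: non-negative weights over the same objectives; normalized
                to the simplex so returns stay comparable across preferences.
    """
    keys = ("utility", "robustness", "compression")
    w = np.array([preference[k] for k in keys], dtype=float)
    w = w / w.sum()  # project the preference vector onto the simplex
    m = np.array([metrics[k] for k in keys], dtype=float)
    return float(w @ m)  # weighted sum = scalarized multi-objective return

# Querying a different trade-off only changes the preference vector,
# not the scoring pipeline.
plan = {"utility": 0.8, "robustness": 0.6, "compression": 0.5}
balanced = preference_return(plan, {"utility": 1.0, "robustness": 1.0, "compression": 0.0})
compress = preference_return(plan, {"utility": 0.0, "robustness": 0.0, "compression": 1.0})
```

Under a linear scalarization like this, sweeping the preference vector traces out different non-dominated plans, which matches the summary's claim of a queryable utility/robustness trade-off.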

📝 Abstract
Pruning vision-language models (VLMs) for efficient deployment is challenging because compression can affect not only task utility but also visual grounding, often amplifying object hallucinations even at the same sparsity level. We present HiPP-Prune, a hierarchical preference-conditioned structured pruning framework that treats pruning as conditional resource allocation under multiple objectives. HiPP-Prune makes plan-level decisions: a single policy invocation outputs a global pruning blueprint by factorizing decisions into an overall sparsity budget and a layer-wise allocation, enabling queryable trade-offs via a user-specified preference vector. To account for VLM-specific failure modes, our policy state integrates a visual sensitivity signal derived from attention flow between vision tokens and language hidden states, discouraging over-pruning of vision-critical layers that facilitate cross-modal fusion. We optimize pruning plans with plan-level Group Relative Policy Optimization (GRPO) under a multi-objective return that combines task utility, hallucination robustness (POPE), compression, and a synaptic-flow-inspired stability proxy to reduce unproductive exploration in high-sparsity regimes. Experiments on LLaVA with POPE and ScienceQA demonstrate that HiPP-Prune discovers diverse non-dominated pruning plans and provides controllable robustness–utility trade-offs under matched sparsity budgets.
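Two mechanics from the abstract can be sketched concretely: (1) GRPO's group-relative advantage, which standardizes each sampled plan's scalar return against its group rather than learning a value function; and (2) the factorized blueprint, where per-layer logits are turned into sparsity ratios that respect a global budget and are damped on visually sensitive layers. Both sketches are hypothetical forms under stated assumptions, not the paper's exact rules.

```python
import numpy as np

def grpo_advantages(returns, eps=1e-8):
    """Group-relative advantages: standardize the scalar returns of the G
    pruning plans sampled for the same preference, as in GRPO."""
    r = np.asarray(returns, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

def allocate_layer_sparsity(layer_logits, global_budget, sensitivity):
    """Factorized plan: map per-layer policy logits to sparsity ratios that
    (i) average to the global sparsity budget and (ii) are damped on layers
    with high visual sensitivity (hypothetical subtractive damping)."""
    # Subtracting sensitivity lowers the pruning share of vision-critical layers.
    logits = np.asarray(layer_logits, dtype=float) - np.asarray(sensitivity, dtype=float)
    w = np.exp(logits - logits.max())
    w = w / w.sum()                        # softmax over layers
    ratios = w * global_budget * len(w)    # rescale so mean(ratios) == budget
    return np.clip(ratios, 0.0, 0.95)      # never fully prune a layer

# A group of 4 candidate plans scored by the multi-objective return:
adv = grpo_advantages([0.62, 0.70, 0.55, 0.68])   # above-mean plans get positive advantage
ratios = allocate_layer_sparsity(
    layer_logits=[0.0, 0.0, 0.0],
    global_budget=0.5,
    sensitivity=[2.0, 0.0, 0.0],           # layer 0 is vision-critical
)
```

With equal logits, the sensitive layer receives a markedly lower sparsity ratio while the layer-wise mean still tracks the global budget (up to clipping), which is the behavior the abstract describes for protecting cross-modal fusion layers.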
Problem

Research questions and friction points this paper is trying to address.

vision-language models
structured pruning
object hallucination
visual grounding
model compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

structured pruning
vision-language models
preference-conditioned optimization
hallucination robustness
multi-objective resource allocation