Hierarchy-Aware Fine-Tuning of Vision-Language Models

📅 2025-12-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational overhead and cross-level prediction inconsistency of Vision-Language Models (VLMs) in hierarchical classification, this paper proposes a lightweight, structure-aware fine-tuning framework. Operating in the VLM's shared embedding space, it introduces, for the first time, joint optimization of Tree-Path KL Divergence (TP-KL) and Hierarchy-Sibling Smoothed Cross-Entropy (HiSCE) within the LoRA adaptation paradigm, enabling fine-grained semantic alignment along class hierarchies and consistent discrimination among sibling classes. The method adds fewer than 0.5% additional parameters and achieves significant improvements in full-path accuracy across multiple benchmarks while reducing tree-structural inconsistency errors. Key contributions include: (1) the first dual-objective co-optimization mechanism designed specifically for hierarchical classification; and (2) the first unified modeling of hierarchy-aware semantic alignment and discriminative consistency within the LoRA framework.

📝 Abstract
Vision-Language Models (VLMs) learn powerful multimodal representations through large-scale image-text pretraining, but adapting them to hierarchical classification is underexplored. Standard approaches treat labels as flat categories and require full fine-tuning, which is expensive and produces inconsistent predictions across taxonomy levels. We propose an efficient hierarchy-aware fine-tuning framework that updates a few parameters while enforcing structural consistency. We combine two objectives: Tree-Path KL Divergence (TP-KL) aligns predictions along the ground-truth label path for vertical coherence, while Hierarchy-Sibling Smoothed Cross-Entropy (HiSCE) encourages consistent predictions among sibling classes. Both losses work in the VLM's shared embedding space and integrate with lightweight LoRA adaptation. Experiments across multiple benchmarks show consistent improvements in Full-Path Accuracy and Tree-based Inconsistency Error with minimal parameter overhead. Our approach provides an efficient strategy for adapting VLMs to structured taxonomies.
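The abstract describes the two objectives only at a high level. A minimal sketch of how such losses could be realized, assuming per-level class logits, a child-to-parent index map, and a precomputed sibling set for each target; all function and argument names here are illustrative, and the paper's exact formulation may differ:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a flat list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def hisce_loss(logits, target, siblings, eps=0.1):
    """Hierarchy-sibling smoothed cross-entropy (hypothetical form):
    like label smoothing, but the smoothing mass eps is spread only over
    the target's siblings in the taxonomy, not over all classes."""
    probs = softmax(logits)
    loss = -(1.0 - eps) * math.log(probs[target])
    for s in siblings:
        loss -= (eps / len(siblings)) * math.log(probs[s])
    return loss

def tp_kl_loss(parent_logits, child_logits, child_to_parent):
    """Tree-path KL divergence (hypothetical form): KL between the
    parent-level distribution and the child-level distribution aggregated
    up the tree, encouraging vertically consistent predictions."""
    p_parent = softmax(parent_logits)
    p_child = softmax(child_logits)
    agg = [0.0] * len(parent_logits)
    for child, parent in enumerate(child_to_parent):
        agg[parent] += p_child[child]
    return sum(q * math.log(q / max(a, 1e-12))
               for q, a in zip(p_parent, agg) if q > 0)
```

A training step would then minimize a weighted sum such as `hisce_loss(...) + lam * tp_kl_loss(...)`, with gradients flowing only into the LoRA adapter parameters; the weighting scheme here is an assumption, not taken from the paper.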
Problem

Research questions and friction points this paper is trying to address.

Adapt VLMs to hierarchical classification efficiently
Enforce structural consistency across taxonomy levels
Improve accuracy and reduce inconsistency with minimal parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchy-aware fine-tuning with structural consistency
Combines Tree-Path KL Divergence (TP-KL) and Hierarchy-Sibling Smoothed Cross-Entropy (HiSCE)
Integrates lightweight LoRA adaptation for efficient parameter updates
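The sub-0.5% parameter-overhead claim follows directly from LoRA's low-rank factorization, which freezes each adapted weight W and learns only a rank-r update BA. A back-of-the-envelope sketch; the dimensions and rank below are illustrative, not the paper's actual configuration:

```python
def lora_overhead(d_in, d_out, rank):
    """Ratio of extra LoRA parameters (A: d_in x rank, B: rank x d_out)
    to the frozen d_in x d_out weight matrix they adapt."""
    return rank * (d_in + d_out) / (d_in * d_out)

# For a square 768x768 projection (typical of ViT-B/CLIP-style blocks),
# rank 2 adds about 0.52% per adapted matrix; adapting only a subset of
# the model's matrices keeps the total overhead well below that.
print(f"{lora_overhead(768, 768, 2):.4f}")
```

For square matrices this simplifies to 2r/d, which is why small ranks on wide layers keep the overhead tiny.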
🔎 Similar Papers

Jiayu Li (University of Washington)
Rajesh Gangireddy (Intel)
Samet Akcay (AI Research Engineer at Intel; Computer Vision, Machine Learning, Anomaly Detection)
Wei Cheng (University of Washington)
Juhua Hu (University of Washington)