Hierarchy-Consistent Learning and Adaptive Loss Balancing for Hierarchical Multi-Label Classification

📅 2025-08-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenges of preserving structural consistency in hierarchical multi-label classification (HMC) and mitigating loss-weight imbalance in multi-task learning (MTL), this paper proposes HCAL, a novel HMC classifier. First, it introduces a semantically consistent hierarchical feature aggregation mechanism to strengthen semantic correlations between parent and child labels. Second, it incorporates an adaptive task weighting strategy to dynamically alleviate optimization bias caused by “one-dominant, many-weak” label distributions. Third, it proposes prototype perturbation augmentation—injecting controlled noise into label prototypes—to enhance decision-boundary robustness, and defines Hierarchical Violation Rate (HVR) as a quantitative metric for structural consistency. Experiments on three benchmark datasets demonstrate that HCAL consistently outperforms state-of-the-art baselines, achieving average improvements of 2.1–4.7 percentage points in classification accuracy and reducing HVR by 18.3–32.6%, thereby validating its superior generalizability, structural consistency, and robustness.

Technology Category

Application Category

📝 Abstract
Hierarchical Multi-Label Classification (HMC) faces critical challenges in maintaining structural consistency and balancing loss weighting in Multi-Task Learning (MTL). In order to address these issues, we propose a classifier called HCAL based on MTL integrated with prototype contrastive learning and adaptive task-weighting mechanisms. The most significant advantage of our classifier is semantic consistency including both prototype with explicitly modeling label and feature aggregation from child classes to parent classes. The other important advantage is an adaptive loss-weighting mechanism that dynamically allocates optimization resources by monitoring task-specific convergence rates. It effectively resolves the "one-strong-many-weak" optimization bias inherent in traditional MTL approaches. To further enhance robustness, a prototype perturbation mechanism is formulated by injecting controlled noise into prototype to expand decision boundaries. Additionally, we formalize a quantitative metric called Hierarchical Violation Rate (HVR) as to evaluate hierarchical consistency and generalization. Extensive experiments across three datasets demonstrate both the higher classification accuracy and reduced hierarchical violation rate of the proposed classifier over baseline models.
Problem

Research questions and friction points this paper is trying to address.

Maintaining structural consistency in hierarchical multi-label classification
Balancing loss weighting in multi-task learning frameworks
Resolving one-strong-many-weak optimization bias in MTL
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prototype contrastive learning for semantic consistency
Adaptive task-weighting mechanism for loss balancing
Prototype perturbation to expand decision boundaries
🔎 Similar Papers
No similar papers found.