🤖 AI Summary
Hierarchical text classification (HTC) faces challenges of data scarcity and high model complexity. Method: This paper investigates few-shot and zero-shot HTC using black-box large language models (LLMs) accessed via API calls, bypassing the need for extensive labeled data and costly training. We propose and comparatively evaluate three prompting strategies: (i) direct leaf-label prediction, (ii) direct hierarchical-label prediction, and (iii) top-down multi-step prediction. Contribution/Results: Experiments show that direct hierarchical-label prediction performs best on deep category hierarchies, substantially outperforming a conventional supervised baseline. Few-shot prompting consistently improves accuracy over zero-shot, demonstrating the strong generalization capability of LLMs for HTC. Moreover, we observe that as hierarchy depth increases, accuracy gains come with higher API invocation costs, revealing a critical accuracy-cost trade-off. These findings provide empirical grounding and practical guidance for the efficient deployment of LLM-based HTC systems.
📝 Abstract
Hierarchical Text Classification (HTC) aims to assign texts to labels organized in a structured hierarchy, but it faces challenges of data scarcity and model complexity. This study explores the feasibility of using black-box Large Language Models (LLMs) accessed via APIs for HTC, as an alternative to traditional machine learning methods that require extensive labeled data and computational resources. We evaluate three prompting strategies -- Direct Leaf Label Prediction (DL), Direct Hierarchical Label Prediction (DH), and Top-down Multi-step Hierarchical Label Prediction (TMH) -- in both zero-shot and few-shot settings, comparing their accuracy and cost-effectiveness. Experiments on two datasets show that the few-shot setting consistently improves classification accuracy over the zero-shot setting. While a traditional machine learning model achieves high accuracy on a dataset with a shallow hierarchy, the LLMs, especially with the DH strategy, tend to outperform it on a dataset with a deeper hierarchy. However, API costs for the DH strategy increase significantly because deeper label hierarchies require more input tokens. These results emphasize the trade-off between accuracy gains and the computational cost of each prompting strategy. They also highlight the potential of black-box LLMs for HTC while underscoring the need to choose a prompting strategy that balances performance and cost.
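To make the three strategies concrete, the sketch below illustrates how each one could be phrased as prompts against a generic LLM API. The label hierarchy, function names, and the `llm` callable are illustrative assumptions, not details from the paper; the point is the structural difference: DL and DH each need a single call, while TMH issues one call per hierarchy level.

```python
# Hypothetical two-level hierarchy for illustration (not from the paper).
HIERARCHY = {
    "Science": ["Physics", "Biology"],
    "Sports": ["Soccer", "Tennis"],
}

def dl_prompt(text):
    """DL: one call asking directly for the leaf label."""
    leaves = [leaf for kids in HIERARCHY.values() for leaf in kids]
    return f"Classify the text into one leaf label from {leaves}.\nText: {text}\nLabel:"

def dh_prompt(text):
    """DH: one call asking for the full root-to-leaf path.

    Listing every path puts the whole hierarchy in the prompt, which is
    why input tokens (and API cost) grow with hierarchy depth.
    """
    paths = [f"{top} > {leaf}" for top, kids in HIERARCHY.items() for leaf in kids]
    return f"Classify the text into one full label path from {paths}.\nText: {text}\nPath:"

def tmh_classify(text, llm):
    """TMH: one API call per level, narrowing the options top-down."""
    path = []
    options = list(HIERARCHY)          # start at the top level
    while options:
        prompt = f"Choose one label from {options}.\nText: {text}\nLabel:"
        choice = llm(prompt)           # assumed callable wrapping the API
        path.append(choice)
        options = HIERARCHY.get(choice, [])  # descend; empty at a leaf
    return path
```

For example, with a stub `llm` that picks "Sports" at the top level and "Soccer" below it, `tmh_classify("Great match last night", llm)` returns `["Sports", "Soccer"]` after two calls, whereas `dh_prompt` would resolve the same path in one larger prompt.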