🤖 AI Summary
This study addresses severe class imbalance in scientific multi-label text classification caused by the extreme long-tail distribution of domain-specific terms. To this end, the authors construct AstroConcepts, a large-scale corpus of 21,702 astrophysics paper abstracts annotated with 2,367 concepts from the Unified Astronomy Thesaurus. They propose a frequency-stratified evaluation strategy and systematically compare traditional machine learning models, neural networks, and vocabulary-constrained large language models (LLMs). The findings show that vocabulary-constrained LLMs achieve performance comparable to specialized models without domain-specific fine-tuning, while domain adaptation yields the largest gains on rare terms. The proposed evaluation framework uncovers robustness disparities across frequency strata that aggregate scores hide, establishing strong baselines for tackling extreme imbalance in scientific text classification.
📝 Abstract
Scientific multi-label text classification suffers from extreme class imbalance, where specialized terminology exhibits severe power-law distributions that challenge standard classification approaches. Existing scientific corpora lack comprehensive controlled vocabularies, focusing instead on broad categories and limiting systematic study of extreme imbalance. We introduce AstroConcepts, a corpus of English abstracts from 21,702 published astrophysics papers, labeled with 2,367 concepts from the Unified Astronomy Thesaurus. The corpus exhibits severe label imbalance, with 76% of concepts having fewer than 50 training examples. By releasing this resource, we enable systematic study of extreme class imbalance in scientific domains and establish strong baselines across traditional, neural, and vocabulary-constrained LLM methods. Our evaluation reveals three key patterns that provide new insights into scientific text classification. First, vocabulary-constrained LLMs achieve competitive performance relative to domain-adapted models in astrophysics classification, suggesting potential for parameter-efficient approaches. Second, domain adaptation yields relatively larger improvements for rare, specialized terminology, although absolute performance remains limited across all methods. Third, we propose frequency-stratified evaluation to reveal performance patterns that are hidden by aggregate scores, thereby making robustness assessment central to scientific multi-label evaluation. These results offer actionable insights for scientific NLP and establish benchmarks for research on extreme imbalance.
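The frequency-stratified evaluation described above can be sketched in a few lines: bucket each label by how often it occurs in the training set (e.g. rare vs. frequent strata), then score predictions within each bucket separately, so that performance on rare concepts is not drowned out by head labels. The function name `stratified_f1`, the bin boundaries, and the data layout (one label set per document) below are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter

def stratified_f1(y_true, y_pred, train_labels,
                  bins=((1, 49), (50, 499), (500, float("inf")))):
    """Micro-F1 computed separately per label-frequency stratum.

    y_true / y_pred: list of sets of concept labels, one set per document.
    train_labels:    flat list of every label occurrence in the training
                     set, used to count each concept's training frequency.
    bins:            (lo, hi) frequency ranges defining the strata
                     (hypothetical boundaries, chosen for illustration).
    """
    freq = Counter(train_labels)
    results = {}
    for lo, hi in bins:
        # Restrict both gold and predicted labels to concepts in this stratum.
        stratum = {c for c, n in freq.items() if lo <= n <= hi}
        tp = fp = fn = 0
        for t, p in zip(y_true, y_pred):
            t, p = t & stratum, p & stratum
            tp += len(t & p)
            fp += len(p - t)
            fn += len(t - p)
        denom = 2 * tp + fp + fn
        results[(lo, hi)] = 2 * tp / denom if denom else 0.0
    return results
```

A model can score highly on the frequent stratum while failing entirely on the rare one; reporting the strata side by side exposes exactly the robustness gap that a single aggregate F1 would hide.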