🤖 AI Summary
This study systematically evaluates supervised single-label and multi-label text classification methods across mainstream benchmark datasets. Using a unified experimental framework with strict hyperparameter control, particularly of the learning rate, we comparatively analyze representative models: bag-of-n-grams (trigram SVM), sequential (LSTM), graph-based (GCN), hierarchical, and pre-trained models (BERT, RoBERTa, T5). Key findings: (1) discriminative pre-trained models (e.g., the BERT family) consistently achieve state-of-the-art performance; (2) a carefully tuned, lightweight trigram SVM surpasses several recent models on multiple datasets; (3) generative models (e.g., T5) are competitive in few-shot settings but lag noticeably in overall accuracy; (4) we present empirical evidence that published comparative studies routinely neglect basic hyperparameter optimization, undermining the robustness of their conclusions. All code, configurations, and evaluation scripts are publicly released.
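The trigram-SVM baseline in finding (2) can be sketched with scikit-learn. The toy corpus, labels, and hyperparameters below are illustrative assumptions, not the paper's actual datasets or tuned settings:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical toy corpus; the paper's benchmark datasets are not reproduced here.
texts = [
    "the film was wonderful and moving",
    "a dull and tedious movie",
    "great acting and a wonderful story",
    "tedious plot, dull characters",
]
labels = ["pos", "neg", "pos", "neg"]

# TF-IDF over word uni-, bi-, and trigrams, fed to a linear SVM.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3), sublinear_tf=True),
    LinearSVC(C=1.0),
)
clf.fit(texts, labels)
print(clf.predict(["a wonderful film"]))
```

In practice, the regularization strength `C` and the n-gram range would themselves be tuned on a validation split, which is precisely the kind of basic optimization the study argues is often skipped.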
📝 Abstract
We analyze various methods for single-label and multi-label text classification across well-known datasets, categorizing them into bag-of-words, sequence-based, graph-based, and hierarchical approaches. Despite the surge of newer methods such as graph-based models, encoder-only pre-trained language models, notably BERT, remain state of the art. However, recent findings suggest that simpler models such as logistic regression and trigram-based SVMs can outperform newer techniques. While decoder-only generative language models show promise in learning with limited data, they lag behind encoder-only models in overall performance. We emphasize the advantage of discriminative language models such as BERT over generative models for supervised tasks. Additionally, we highlight a lack of robustness in the literature's method comparisons, particularly concerning basic hyperparameter optimizations such as tuning the learning rate when fine-tuning encoder-only language models.

Data availability: The source code is available at https://github.com/drndr/multilabel-text-clf. All datasets used in our experiments are publicly available except the NYT dataset.
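The sensitivity to learning rate mentioned above can be illustrated with a minimal grid search. The toy 1-D logistic-regression task and the grid values below are assumptions for illustration only, not the paper's fine-tuning setup:

```python
import math

def val_accuracy(lr, epochs=200):
    """Train a toy 1-D logistic regression with plain SGD at the given
    learning rate; return accuracy on the same grid of points.
    (Illustrative only -- stands in for a proper train/validation split.)"""
    data = [(x / 10.0, 1 if x > 0 else 0) for x in range(-10, 11) if x != 0]
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid prediction
            w -= lr * (p - y) * x                     # log-loss gradient step
            b -= lr * (p - y)
    correct = sum(((w * x + b) > 0) == (y == 1) for x, y in data)
    return correct / len(data)

# A small learning-rate grid, in the spirit of the narrow sweeps
# (e.g., a few values around 2e-5..5e-5) used when fine-tuning
# BERT-style encoders; values here are arbitrary for the toy task.
grid = [1e-3, 1e-2, 1e-1]
best_lr = max(grid, key=val_accuracy)
```

Even this trivial loop shows that the chosen learning rate changes the outcome; skipping such a sweep when comparing fine-tuned language models makes the comparison itself unreliable.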