QUAD-LLM-MLTC: Large Language Models Ensemble Learning for Healthcare Text Multi-Label Classification

📅 2025-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of scarce annotated data and complex semantics in medical text multi-label classification (MLTC), this paper proposes a zero-shot, four-model collaborative pipeline: BERT extracts key tokens, PEGASUS performs prompt-driven text augmentation, GPT-4o generates labels zero-shot, and BART supplies topic-assignment probabilities, enhanced by prompt engineering and a meta-classifier ensemble. Crucially, the framework requires no fine-tuning or labeled training data. Evaluated on three samples of annotated medical texts, it achieves an F1 score of 78.17% and a Micro-F1 of 80.16%, with low standard deviations of 0.025 and 0.011, respectively, outperforming traditional and single-model baselines. This work establishes a scalable, interpretable paradigm for low-resource medical NLP tasks.

📝 Abstract
The escalating volume of collected healthcare textual data presents a unique challenge for automated Multi-Label Text Classification (MLTC), primarily due to the scarcity of annotated texts for training and their nuanced nature. Traditional machine learning models often fail to fully capture the array of expressed topics. Large Language Models (LLMs), however, have demonstrated remarkable effectiveness across numerous Natural Language Processing (NLP) tasks in various domains, showing impressive computational efficiency and suitability for unsupervised learning through prompt engineering. Consequently, these LLMs promise effective MLTC of medical narratives. However, when dealing with various labels, different prompts can be relevant depending on the topic. To address these challenges, the proposed approach, QUAD-LLM-MLTC, leverages the strengths of four LLMs: GPT-4o, BERT, PEGASUS, and BART. QUAD-LLM-MLTC operates as a sequential pipeline in which BERT extracts key tokens, PEGASUS augments the textual data, GPT-4o classifies, and BART provides topic-assignment probabilities, yielding four classifications, all in a zero-shot setting. The outputs are then combined through ensemble learning and processed by a meta-classifier to produce the final MLTC result. The approach is evaluated on three samples of annotated texts and contrasted with traditional and single-model methods. The results show significant improvements across the majority of topics in classification F1 score and consistency (F1 and Micro-F1 scores of 78.17% and 80.16%, with standard deviations of 0.025 and 0.011, respectively). This research advances MLTC using LLMs and provides an efficient and scalable solution for rapidly categorizing healthcare-related text data without further training.
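The sequential pipeline described in the abstract can be sketched structurally as follows. This is a minimal sketch, not the authors' implementation: the stub functions stand in for the real model calls (BERT key-token extraction, PEGASUS augmentation, GPT-4o zero-shot classification, BART topic probabilities), the three-topic label set is invented for illustration, and a simple majority vote stands in for the paper's meta-classifier.

```python
TOPICS = ["medication", "diagnosis", "follow-up"]  # illustrative label set

def extract_key_tokens(text: str) -> list[str]:
    """Stub for BERT key-token extraction; here, naive keyword matching."""
    return [t for t in TOPICS if t in text.lower()]

def augment_text(text: str) -> str:
    """Stub for PEGASUS prompt-driven augmentation (paraphrase/summary)."""
    return text  # identity stand-in

def zero_shot_labels(text: str) -> set[str]:
    """Stub for GPT-4o zero-shot classification via prompting."""
    return {t for t in TOPICS if t in text.lower()}

def topic_probabilities(text: str) -> dict[str, float]:
    """Stub for BART topic-assignment probabilities."""
    return {t: (0.9 if t in text.lower() else 0.1) for t in TOPICS}

def quad_pipeline(text: str, threshold: float = 0.5) -> dict[str, set]:
    """Run the four stages and collect one label set per classifier."""
    tokens = extract_key_tokens(text)
    augmented = augment_text(text)
    return {
        "bert": set(tokens),
        "pegasus": zero_shot_labels(augmented),
        "gpt4o": zero_shot_labels(text),
        "bart": {t for t, p in topic_probabilities(text).items() if p >= threshold},
    }

def majority_vote(votes: dict[str, set], k: int = 2) -> set[str]:
    """Stand-in for the meta-classifier: keep labels at least k classifiers agree on."""
    return {t for t in TOPICS if sum(t in v for v in votes.values()) >= k}

votes = quad_pipeline("Patient needs medication review and follow-up.")
final_labels = majority_vote(votes)
```

In the real system each stub would call its respective model; the point of the sketch is the data flow: four independent zero-shot label sets feeding one combiner.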
Problem

Research questions and friction points this paper is trying to address.

Automated Multi-Label Text Classification in healthcare
Leveraging ensemble learning with four LLMs
Improving classification accuracy without additional training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ensembles four LLMs for MLTC
Sequential pipeline enhances classification
Meta-classifier combines model outputs
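One simple form such a meta-classifier could take is a per-topic logistic regression over the four base models' binary votes. The pure-Python sketch below, including the toy vote matrix and learning rate, is an assumption for illustration; the paper does not specify the meta-classifier's exact configuration here.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train_meta(X, y, lr=0.5, epochs=200):
    """Fit one weight per base model plus a bias via stochastic gradient descent."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Each row: binary votes from (BERT, PEGASUS-path, GPT-4o, BART) for one text,
# for a single topic; y holds the gold label for that topic (toy data).
X = [[1, 1, 1, 1], [0, 0, 1, 0], [1, 0, 1, 1], [0, 0, 0, 0], [0, 1, 0, 0]]
y = [1, 0, 1, 0, 0]

w, b = train_meta(X, y)

def predict(votes) -> bool:
    """Meta-classifier decision for one text's four votes on this topic."""
    return sigmoid(sum(wj * v for wj, v in zip(w, votes)) + b) >= 0.5
```

A learned combiner like this can weight the four classifiers unequally per topic, which is what distinguishes a meta-classifier from a plain majority vote.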
Hajar Sakai
Ph.D. in Industrial and Systems Engineering
Large Language Models · Text Classification · Time Series Forecasting
Sarah S. Lam
School of Systems Science and Industrial Engineering, State University of New York at Binghamton, Binghamton, NY, USA