🤖 AI Summary
Inappropriate use of language (IUL) -- outdated, exclusionary, or non-patient-centered terms -- in medical curricula undermines clinical training and patient care quality, yet manually screening large-scale educational content is prohibitively costly.
Method: This work presents the first systematic evaluation of small language models (SLMs) for IUL detection in medical education texts. We propose a two-stage hierarchical multi-label classification framework and incorporate unlabeled excerpts as negative examples to strengthen training. The approach combines fine-tuned SLMs with prompt-engineered large language models (LLMs), evaluating both in-context learning and binary/multi-label classification strategies.
Contribution/Results: Experiments demonstrate that SLMs outperform LLMs such as Llama-3; negative-sample augmentation improves the subcategory-specific classifiers' AUC by up to 25%; and the method achieves state-of-the-art detection performance under limited labeled data. This establishes a scalable, high-accuracy automated paradigm for linguistic standardization in medical education.
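The negative-sample augmentation can be sketched as a data-preparation step: unflagged excerpts from the same corpus are added to the labeled set as negatives before fine-tuning. This is a minimal illustration with hypothetical data and a made-up `neg_ratio` cap, not the paper's actual pipeline.

```python
def build_training_set(annotated, unflagged, neg_ratio=1.0):
    """Combine annotated (text, label) pairs with unlabeled excerpts
    treated as negative examples (label 0).

    The number of added negatives is capped at neg_ratio * len(annotated)
    to keep the class balance controllable (an assumed design choice).
    """
    n_neg = int(len(annotated) * neg_ratio)
    negatives = [(text, 0) for text in unflagged[:n_neg]]
    return annotated + negatives


# Toy stand-ins for flagged and unflagged curriculum excerpts:
annotated = [("an outdated diagnostic term", 1),
             ("exclusionary phrasing about patients", 1)]
unflagged = ["a neutral clinical description",
             "another benign excerpt",
             "a third unlabeled excerpt"]

train = build_training_set(annotated, unflagged, neg_ratio=1.0)
# train now holds 2 positive and 2 negative examples
```

The augmented set would then be passed to standard fine-tuning; the abstract reports that this step improves the subcategory-specific classifiers' AUC by up to 25%.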
📝 Abstract
The use of inappropriate language -- such as outdated, exclusionary, or non-patient-centered terms -- in medical instructional materials can significantly influence clinical training, patient interactions, and health outcomes. Despite their reputability, many materials developed over past decades contain examples now considered inappropriate by current medical standards. Given the volume of curricular content, manually identifying instances of inappropriate use of language (IUL) and its subcategories for systematic review is prohibitively costly and impractical. To address this challenge, we conduct a first-in-class evaluation of small language models (SLMs) fine-tuned on labeled data and pre-trained LLMs with in-context learning on a dataset containing approximately 500 documents and over 12,000 pages. For SLMs, we consider: (1) a general IUL classifier, (2) subcategory-specific binary classifiers, (3) a multilabel classifier, and (4) a two-stage hierarchical pipeline for general IUL detection followed by multilabel classification. For LLMs, we consider variations of prompts that include subcategory definitions and/or shots. We found that both Llama-3 8B and 70B, even with carefully curated shots, are largely outperformed by SLMs. While the multilabel classifier performs best on annotated data, supplementing training with unflagged excerpts as negative examples boosts the specific classifiers' AUC by up to 25%, making them the most effective models for mitigating harmful language in medical curricula.
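The two-stage hierarchical pipeline described above can be sketched as follows: a general binary detector first flags an excerpt as containing IUL, and only flagged excerpts are passed to the multilabel subcategory classifier. The toy keyword-based detectors below are hypothetical stand-ins for the fine-tuned SLMs; only the control flow reflects the described design.

```python
# Assumed subcategory names, taken from the terms listed in the abstract:
SUBCATEGORIES = ["outdated", "exclusionary", "non-patient-centered"]


def two_stage_classify(excerpt, general_detector, multilabel_classifier):
    """Stage 1: flag whether the excerpt contains any IUL.
    Stage 2: assign subcategory labels only to flagged excerpts."""
    if not general_detector(excerpt):
        return []  # not flagged: skip the more expensive multilabel stage
    return multilabel_classifier(excerpt)


# Toy stand-ins (keyword matching) for the two fine-tuned SLM stages:
def toy_detector(text):
    return any(cat in text for cat in SUBCATEGORIES)


def toy_multilabel(text):
    return [cat for cat in SUBCATEGORIES if cat in text]


print(two_stage_classify("an outdated, exclusionary term",
                         toy_detector, toy_multilabel))
# → ['outdated', 'exclusionary']
print(two_stage_classify("a neutral clinical sentence",
                         toy_detector, toy_multilabel))
# → []
```

Gating the multilabel stage behind the cheaper general detector keeps inference cost low on the large fraction of curriculum text that contains no flagged language.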