🤖 AI Summary
Addressing the low-resource challenge in medical literature entity extraction—characterized by domain specificity and severe scarcity of annotated data—this paper proposes a span-based approach integrating domain-adapted pretraining with few-shot learning. We build an end-to-end entity recognition framework upon PubMedBERT, replacing conventional CRF or Seq2Seq decoding with span-level modeling and efficient fine-tuning strategies. We present the first systematic empirical validation of PubMedBERT’s superiority for this task and demonstrate that span-based modeling significantly outperforms mainstream sequence-labeling paradigms. Experimental results show an F1 score of 88.8% under full supervision, and remarkably, achieve 79.1% F1 with only ten labeled examples (10-shot)—substantially surpassing traditional supervised baselines. This work establishes a reproducible, highly robust paradigm for low-resource biomedical information extraction.
📝 Abstract
This study proposes a medical entity extraction method based on Transformer to enhance the information extraction capability of medical literature. Considering the professionalism and complexity of medical texts, we compare the performance of different pre-trained language models (BERT, BioBERT, PubMedBERT, ClinicalBERT) in medical entity extraction tasks. Experimental results show that PubMedBERT achieves the best performance (F1-score = 88.8%), indicating that a language model pre-trained on biomedical literature is more effective in the medical domain. In addition, we analyze the impact of different entity extraction methods (CRF, Span-based, Seq2Seq) and find that the Span-based approach performs best in medical entity extraction tasks (F1-score = 88.6%). It demonstrates superior accuracy in identifying entity boundaries. In low-resource scenarios, we further explore the application of Few-shot Learning in medical entity extraction. Experimental results show that even with only 10-shot training samples, the model achieves an F1-score of 79.1%, verifying the effectiveness of Few-shot Learning under limited data conditions. This study confirms that the combination of pre-trained language models and Few-shot Learning can enhance the accuracy of medical entity extraction. Future research can integrate knowledge graphs and active learning strategies to improve the model's generalization and stability, providing a more effective solution for medical NLP research. Keywords- Natural Language Processing, medical named entity recognition, pre-trained language model, Few-shot Learning, information extraction, deep learning