🤖 AI Summary
Existing biomedical benchmarks largely neglect the most challenging PICO element, the Outcome (clinical outcomes), and lack high-quality, fine-grained annotated datasets. Method: We construct EvidenceOutcomes, the first large-scale clinical outcome annotation dataset, covering 500 PubMed abstracts plus a 140-abstract subset of the EBM-NLP corpus. Clinicians and NLP experts collaboratively develop and iteratively refine an annotation guideline that explicitly defines fine-grained types of clinically meaningful outcomes, achieving high inter-annotator agreement (Cohen's κ = 0.76). We then fine-tune PubMedBERT and evaluate it at both entity-level and token-level granularity. Contribution/Results: The model achieves entity-level F1 = 0.69 and token-level F1 = 0.76 on the EBM-NLP subset. The dataset is publicly released, establishing a new benchmark for automated extraction of clinically meaningful outcomes in evidence-based medicine.
📝 Abstract
The fundamental process of evidence extraction and synthesis in evidence-based medicine involves extracting PICO (Population, Intervention, Comparison, and Outcome) elements from biomedical literature. However, Outcomes, being the most complex elements, are often neglected or oversimplified in existing benchmarks. To address this issue, we present EvidenceOutcomes, a novel, large, annotated corpus of clinically meaningful outcomes extracted from biomedical literature. We first developed a robust annotation guideline for extracting clinically meaningful outcomes from text through iteration and discussion with clinicians and Natural Language Processing experts. Then, three independent annotators annotated the Results and Conclusions sections of a randomly selected sample of 500 PubMed abstracts and 140 PubMed abstracts from the existing EBM-NLP corpus. This process yielded EvidenceOutcomes, a corpus with high-quality annotations and an inter-rater agreement of 0.76. Additionally, our PubMedBERT model, fine-tuned on these 500 PubMed abstracts, achieved an F1-score of 0.69 at the entity level and 0.76 at the token level on the 140-abstract subset from the EBM-NLP corpus. EvidenceOutcomes can serve as a shared benchmark for developing and testing future machine learning algorithms that extract clinically meaningful outcomes from biomedical abstracts.
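The two reported scores reflect different matching granularities: entity-level F1 requires an exact span match, while token-level F1 gives partial credit for overlapping tokens. A minimal sketch of the distinction (illustrative only; the paper's exact matching and averaging rules may differ, and the span data below is hypothetical):

```python
# Illustrative sketch: entity-level vs. token-level F1 for span extraction.
# Spans are (start, end) token index pairs, end-exclusive. Not the authors' code.

def entity_f1(gold, pred):
    """F1 over spans, counting only exact (start, end) matches as true positives."""
    gold_set, pred_set = set(gold), set(pred)
    tp = len(gold_set & pred_set)
    if not gold_set or not pred_set:
        return 0.0
    p, r = tp / len(pred_set), tp / len(gold_set)
    return 2 * p * r / (p + r) if p + r else 0.0

def token_f1(gold, pred):
    """F1 over individual token indices covered by any annotated span."""
    g = {i for s, e in gold for i in range(s, e)}
    p = {i for s, e in pred for i in range(s, e)}
    tp = len(g & p)
    if not g or not p:
        return 0.0
    prec, rec = tp / len(p), tp / len(g)
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

# Toy example: one gold outcome span of three tokens; prediction misses the last one.
gold = [(2, 5)]   # e.g. "overall survival rate"
pred = [(2, 4)]   # e.g. "overall survival"
print(entity_f1(gold, pred))           # 0.0 — the exact span match fails
print(round(token_f1(gold, pred), 2))  # 0.8 — partial credit per overlapping token
```

This is why token-level F1 (0.76) can exceed entity-level F1 (0.69): near-miss boundary errors count as full misses at the entity level but only fractional losses at the token level.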