Benchmarking GPT-5 for biomedical natural language processing

📅 2025-08-28
🏛️ arXiv.org
📈 Citations: 3 (influential: 0)
🤖 AI Summary
This study presents the first systematic evaluation of GPT-5’s zero- to five-shot generalization across a comprehensive biomedical NLP (BioNLP) multitask benchmark. We assess performance on six core tasks—named entity recognition, relation extraction, document classification, question answering, summarization, and text simplification—spanning 12 standard datasets. A unified prompt template, fixed decoding parameters, and standardized batched inference ensure fair comparison against GPT-4o and GPT-4. Results show that GPT-5 achieves a macro-average F1 of 0.557 in the five-shot setting, significantly surpassing prior models. It attains 94.1% accuracy on MedQA—exceeding the best supervised model by over 50 percentage points—and establishes new state-of-the-art results on chemical NER (F1 = 0.886) and ChemProt relation extraction (F1 = 0.616). This work provides critical empirical evidence on large language models’ domain-specific reasoning capabilities and few-shot adaptability in biomedicine.
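The evaluation protocol summarized above (a unified prompt template filled with k in-context examples and fixed decoding parameters) can be sketched roughly as follows. The template wording, example, and parameter values are illustrative assumptions, not the paper's exact configuration.

```python
def build_prompt(instruction, examples, query):
    """Assemble a k-shot prompt: a task instruction, k worked
    input/output examples, then the query to answer (k=0 gives
    the zero-shot setting)."""
    parts = [instruction]
    for inp, out in examples:
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

# Hypothetical fixed decoding parameters, held constant across
# tasks and models so comparisons stay fair.
DECODING = {"temperature": 0.0, "max_tokens": 512}

# One-shot chemical NER prompt (illustrative example only).
prompt = build_prompt(
    "Extract all chemical entity mentions from the sentence.",
    [("Aspirin inhibits COX-1.", "Aspirin")],
    "Ibuprofen reduces prostaglandin synthesis.",
)
```

The same template and decoding settings would then be reused for every dataset, with only the instruction and in-context examples swapped per task.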

📝 Abstract
The rapid expansion of biomedical literature has heightened the need for scalable natural language processing (NLP) solutions. While GPT-4 substantially narrowed the gap with task-specific systems, especially in question answering, its performance across other domains remained uneven. We updated a standardized BioNLP benchmark to evaluate GPT-5 and GPT-4o under zero-, one-, and five-shot prompting across 12 datasets spanning six task families: named entity recognition, relation extraction, multi-label document classification, question answering, text summarization, and text simplification. Using fixed prompt templates, identical decoding parameters, and batch inference, we report primary metrics per dataset and include prior results for GPT-4, GPT-3.5, and LLaMA-2-13B for comparison. GPT-5 achieved the strongest overall benchmark performance, with macro-average scores rising to 0.557 under five-shot prompting versus 0.506 for GPT-4 and 0.508 for GPT-4o. On MedQA, GPT-5 reached 94.1% accuracy, exceeding the previous supervised state of the art by over fifty points, and attained parity with supervised systems on PubMedQA (0.734). In extraction tasks, GPT-5 delivered major gains in chemical NER (0.886 F1) and ChemProt relation extraction (0.616 F1), outperforming GPT-4 and GPT-4o, though summarization and disease NER still lagged behind domain-specific baselines. These results establish GPT-5 as a general-purpose model now offering deployment-ready performance for reasoning-oriented biomedical QA, while precision-critical extraction and evidence-dense summarization continue to favor fine-tuned or hybrid approaches. The benchmark delineates where simple prompting suffices and where retrieval-augmented or planning-based scaffolds are likely required, providing actionable guidance for BioNLP system design as frontier models advance.
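The headline numbers above are macro-averages: an unweighted mean of each dataset's primary metric, so small datasets count as much as large ones. A minimal sketch, using a subset of the per-dataset scores quoted in the abstract (the paper's 0.557 figure averages all 12 datasets, not just these four):

```python
def macro_average(scores):
    """Unweighted mean of per-dataset primary metrics, so every
    dataset contributes equally regardless of its size."""
    return sum(scores.values()) / len(scores)

# Illustrative subset of GPT-5's five-shot scores from the abstract.
subset = {"MedQA": 0.941, "PubMedQA": 0.734,
          "chemical NER": 0.886, "ChemProt RE": 0.616}
subset_avg = macro_average(subset)
```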
Problem

Research questions and friction points this paper is trying to address.

Evaluating GPT-5's performance on biomedical NLP tasks
Assessing model capabilities in biomedical reasoning and QA
Comparing cost-effectiveness and latency with previous models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluated GPT-5 using standardized prompts and decoding parameters
Assessed model performance across biomedical NLP tasks and QA datasets
Proposed tiered prompting strategy for cost-sensitive and complex scenarios
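The tiered prompting idea can be sketched as a confidence-gated router: answer with a cheap model first and escalate to a stronger, costlier model only when needed. The gating rule and model handles below are hypothetical stand-ins, not the paper's actual strategy.

```python
def tiered_answer(query, cheap_model, strong_model, is_confident):
    """Confidence-gated tiered prompting: try the cheap model,
    escalate to the stronger model only when the cheap answer
    fails the confidence check."""
    answer = cheap_model(query)
    if is_confident(answer):
        return answer, "cheap"
    return strong_model(query), "strong"

# Stub models and gate, for demonstration only.
cheap = lambda q: "unsure" if "rare" in q else "yes"
strong = lambda q: "yes (verified)"
confident = lambda a: a != "unsure"

easy = tiered_answer("Is aspirin an NSAID?", cheap, strong, confident)
hard = tiered_answer("Does this rare variant alter dosing?", cheap, strong, confident)
```

Routine queries stay on the cheap tier; only uncertain cases pay the latency and cost of the stronger model.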
Yu Hou
Division of Computational Health Sciences, University of Minnesota, Minneapolis, Minnesota, USA
Zaifu Zhan
PhD at University of Minnesota, MS at Tsinghua University
Natural language processing · Machine Learning · AI for Biomedicine · Large Language Models
Rui Zhang
Division of Computational Health Sciences, University of Minnesota, Minneapolis, Minnesota, USA