A Comparative Study of Decoding Strategies in Medical Text Generation

📅 2025-08-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically investigates the impact of decoding strategies on text generation quality in medical large language models (LLMs). We evaluate 11 decoding methods—including beam search, top-k, and nucleus sampling—across five open-ended medical tasks (translation, summarization, question answering, dialogue, and image captioning) using multiple medical- and general-purpose LLMs of varying scales. Performance is assessed via BLEU, ROUGE, BERTScore, and MAUVE. Key findings are: (1) Deterministic strategies (e.g., beam search) consistently outperform stochastic sampling, with top-k and top-η sampling yielding the poorest results; (2) Decoding strategy exerts greater influence on output quality than model architecture or scale, especially for medical LLMs; (3) Increasing model size does not inherently improve decoding robustness, challenging the “larger-is-more-stable” assumption; (4) MAUVE demonstrates superior sensitivity to decoding-induced variations and exhibits low correlation with other metrics, highlighting its unique utility for evaluating medical text generation.

📝 Abstract
Large Language Models (LLMs) rely on various decoding strategies to generate text, and these choices can significantly affect output quality. In healthcare, where accuracy is critical, the impact of decoding strategies remains underexplored. We investigate this effect in five open-ended medical tasks, including translation, summarization, question answering, dialogue, and image captioning, evaluating 11 decoding strategies with medically specialized and general-purpose LLMs of different sizes. Our results show that deterministic strategies generally outperform stochastic ones: beam search achieves the highest scores, while η and top-k sampling perform worst. Slower decoding methods tend to yield better quality. Larger models achieve higher scores overall but have longer inference times and are no more robust to decoding. Surprisingly, while medical LLMs outperform general ones in two of the five tasks, statistical analysis shows no overall performance advantage and reveals greater sensitivity to decoding choice. We further compare multiple evaluation metrics and find that correlations vary by task, with MAUVE showing weak agreement with BERTScore and ROUGE, as well as greater sensitivity to the decoding strategy. These results highlight the need for careful selection of decoding methods in medical applications, as their influence can sometimes exceed that of model choice.
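To make the stochastic strategies named in the abstract concrete, here is a minimal sketch (not from the paper) of how top-k and nucleus (top-p) sampling truncate the next-token distribution before sampling. The toy logits, vocabulary size, and thresholds are illustrative assumptions, not values used in the study.

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def top_k_filter(logits, k):
    """Keep only the k highest-scoring tokens; mask the rest to -inf."""
    out = np.full_like(logits, -np.inf)
    idx = np.argsort(logits)[-k:]
    out[idx] = logits[idx]
    return out

def top_p_filter(logits, p):
    """Nucleus sampling: keep the smallest set of top tokens whose
    cumulative probability reaches at least p; mask the rest."""
    order = np.argsort(logits)[::-1]            # most probable first
    cum = np.cumsum(softmax(logits)[order])
    cutoff = np.searchsorted(cum, p) + 1        # tokens needed to reach p
    out = np.full_like(logits, -np.inf)
    out[order[:cutoff]] = logits[order[:cutoff]]
    return out

# Toy next-token logits over a 6-token vocabulary (illustrative values).
logits = np.array([3.0, 2.5, 1.0, 0.2, -1.0, -2.0])

top_k_dist = softmax(top_k_filter(logits, k=2))
top_p_dist = softmax(top_p_filter(logits, p=0.9))
print(np.count_nonzero(top_k_dist))  # 2 tokens survive top-k
print(np.count_nonzero(top_p_dist))  # nucleus keeps just enough tokens to cover p
```

Deterministic strategies such as beam search instead expand the highest-scoring partial sequences at every step, which is why they trade inference speed for the more stable quality the paper reports.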
Problem

Research questions and friction points this paper is trying to address.

Evaluating decoding strategies' impact on medical text generation quality
Comparing deterministic and stochastic methods across five healthcare tasks
Assessing medical versus general LLMs' sensitivity to decoding choices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Beam search decoding for optimal performance
Deterministic strategies outperform stochastic ones
Larger models offer higher scores but slower inference
Oriana Presacan
AI Multimedia Lab, CAMPUS Research Institute, National University of Science and Technology Politehnica Bucharest, 060042 Bucharest, Romania
Alireza Nik
Department of Holistic Systems, SimulaMet, 0170 Oslo, Norway; Oslo Metropolitan University, 0176 Oslo, Norway
Vajira Thambawita
SimulaMet
GPGPU Parallel Computing, Embedded Systems, Machine Learning, Deep Learning
Bogdan Ionescu
National University of Science and Technology Politehnica Bucharest & Academy of Romanian Scientists
machine learning, information retrieval, multimedia
Michael Riegler
Cyber Security, Simula Research Laboratory, 0164 Oslo, Norway