MedS$^3$: Towards Medical Small Language Models with Self-Evolved Slow Thinking

📅 2025-01-21
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing medical language models suffer from excessively long reasoning chains, privacy sensitivity, and high deployment costs in real-world clinical settings; moreover, their heavy reliance on large-model distillation compromises both reliability and practicality. To address these challenges, we propose a lightweight medical language model tailored for clinical long-chain reasoning, introducing the novel β€œself-evolving slow-thinking” paradigm: it employs Monte Carlo Tree Search (MCTS) to generate verifiable reasoning chains and jointly optimizes policy and reward models, enabling privacy-preserving, deployable test-time reasoning enhancement. Crucially, the model adopts a compact parameter architecture without distillation from large foundation models. Evaluated on 11 medical benchmark datasets, it achieves an average +2-point improvement over existing open-source models; integrating the reward model further boosts performance by ~13 points, surpassing GPT-4o-mini.
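The "self-evolving slow-thinking" paradigm above hinges on scoring each reasoning step by Monte Carlo rollouts: a step is as valuable as the fraction of continuations from it that reach a verifiably correct answer. A minimal sketch of that idea, where `rollout_fn` and `verify_fn` are hypothetical stand-ins for the policy model and the answer checker (not the paper's actual API):

```python
import random

def step_value(prefix_steps, rollout_fn, verify_fn, n_rollouts=8):
    """Estimate the value of a partial reasoning chain as the fraction
    of Monte Carlo rollouts from this prefix that end in a verified
    correct answer. Verified steps can then supervise both the policy
    and the reward model."""
    wins = sum(verify_fn(rollout_fn(prefix_steps)) for _ in range(n_rollouts))
    return wins / n_rollouts

# Toy stand-ins (assumptions, not the paper's components): a "rollout"
# appends a guessed final answer, and verification compares it to gold.
random.seed(0)
gold = "B"
rollout = lambda prefix: prefix + [random.choice("AB")]
verify = lambda chain: chain[-1] == gold

v = step_value(["step 1: rule out option A"], rollout, verify, n_rollouts=100)
```

In the actual pipeline the rollouts would be MCTS expansions of a language model, but the value estimate keeps this same win-rate form.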

πŸ“ Abstract
Medical language models (MLMs) have become pivotal in advancing medical natural language processing. However, prior models that rely on pre-training or supervised fine-tuning often exhibit low data efficiency and limited practicality in real-world clinical applications. While OpenAI's O1 highlights test-time scaling in mathematics, attempts to replicate this approach in medicine typically distill responses from GPT-series models into open-source models, focusing primarily on multiple-choice tasks. This strategy, though straightforward, neglects critical concerns like data privacy and realistic deployment in clinical settings. In this work, we present a deployable, small-scale medical language model, MedS$^3$, designed for long-chain reasoning in clinical tasks using a self-evolution paradigm. Starting with a seed dataset of around 8,000 instances spanning five domains and 16 datasets, we prompt a base policy model to perform Monte Carlo Tree Search (MCTS) to construct verifiable reasoning chains. Each reasoning step is assigned an evolution rollout value, allowing verified trajectories to train the policy model and the reward model. During inference, the policy model generates multiple responses, and the reward model selects the one with the highest reward score. Experiments on eleven evaluation datasets demonstrate that MedS$^3$ outperforms prior open-source models by 2 points, with the addition of the reward model further boosting performance ($\sim$13 points), surpassing GPT-4o-mini. Code and data are available at https://github.com/pixas/MedSSS.
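The inference procedure described in the abstract is best-of-N selection: the policy model samples several candidate responses and the trained reward model picks the highest-scoring one. A minimal sketch, where `policy_fn` and `reward_fn` are hypothetical stand-ins for the two trained models:

```python
def best_of_n(question, policy_fn, reward_fn, n=4):
    """Sample n candidate reasoning chains from the policy model and
    return the one the reward model scores highest."""
    candidates = [policy_fn(question, seed=i) for i in range(n)]
    return max(candidates, key=reward_fn)

# Toy stand-ins (assumptions): candidates are (answer, score) pairs and
# the "reward model" simply reads off the score.
policy = lambda q, seed: (f"answer-{seed}", [0.2, 0.9, 0.4, 0.6][seed])
reward = lambda cand: cand[1]

best = best_of_n("Which option is supported by the findings?", policy, reward, n=4)
# best is the highest-reward candidate: ("answer-1", 0.9)
```

This is what lets a small policy model trade extra test-time compute for accuracy: the reported $\sim$13-point gain comes from adding exactly this reward-guided selection on top of the base policy.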
Problem

Research questions and friction points this paper is trying to address.

Medical Language Models
Privacy Protection
Complex Decision Making
Innovation

Methods, ideas, or system contributions that make the work stand out.

MedS$^3$
multi-step reasoning
privacy protection