AI Summary
This work addresses two limitations: existing medical reasoning models are constrained by parametric knowledge and prone to forgetting and hallucination, while general-purpose DeepResearch models transfer poorly to clinical settings because they cannot effectively leverage retrieved evidence within clinical context. To bridge this gap, the authors propose a multi-hop retrieval-augmented question-answering framework tailored to the medical domain. Their approach introduces a difficulty-aware turn-penalty mechanism to optimize tool invocation and incorporates a step-constrained hypothesis-verification reasoning framework with a monitoring mechanism that prevents context contamination. Evaluated across seven medical benchmarks, the method improves the base model's performance by 9.79% on average, outperforming both larger-scale medical reasoning models and general DeepResearch systems, thereby effectively narrowing the divide between general research agents and specialized medical reasoning.
Abstract
Medical reasoning models remain constrained by parametric knowledge and are thus susceptible to forgetting and hallucination. DeepResearch (DR) models ground outputs in verifiable evidence from tools and perform strongly in general domains, but transferring them directly to the medical field yields relatively limited gains. We attribute this to two gaps: task characteristics and tool-use scaling. Medical questions require evidence interpretation in a knowledge-intensive clinical context; while general DR models can retrieve information, they often lack clinical-context reasoning and thus "find it but fail to use it," leaving performance limited by medical ability. Moreover, in medical scenarios, blindly scaling tool calls can inject noisy context, derailing sensitive medical reasoning and prompting repetitive evidence-seeking along incorrect paths. We therefore propose DeepMed. For data, we deploy a multi-hop medical-search QA synthesis method that supports applying the DR paradigm in medical contexts. For training, we introduce a difficulty-aware turn penalty to suppress excessive growth in tool calls. For inference, we introduce a monitor that helps validate hypotheses within a controlled number of steps and avoids context rot. Overall, on seven medical benchmarks, DeepMed improves its base model by 9.79% on average and outperforms larger medical reasoning and DR models.