🤖 AI Summary
This work proposes a lightweight, open-source, general-purpose biomedical vision-language model designed for efficient local deployment under strict patient-privacy and Protected Health Information (PHI) compliance requirements, addressing a key limitation of existing high-performance multimodal systems, which are either closed-source or computationally prohibitive. Built on a GPT-oss language backbone and a vision frontend, the model combines a three-stage domain-adaptive training strategy, high-quality data curation, and long-context multimodal alignment to achieve strong performance on consumer-grade GPUs. It outperforms larger open-source medical models on both out-of-distribution multimodal reasoning and complex text-only clinical tasks. The authors release the full training recipe, model weights, and evaluation toolkit to support reproducibility and community adoption.
📝 Abstract
Biomedical multimodal assistants have the potential to unify radiology, pathology, and clinical-text reasoning, yet a critical deployment gap remains: top-performing systems are either closed-source or computationally prohibitive, precluding the on-premises deployment required for patient privacy and PHI compliance. We introduce MEDGPT-OSS, an open-weight, 20B-parameter generalist vision-language model designed to facilitate open research in clinical AI. Rather than relying on architectural complexity, MEDGPT-OSS pairs the GPT-oss language backbone with a vision frontend via an optimized, three-stage training curriculum. By progressively domain-adapting these modules through rigorous data curation and long-context multimodal alignment, we demonstrate that a 20B model can bridge the capacity gap: it outperforms larger open medical models on out-of-distribution (OOD) multimodal reasoning and complex text-only clinical tasks. By unifying diverse modalities under a single instruction-following interface, MEDGPT-OSS maintains a parameter-efficient footprint fully compatible with commodity GPUs. We release the complete training recipe, open-weight checkpoints, and a rigorous evaluation harness to serve as a verifiable foundation for privacy-preserving, institution-specific clinical AI research.