🤖 AI Summary
Current pathology vision-language models (VLMs) face three key bottlenecks: (1) localized region modeling limits whole-slide image (WSI) understanding; (2) non-public training data hinders reproducibility; and (3) a lack of instruction pairs linking WSIs to fine-grained clinical reports impedes clinical alignment. To address these, we propose an open, WSI-centric vision-language modeling paradigm. First, we introduce Polysome, the first whole-slide-level instruction generation tool. Second, we release HISTAI-Instruct, the first large-scale, publicly available WSI instruction dataset, comprising 24,259 slides and over 1.1 million instruction-response pairs. Third, we develop ANTONI-α, a scalable VLM that integrates a whole-slide visual encoder with a multi-stage instruction-tuning mechanism and supports WSI-level visual question answering, including tissue identification, tumor detection, and differential diagnosis. ANTONI-α outperforms MedGemma across multiple benchmarks. All code, data, and models are fully open-sourced.
📝 Abstract
Vision-language models (VLMs) have the potential to become co-pilots for pathologists. However, most VLMs either focus on small regions of interest within whole-slide images (WSIs), provide only static slide-level outputs, or rely on data that is not publicly available, limiting reproducibility. Furthermore, training data pairing WSIs with detailed clinical reports is scarce, restricting progress toward transparent and generalisable VLMs. We address these limitations with three main contributions. First, we introduce Polysome, a standardised tool for synthetic instruction generation. Second, we apply Polysome to the public HISTAI dataset, generating HISTAI-Instruct, a large whole-slide instruction-tuning dataset spanning 24,259 slides and over 1.1 million instruction-response pairs. Finally, we use HISTAI-Instruct to train ANTONI-α, a VLM capable of visual question answering (VQA). We show that ANTONI-α outperforms MedGemma on WSI-level VQA tasks of tissue identification, neoplasm detection, and differential diagnosis. We also compare variants of ANTONI-α trained with different amounts of data. All methods, data, and code are publicly available.