🤖 AI Summary
Current foundation models struggle to encode gigapixel whole-slide images (WSIs) end to end and rarely incorporate complementary molecular data, limiting computational pathology's performance in diagnosis and prognostic prediction. To address this, we introduce Threads, a molecular-driven, slide-level foundation model for pathology that jointly models H&E-stained WSIs with paired genomic and transcriptomic profiles. Threads was pre-trained on 47,171 paired samples, the largest such multimodal dataset used for foundation model development to date, enabling it to capture the tissue's underlying molecular composition in its slide representations. Evaluated across 54 oncology tasks, Threads consistently outperforms state-of-the-art baselines with strong label efficiency and generalization. It performs particularly well on clinically important tasks, including mutation prediction, immunohistochemistry status determination, treatment response prediction, and survival analysis, demonstrating the value of integrating histomorphological and molecular information for precision pathology.
📝 Abstract
Foundation models are reshaping computational pathology by enabling transfer learning, where models pre-trained on vast datasets can be adapted for downstream diagnostic, prognostic, and therapeutic response tasks. Despite these advances, foundation models remain limited in their ability to encode entire gigapixel whole-slide images without additional training, and they often lack complementary multimodal data. Here, we introduce Threads, a slide-level foundation model capable of generating universal representations of whole-slide images of any size. Threads was pre-trained using a multimodal learning approach on a diverse cohort of 47,171 hematoxylin and eosin (H&E)-stained tissue sections, paired with corresponding genomic and transcriptomic profiles, the largest such paired dataset used for foundation model development to date. This training paradigm enables Threads to capture the tissue's underlying molecular composition, yielding powerful representations applicable to a wide array of downstream tasks. In extensive benchmarking across 54 oncology tasks, including clinical subtyping, grading, mutation prediction, immunohistochemistry status determination, treatment response prediction, and survival prediction, Threads outperformed all baselines while demonstrating remarkable generalizability and label efficiency. It is particularly well suited for predicting rare events, further emphasizing its clinical utility. We intend to make the model publicly available to the broader community.
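The abstract describes pretraining that pairs each slide with its genomic and transcriptomic profile but does not spell out the objective. A common way to implement such image–molecular pairing is a symmetric contrastive (InfoNCE-style) loss that pulls matched slide and molecular embeddings together and pushes mismatched pairs apart. The sketch below is illustrative only and is not taken from the paper; the function name, embedding dimensions, and temperature are assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Scale each row to unit length so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def symmetric_infonce(slide_emb, mol_emb, temperature=0.07):
    """Illustrative symmetric InfoNCE loss for paired embeddings.

    Row i of `slide_emb` (slide representation) and row i of `mol_emb`
    (molecular-profile representation) form one positive pair; all other
    rows in the batch act as negatives. Hypothetical sketch, not the
    paper's actual objective.
    """
    z_s = l2_normalize(np.asarray(slide_emb, dtype=float))
    z_m = l2_normalize(np.asarray(mol_emb, dtype=float))
    logits = z_s @ z_m.T / temperature       # pairwise similarity matrix
    idx = np.arange(logits.shape[0])         # matched pairs lie on the diagonal

    def xent(lg):
        # Numerically stable cross-entropy with the diagonal as the target class.
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # Average the slide->molecular and molecular->slide directions.
    return 0.5 * (xent(logits) + xent(logits.T))
```

With perfectly aligned pairs (identical embeddings) the loss is near zero, while unrelated random pairs yield a loss close to log of the batch size, which is the intuition behind using such an objective to tie slide representations to molecular composition.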