🤖 AI Summary
Placental whole-slide image (WSI) classification faces two key challenges: (1) inefficient patch selection strategies that struggle to balance diagnostic performance and computational cost; and (2) loss of global histopathological context due to patch-level modeling. To address these, we propose a two-stage efficient patch selection module coupled with an adaptive graph learning–driven hybrid multimodal fusion mechanism, jointly integrating visual features, learned graph-structured tissue relationships, and clinical text reports for end-to-end modeling of critical pathological semantics. Our method unifies parameter-free image compression, learnable patch filtering, graph neural networks, and vision-language modeling, optimized end-to-end under patient-level supervision. Evaluated on our proprietary dataset and two public placental WSI benchmarks, our approach achieves state-of-the-art classification accuracy while significantly reducing computational overhead, effectively mitigating both the loss of global context and the scalability bottlenecks of large-scale WSI analysis.
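The two-stage patch selection described above can be sketched as follows. This is an illustrative approximation, not the paper's exact method: the parameter-free stage is stood in for by variance-based tissue ranking, and the learnable stage by a linear scorer whose weights (`w`) would in practice be trained end-to-end under patient-level supervision.

```python
import numpy as np

def select_patches(patches, keep_frac=0.5, top_k=4, rng=None):
    """Two-stage patch selection sketch (hypothetical stand-in).

    Stage 1 (parameter-free): rank patches by pixel variance as a cheap
    tissue-content proxy and keep the top `keep_frac` fraction.
    Stage 2 (learnable): score surviving patches with a linear scorer and
    keep the top_k; here `w` is random, but would be learned in training.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(patches)
    feats = patches.reshape(n, -1)

    # Stage 1: parameter-free compression by variance ranking
    var = feats.var(axis=1)
    n_keep = max(top_k, int(n * keep_frac))
    stage1_idx = np.argsort(var)[::-1][:n_keep]

    # Stage 2: learnable filtering (random weights stand in for trained ones)
    w = rng.standard_normal(feats.shape[1])
    scores = feats[stage1_idx] @ w
    stage2_idx = stage1_idx[np.argsort(scores)[::-1][:top_k]]
    return np.sort(stage2_idx)
```

The point of the two stages is cost: the cheap parameter-free filter discards most background patches before any learnable computation runs, so the learned scorer only ever sees a small candidate set.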
📝 Abstract
Accurate prediction of placental diseases from whole slide images (WSIs) is critical for preventing severe maternal and fetal complications. However, WSI analysis presents significant computational challenges due to the massive data volume. Existing WSI classification methods face two critical limitations: (1) inadequate patch selection strategies that either compromise performance or fail to sufficiently reduce computational demands, and (2) loss of global histological context caused by patch-level processing. To address these challenges, we propose an Efficient multimodal framework for Patient-level placental disease Diagnosis, named EmmPD. Our approach introduces a two-stage patch selection module that combines parameter-free and learnable compression strategies, optimally balancing computational efficiency with the preservation of critical features. Additionally, we develop a hybrid multimodal fusion module that leverages adaptive graph learning to enhance pathological feature representation and incorporates textual medical reports to enrich global contextual understanding. Extensive experiments on both a self-constructed patient-level placental dataset and two public datasets demonstrate that our method achieves state-of-the-art diagnostic performance. The code is available at https://github.com/ECNU-MultiDimLab/EmmPD.
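The hybrid multimodal fusion module can be sketched at a high level. Everything here is an assumption for illustration: adaptive graph learning is approximated by a softmax-normalized similarity adjacency over patch features, followed by one message-passing step, slide-level pooling, and concatenation with a text-report embedding.

```python
import numpy as np

def fuse_graph_text(patch_feats, text_feat, tau=1.0):
    """Hybrid multimodal fusion sketch (illustrative, not the paper's exact design).

    patch_feats: (n_patches, d) visual features of selected patches.
    text_feat:   (d_text,) embedding of the clinical text report.
    """
    # Adaptive graph: soft adjacency from scaled pairwise similarities
    sim = patch_feats @ patch_feats.T / tau
    sim -= sim.max(axis=1, keepdims=True)        # numerical stability
    adj = np.exp(sim)
    adj /= adj.sum(axis=1, keepdims=True)        # row-normalize

    # One graph-propagation step, then slide-level mean pooling
    h = adj @ patch_feats
    slide = h.mean(axis=0)

    # Fuse visual and textual context into one patient-level vector
    return np.concatenate([slide, text_feat])
```

In a full model the adjacency and the downstream classifier would be learned jointly; the graph step injects inter-patch tissue relationships that plain patch-level pooling would lose, which is the global-context gap the abstract describes.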