🤖 AI Summary
To address the challenges of feature fusion and limited classification performance arising from scale discrepancies among transmission electron microscopy (TEM, nanoscale), optical microscopy (OM), and immunofluorescence microscopy (IM) (both micron-scale) multimodal renal biopsy images, this paper proposes a cross-modal ultra-scale learning framework (CMUS-Net). It is the first to enable end-to-end joint modeling of nanoscale-to-micron-scale tri-modal histopathological images. The authors introduce a sparse multi-instance learning module and a novel cross-modal scale-aware attention mechanism to explicitly model scale heterogeneity and enhance pathological semantic interaction across modalities, and further incorporate a multi-loss weighted optimization strategy and a tri-modal feature alignment and fusion scheme. Evaluated on a custom-built dataset, the model achieves 95.37±2.41% accuracy, 99.05±0.53% AUC, and 95.32±2.41% F1-score, significantly outperforming state-of-the-art methods, and demonstrates robust generalizability to membranous nephropathy staging.
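The cross-modal scale-aware attention described above could be sketched roughly as standard scaled dot-product cross-attention, where micron-scale (OM/IM) features query nanoscale TEM features. The function name, dimensions, and random projection matrices below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_modal_attention(q_feats, kv_feats, d_k=64):
    """Cross-attention sketch: micron-scale tokens (queries) attend to
    nanoscale TEM tokens (keys/values). Projection matrices are random
    placeholders; a real model would learn them."""
    d_q = q_feats.shape[-1]
    d_kv = kv_feats.shape[-1]
    W_q = rng.standard_normal((d_q, d_k)) / np.sqrt(d_q)
    W_k = rng.standard_normal((d_kv, d_k)) / np.sqrt(d_kv)
    W_v = rng.standard_normal((d_kv, d_k)) / np.sqrt(d_kv)
    Q, K, V = q_feats @ W_q, kv_feats @ W_k, kv_feats @ W_v
    scores = Q @ K.T / np.sqrt(d_k)                    # (n_q, n_kv)
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)            # row-wise softmax
    return attn @ V, attn                              # fused features, weights

om_tokens = rng.standard_normal((4, 128))    # 4 micron-scale patch features
tem_tokens = rng.standard_normal((16, 256))  # 16 nanoscale TEM instance features
fused, attn = cross_modal_attention(om_tokens, tem_tokens)
```

Each micron-scale token thus receives a TEM-informed summary, which is one plausible way to let the two scales exchange pathological semantic information before fusion.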
📝 Abstract
Constructing a multi-modal automatic classification model based on three types of renal biopsy images can assist pathologists in glomerular multi-disease identification. However, the substantial scale difference between transmission electron microscopy (TEM) image features at the nanoscale and optical microscopy (OM) or immunofluorescence microscopy (IM) images at the microscale poses a challenge for existing multi-modal and multi-scale models in achieving effective feature fusion and improving classification accuracy. To address this issue, we propose a cross-modal ultra-scale learning network (CMUS-Net) for the auxiliary diagnosis of multiple glomerular diseases. CMUS-Net leverages multiple types of ultrastructural information to bridge the scale gap between nanometer- and micrometer-scale images. Specifically, we introduce a sparse multi-instance learning module to aggregate features from TEM images. Furthermore, we design a cross-modal scale attention module to facilitate feature interaction and enhance pathological semantic information. Finally, multiple loss functions are combined, allowing the model to weigh the importance of the different modalities and achieve precise classification of glomerular diseases. Our method follows the conventional workflow of renal biopsy pathology diagnosis and, for the first time, performs automatic classification of multiple glomerular diseases, including IgA nephropathy (IgAN), membranous nephropathy (MN), and lupus nephritis (LN), from images of three modalities at two scales. On an in-house dataset, CMUS-Net achieves an ACC of 95.37±2.41%, an AUC of 99.05±0.53%, and an F1-score of 95.32±2.41%. Extensive experiments demonstrate that CMUS-Net outperforms other well-known multi-modal and multi-scale methods and confirm its generalization capability in staging MN. Code is available at https://github.com/SMU-GL-Group/MultiModal_lkx/tree/main.
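The sparse multi-instance aggregation of TEM patches and the weighted combination of per-modality losses could be sketched as follows. The top-k sparsification scheme, fixed loss weights, and all names and dimensions are illustrative assumptions, not the released implementation:

```python
import numpy as np

def sparse_mil_pool(instances, w, k=3):
    """Attention-based MIL pooling with top-k sparsification:
    score each TEM instance embedding, keep only the k highest-scoring
    instances, renormalize with a softmax, and return the weighted
    bag-level embedding."""
    scores = instances @ w                     # one score per instance
    top = np.argsort(scores)[-k:]              # indices of k largest scores
    weights = np.zeros_like(scores)
    e = np.exp(scores[top] - scores[top].max())
    weights[top] = e / e.sum()                 # sparse softmax over top-k
    return weights @ instances, weights

def weighted_multi_loss(losses, alphas):
    """Fixed-weight combination of per-modality losses (TEM, OM, IM)."""
    return sum(a * l for a, l in zip(alphas, losses))

rng = np.random.default_rng(1)
tem_instances = rng.standard_normal((20, 64))  # 20 ultrastructure patch embeddings
w = rng.standard_normal(64)                    # attention scoring vector
bag, weights = sparse_mil_pool(tem_instances, w, k=3)

# e.g. per-modality losses 0.9, 1.2, 0.7 with weights 0.5, 0.3, 0.2
total = weighted_multi_loss([0.9, 1.2, 0.7], [0.5, 0.3, 0.2])
```

Zeroing all but the top-k attention weights is one way to make the aggregation sparse, so the bag embedding is driven by the few most diagnostically relevant ultrastructural patches; learnable rather than fixed loss weights would be an equally plausible reading of the weighting strategy.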