Cross-modal ultra-scale learning with tri-modalities of renal biopsy images for glomerular multi-disease auxiliary diagnosis

📅 2025-12-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of feature fusion and limited classification performance arising from scale discrepancies among TEM (nanoscale) and OM/IM (micron-scale) multimodal renal biopsy images, the paper proposes a cross-modal ultra-scale learning framework, the first to enable end-to-end joint modeling of nanoscale and micron-scale tri-modal histopathological images. A sparse multi-instance learning module and a novel cross-modal scale-aware attention mechanism explicitly model scale heterogeneity and enhance pathological semantic interaction across modalities, complemented by a multi-loss weighted optimization strategy and a tri-modal feature alignment and fusion scheme. Evaluated on a custom-built dataset, the model achieves 95.37±2.41% accuracy, 99.05±0.53% AUC, and 95.32±2.41% F1-score, significantly outperforming state-of-the-art methods, and demonstrates robust generalizability to membranous nephropathy staging.
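The multi-loss weighted optimization mentioned above can be sketched as a weighted sum of per-modality classification losses plus a loss on the fused prediction. The weight values, function names, and toy numbers below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class (probs: softmax outputs)."""
    return -np.log(probs[label] + 1e-12)

def weighted_multi_loss(per_modality_probs, fused_probs, label, weights, fused_weight=1.0):
    """Combine per-modality losses (e.g. TEM/OM/IM branches) with the
    fused-branch loss; `weights` lets training emphasize the more
    informative modality (values here are hypothetical)."""
    modality_loss = sum(w * cross_entropy(p, label)
                        for w, p in zip(weights, per_modality_probs))
    return modality_loss + fused_weight * cross_entropy(fused_probs, label)

# Toy example: 3 modalities, 3 disease classes (IgAN / MN / LN), true class 1
probs = [np.array([0.20, 0.70, 0.10]),   # TEM branch
         np.array([0.30, 0.50, 0.20]),   # OM branch
         np.array([0.25, 0.60, 0.15])]   # IM branch
fused = np.array([0.10, 0.85, 0.05])
loss = weighted_multi_loss(probs, fused, label=1, weights=[0.4, 0.3, 0.3])
```

In practice such weights would be tuned or learned jointly with the network; the sketch only shows how per-modality importance enters the objective.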

📝 Abstract
Constructing a multi-modal automatic classification model based on three types of renal biopsy images can assist pathologists in glomerular multi-disease identification. However, the substantial scale difference between transmission electron microscopy (TEM) image features at the nanoscale and optical microscopy (OM) or immunofluorescence microscopy (IM) images at the micron scale poses a challenge for existing multi-modal and multi-scale models in achieving effective feature fusion and improving classification accuracy. To address this issue, we propose a cross-modal ultra-scale learning network (CMUS-Net) for the auxiliary diagnosis of multiple glomerular diseases. CMUS-Net leverages multi-level ultrastructural information to bridge the scale gap between nanometer and micrometer images. Specifically, we introduce a sparse multi-instance learning module to aggregate features from TEM images. Furthermore, we design a cross-modal scale attention module to facilitate feature interaction, enhancing pathological semantic information. Finally, multiple loss functions are combined, allowing the model to weigh the importance of different modalities and achieve precise classification of glomerular diseases. Our method follows the conventional process of renal biopsy pathology diagnosis and, for the first time, performs automatic classification of multiple glomerular diseases, including IgA nephropathy (IgAN), membranous nephropathy (MN), and lupus nephritis (LN), from images of three modalities at two scales. On an in-house dataset, CMUS-Net achieves an ACC of 95.37±2.41%, an AUC of 99.05±0.53%, and an F1-score of 95.32±2.41%. Extensive experiments demonstrate that CMUS-Net outperforms other well-known multi-modal and multi-scale methods and show its generalization capability in staging MN. Code is available at https://github.com/SMU-GL-Group/MultiModal_lkx/tree/main.
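The sparse multi-instance learning idea in the abstract, aggregating many TEM patch features into one bag-level descriptor with only a few patches receiving nonzero weight, can be sketched with a sparsemax-style attention pooling. The scoring vector and sparsemax choice are assumptions for illustration; the paper's actual module may differ:

```python
import numpy as np

def sparsemax(z):
    """Sparse alternative to softmax: Euclidean projection of the score
    vector onto the probability simplex, which zeroes low-scoring entries."""
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    ks = np.arange(1, len(z) + 1)
    k = ks[1 + ks * z_sorted > cumsum][-1]   # size of the support
    tau = (cumsum[k - 1] - 1.0) / k          # threshold
    return np.maximum(z - tau, 0.0)

def sparse_mil_pool(instance_feats, w):
    """Aggregate a bag of TEM patch features into one glomerulus-level
    descriptor; `w` stands in for a learned scoring vector."""
    scores = instance_feats @ w              # one relevance score per patch
    attn = sparsemax(scores)                 # sparse weights, sum to 1
    return attn @ instance_feats, attn       # weighted bag embedding

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))             # 8 TEM patches, 16-d features
w = rng.normal(size=16)
bag_embedding, attn = sparse_mil_pool(feats, w)
```

For example, `sparsemax([0.1, 1.1, 0.2])` yields `[0.0, 0.95, 0.05]`: the lowest-scoring instance is dropped entirely, unlike softmax, which would keep all weights positive.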
Problem

Research questions and friction points this paper is trying to address.

Bridging nanoscale-microscale image differences for renal disease diagnosis
Fusing tri-modal biopsy images to improve glomerular disease classification
Automating multi-disease identification from ultra-scale pathological images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-modal ultra-scale learning network bridges nanometer and micrometer image scales
Sparse multi-instance learning aggregates TEM image features effectively
Cross-modal scale attention module enhances pathological semantic interaction
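The cross-modal scale attention bullet above can be sketched as a cross-attention step in which nanoscale (TEM) tokens query micron-scale (OM/IM) tokens, so each TEM feature is refined by micron-scale context. The projection matrices, dimensions, and single-head form are illustrative stand-ins for learned weights, not the paper's exact module:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(q_feats, kv_feats, Wq, Wk, Wv):
    """Scaled dot-product cross-attention: queries come from one modality,
    keys/values from another, enabling semantic interaction across scales."""
    Q, K, V = q_feats @ Wq, kv_feats @ Wk, kv_feats @ Wv
    attn = softmax(Q @ K.T / np.sqrt(K.shape[-1]), axis=-1)
    return attn @ V

rng = np.random.default_rng(1)
d = 32
tem = rng.normal(size=(4, d))      # 4 nanoscale TEM tokens
om_im = rng.normal(size=(6, d))    # 6 micron-scale OM/IM tokens
Wq, Wk, Wv = (rng.normal(size=(d, d)) * d ** -0.5 for _ in range(3))
fused = cross_modal_attention(tem, om_im, Wq, Wk, Wv)
```

Running the direction the other way (OM/IM querying TEM) would give a symmetric interaction; a full module would typically apply both and combine the results.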
Kaixing Long
School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
Danyi Weng
School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Guangzhou, 510515, China
Yun Mi
School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Guangzhou, 510515, China
Zhentai Zhang
School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Guangzhou, 510515, China
Yanmeng Lu
Central Laboratory, Southern Medical University, Guangzhou 510515, China
Zhitao Zhou
Central Laboratory, Southern Medical University, Guangzhou 510515, China
Jian Geng
Department of Pathology, School of Basic Medical Sciences, Southern Medical University, Guangzhou, 510515, China; Guangzhou Huayin Medical Laboratory Center, Guangzhou, 510515, China
Liming Zhong
School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Guangzhou, 510515, China
Qianjin Feng
School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Guangzhou, 510515, China
Wei Yang
School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Guangzhou, 510515, China