🤖 AI Summary
Multimodal analysis of tumor pathology faces challenges due to high heterogeneity between histopathological images and transcriptomic data, making it difficult to simultaneously achieve cross-modal alignment and modality-specific feature preservation. To address this, we propose an “Alignment–Preservation” co-optimization framework. First, we introduce a differentiable style clustering module to discover disease-informative, cross-modally consistent pathological signatures. Second, we design a dual-encoder architecture integrating a contrastive learning–driven modality alignment module and a modality preservation module enforced by orthogonality constraints—thereby jointly optimizing inter-modal correlation and intra-modal specificity. Evaluated on the TCGA pan-cancer cohort, our method achieves significant improvements: +4.2% in molecular subtype classification accuracy and +0.07 in survival risk stratification AUC. The resulting multimodal pathological representations are highly discriminative and interpretable.
📝 Abstract
Histopathology and transcriptomics are fundamental modalities in oncology, encapsulating the morphological and molecular aspects of disease. Multi-modal self-supervised learning has demonstrated remarkable potential for learning pathological representations by integrating diverse data sources. Conventional multi-modal integration methods primarily emphasize modality alignment, while paying insufficient attention to retaining modality-specific structures. However, unlike conventional scenarios where multi-modal inputs share highly overlapping features, histopathology and transcriptomics exhibit pronounced heterogeneity, offering orthogonal yet complementary insights. Histopathology provides morphological and spatial context, elucidating tissue architecture and cellular topology, whereas transcriptomics delineates molecular signatures through gene expression patterns. This inherent disparity makes it challenging to align the two modalities while maintaining modality-specific fidelity. To address these challenges, we present MIRROR, a novel multi-modal representation learning method designed to foster both modality alignment and retention. MIRROR employs dedicated encoders to extract comprehensive features from each modality, complemented by a modality alignment module that achieves seamless integration between phenotype patterns and molecular profiles. Furthermore, a modality retention module safeguards unique attributes of each modality, while a style clustering module mitigates redundancy and enhances disease-relevant information by modeling and aligning consistent pathological signatures within a clustering space. Extensive evaluations on TCGA cohorts for cancer subtyping and survival analysis highlight MIRROR's superior performance, demonstrating its effectiveness in constructing comprehensive oncological feature representations and benefiting cancer diagnosis.
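The alignment-plus-retention objective described above can be sketched numerically. The following is an illustrative NumPy sketch under common assumptions (a symmetric InfoNCE loss for contrastive modality alignment, and a Frobenius-norm orthogonality penalty keeping modality-specific features decorrelated from shared ones); the function names and exact loss forms are hypothetical and not taken from the paper's implementation:

```python
import numpy as np

def alignment_loss(h, t, tau=0.1):
    """Symmetric InfoNCE over paired histology (h) and transcriptomics (t)
    embeddings: row i of h and row i of t are positives, all other pairs
    are negatives. Shapes: (batch, dim)."""
    h = h / np.linalg.norm(h, axis=1, keepdims=True)
    t = t / np.linalg.norm(t, axis=1, keepdims=True)
    logits = h @ t.T / tau                      # pairwise cosine similarities
    idx = np.arange(len(h))

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()            # diagonal = matched pairs

    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

def retention_penalty(shared, specific):
    """Orthogonality constraint: penalize correlation between the aligned
    (shared) subspace and the modality-specific subspace."""
    return np.linalg.norm(shared.T @ specific, "fro") ** 2 / shared.shape[0]
```

With matched pairs on the diagonal, the alignment loss drops as paired embeddings become more similar than mismatched ones, while the retention penalty vanishes exactly when shared and specific feature columns are orthogonal, so minimizing their sum optimizes inter-modal correlation and intra-modal specificity jointly.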