🤖 AI Summary
Hematoxylin and eosin (H&E) stained slides lack a cost-effective means of inferring spatial protein distribution, necessitating immunohistochemistry (IHC) staining. Method: This paper proposes a progressive generative network that decouples the modeling of tissue structure, color appearance, and cellular boundaries, overcoming fidelity bottlenecks in existing stain translation methods with respect to structural preservation and diaminobenzidine (DAB) chromogen accuracy. Built upon the ASP framework, it integrates PatchNCE contrastive learning and introduces two novel losses, a DAB concentration loss and an image gradient loss, enabling stage-wise optimization of multi-modal visual features. Results: Evaluated on HER2 and ER datasets, the method significantly improves the detail fidelity and photorealism of generated IHC-equivalent images. It achieves superior nuclear/membranous localization clarity, structural continuity, and staining specificity compared to state-of-the-art approaches, offering a cost-effective alternative for protein expression analysis in computational pathology.
📝 Abstract
Compared to hematoxylin-eosin (H&E) staining, immunohistochemistry (IHC) not only preserves the structural features of tissue samples but also provides high-resolution protein localization, which is essential for aiding pathology diagnosis. Despite its diagnostic value, IHC remains a costly and labor-intensive technique. Its limited scalability and constraints on multiplexing further hinder widespread adoption, especially in resource-limited settings. Consequently, researchers are increasingly exploring computational stain translation techniques to synthesize IHC-equivalent images from H&E-stained slides, aiming to extract protein-level information more efficiently and cost-effectively. However, most existing stain translation techniques rely on a linearly weighted summation of multiple loss terms within a single objective function, a strategy that often overlooks the interdependence among these components, resulting in suboptimal image quality and an inability to simultaneously preserve structural authenticity and color fidelity. To address this limitation, we propose a novel network architecture that follows a progressive structure, incorporating color and cell-border generation logic, which enables each visual aspect to be optimized in a stage-wise and decoupled manner. To validate the effectiveness of the proposed architecture, we build upon the Adaptive Supervised PatchNCE (ASP) framework as our baseline. We introduce additional loss functions based on 3,3'-diaminobenzidine (DAB) chromogen concentration and image gradient, enhancing color fidelity and cell-boundary clarity in the generated IHC images. By reconstructing the generation pipeline with our structure-color-cell-boundary progressive mechanism, experiments on the HER2 and ER datasets demonstrate that the model significantly improves visual quality and achieves finer structural detail.
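The two auxiliary losses described above can be sketched as follows. This is an illustrative NumPy implementation, not the paper's code: it assumes the DAB concentration is estimated via standard Ruifrok-Johnston optical-density stain deconvolution (the DAB stain vector below is the commonly cited one and is an assumption here), and that the gradient loss uses simple finite differences; all function names are hypothetical.

```python
import numpy as np

# Assumed unit DAB stain vector in optical-density (OD) space,
# per the commonly used Ruifrok-Johnston color deconvolution values.
DAB_OD_VECTOR = np.array([0.27, 0.57, 0.78])


def dab_concentration_loss(pred_rgb, target_rgb, eps=1e-6):
    """L1 distance between estimated per-pixel DAB concentrations.

    pred_rgb, target_rgb: float arrays in [0, 1], shape (H, W, 3).
    """
    # Beer-Lambert law: optical density = -log10(transmitted intensity).
    od_pred = -np.log10(np.clip(pred_rgb, eps, 1.0))
    od_true = -np.log10(np.clip(target_rgb, eps, 1.0))
    # Project OD onto the DAB stain vector to estimate chromogen concentration.
    c_pred = od_pred @ DAB_OD_VECTOR
    c_true = od_true @ DAB_OD_VECTOR
    return float(np.abs(c_pred - c_true).mean())


def image_gradient_loss(pred, target):
    """L1 distance between horizontal and vertical finite-difference
    gradients, penalizing blurred or misplaced cell boundaries."""
    gx = np.abs(np.diff(pred, axis=1) - np.diff(target, axis=1)).mean()
    gy = np.abs(np.diff(pred, axis=0) - np.diff(target, axis=0)).mean()
    return float(gx + gy)
```

In a training loop these terms would be weighted and applied at the color and cell-boundary stages respectively, rather than summed into a single monolithic objective, matching the stage-wise optimization the abstract describes.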