AI Summary
This work addresses a limitation of existing histopathological feature editing methods, which often neglect intrinsic dependencies among features, leading to generated samples that deviate from authentic biological distributions. To overcome this, the authors propose a Manifold-Aware Diffusion (MAD) framework that explicitly models the interdependencies of histopathological features during conditional editing, a first in the field. MAD integrates a disentangled variational autoencoder with a conditional diffusion model and regularizes feature trajectories in the latent space to ensure edits remain confined to the true data manifold. Experimental results demonstrate that MAD significantly enhances the biological plausibility, interpretability, and editing fidelity of synthesized images while preserving nuclear structural consistency.
Abstract
Pathomics is a recent approach that offers rich quantitative features beyond what black-box deep learning can provide, supporting more reproducible and explainable biomarkers in digital pathology. However, many derived features (e.g., "second-order moment") remain difficult to interpret, especially across different clinical contexts, which limits their practical adoption. Conditional diffusion models show promise for explainability through feature editing, but they typically assume feature independence, an assumption violated by intrinsically correlated pathomics features. Consequently, editing one feature while fixing others can push the model off the biological manifold and produce unrealistic artifacts. To address this, we propose a Manifold-Aware Diffusion (MAD) framework for controllable and biologically plausible cell nuclei editing. Unlike existing approaches, our method regularizes feature trajectories within a disentangled latent space learned by a variational auto-encoder (VAE). This ensures that manipulating a target feature automatically adjusts correlated attributes so that the edit remains within the learned distribution of real cells. These optimized features then guide a conditional diffusion model to synthesize high-fidelity images. Experiments demonstrate that our approach navigates the manifold of pathomics features during editing, and that it outperforms baseline methods in conditional feature editing while preserving structural coherence.
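The core idea of regularized latent-space editing can be illustrated with a minimal sketch. The paper's actual VAE, features, and loss are not specified here, so this toy uses a hypothetical linear "decoder" mapping a latent code to three correlated features and a standard-normal prior penalty: optimizing a single target feature under the prior term also moves correlated features, instead of freezing them and drifting off the manifold.

```python
import numpy as np

# Hypothetical linear "decoder" mapping a 2-D latent code to three
# correlated pathomics-style features (a stand-in for the VAE decoder;
# the real MAD model is nonlinear and learned from data).
W = np.array([[1.0, 0.5],
              [0.8, 0.2],
              [0.3, 1.0]])

def decode(z):
    return W @ z

def edit_latent(z0, target_idx, target_val, lam=0.1, lr=0.05, steps=500):
    """Edit one feature by gradient descent on the latent code.

    Loss = (f[target_idx] - target_val)^2 + lam * ||z||^2
    The L2 prior term (a proxy for the VAE's standard-normal prior)
    keeps the edited code near the learned manifold.
    """
    z = z0.copy()
    for _ in range(steps):
        err = decode(z)[target_idx] - target_val
        grad = 2.0 * err * W[target_idx] + 2.0 * lam * z
        z -= lr * grad
    return z

# Edit feature 0 toward 2.0 starting from the prior mean.
z_edit = edit_latent(np.zeros(2), target_idx=0, target_val=2.0)
f_edit = decode(z_edit)
# The non-target features shift too, consistent with their correlations,
# rather than being clamped to their original values.
```

In the full framework, the edited (manifold-consistent) feature vector would then condition the diffusion model that synthesizes the image; this sketch only covers the feature-trajectory regularization step.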