SCE-LITE-HQ: Smooth visual counterfactual explanations with generative foundation models

📅 2026-03-17
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work proposes a novel approach to visual counterfactual explanations by leveraging general-purpose pretrained generative foundation models, circumventing the need for task-specific generative architectures that are computationally expensive and difficult to scale to high-resolution data. By operating in the latent space of these foundation models, the method integrates smooth gradient-based optimization with a mask-guided diversity strategy to efficiently produce high-fidelity, structurally diverse counterfactual samples without any additional training. Evaluated on both natural images and medical imaging datasets, the approach achieves explanation quality comparable to or better than existing methods while substantially reducing computational and training overhead, thereby enhancing scalability and practical applicability.

๐Ÿ“ Abstract
Modern neural networks achieve strong performance but remain difficult to interpret in high-dimensional visual domains. Counterfactual explanations (CFEs) provide a principled approach to interpreting black-box predictions by identifying minimal input changes that alter model outputs. However, existing CFE methods often rely on dataset-specific generative models and incur substantial computational cost, limiting their scalability to high-resolution data. We propose SCE-LITE-HQ, a scalable framework for counterfactual generation that leverages pretrained generative foundation models without task-specific retraining. The method operates in the latent space of the generator, incorporates smoothed gradients to improve optimization stability, and applies mask-based diversification to promote realistic and structurally diverse counterfactuals. We evaluate SCE-LITE-HQ on natural and medical datasets using a desiderata-driven evaluation protocol. Results show that SCE-LITE-HQ produces valid, realistic, and diverse counterfactuals competitive with or outperforming existing baselines, while avoiding the overhead of training dedicated generative models.
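The core optimization loop the abstract describes (searching the generator's latent space with smoothed gradients plus a proximity constraint) can be sketched on a toy model. Everything below is an illustrative assumption, not the paper's actual architecture: a linear "decoder" and a logistic "classifier" stand in for the frozen foundation model and the black-box network, and the hyperparameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions): a frozen linear "decoder" g(z) = W @ z
# and a logistic "classifier" f(x) = sigmoid(v @ x).
W = rng.normal(size=(8, 4))   # decoder: 4-dim latent -> 8-dim "image"
v = rng.normal(size=8)        # classifier weights


def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))


def classifier_prob(z):
    """Probability of the target class for the decoded latent z."""
    return sigmoid(v @ (W @ z))


def grad_loss(z, z0, target, lam):
    # Loss: cross-entropy toward the target class + proximity to the
    # factual latent z0 (keeps the counterfactual a minimal change).
    p = classifier_prob(z)
    # For a logistic output, d(BCE)/dz = (p - target) * d(logit)/dz.
    return (p - target) * (W.T @ v) + 2.0 * lam * (z - z0)


def smoothed_grad(z, z0, target, lam, sigma=0.05, n=16):
    # SmoothGrad-style stabilization: average gradients taken at
    # Gaussian-perturbed copies of the latent.
    g = np.zeros_like(z)
    for _ in range(n):
        g += grad_loss(z + sigma * rng.normal(size=z.shape), z0, target, lam)
    return g / n


def counterfactual(z0, target=1.0, lam=0.1, lr=0.5, steps=200):
    """Gradient descent in latent space; no generator retraining."""
    z = z0.copy()
    for _ in range(steps):
        z -= lr * smoothed_grad(z, z0, target, lam)
    return z


z0 = rng.normal(size=4)           # factual latent
z_cf = counterfactual(z0)         # counterfactual latent flipping the class
```

The mask-based diversification step is omitted here; in spirit it would restrict updates to different latent (or spatial) subsets per sample so that repeated runs yield structurally distinct counterfactuals.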
Problem

Research questions and friction points this paper is trying to address.

counterfactual explanations
high-dimensional visual domains
generative models
computational cost
scalability
Innovation

Methods, ideas, or system contributions that make the work stand out.

counterfactual explanations
generative foundation models
smoothed gradients
latent space optimization
mask-based diversification
Ahmed Zeid
Machine Learning Group, Technische Universität Berlin, Berlin, Germany
Sidney Bender
Technical University of Berlin
Deep Learning
Explainable AI
Trustworthy ML
Generative Modelling