AI Summary
This work addresses the problem of counterfactual image generation methods producing edits that violate the intrinsic causal logic within images. To this end, we propose the first unified benchmark framework for systematically evaluating causal consistency and visual fidelity, without requiring ground-truth labels. Methodologically, we integrate structural causal models (SCMs) with hierarchical variational autoencoders (Hierarchical VAEs), establishing a multi-model, multi-dataset, and multi-causal-graph evaluation paradigm. We further introduce customized metrics, including causal consistency, to quantify alignment with the underlying causal mechanisms. Experimental results demonstrate that Hierarchical VAEs significantly outperform GAN- and flow-based baselines in both natural and medical imaging domains, highlighting their generalizability across modalities. The framework is released as an open-source, extensible Python benchmark package, enabling community-wide reproducibility, validation, and extension.
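To make the causal consistency idea concrete, here is a minimal sketch of how such a metric can be computed without ground-truth counterfactual images: after intervening on an attribute, an anti-causal predictor applied to the generated counterfactual should recover the intervened target value. All names (`causal_consistency`, `toy_predictor`, the `"thickness"` attribute) are hypothetical illustrations, not the benchmark package's actual API.

```python
# Hedged sketch of a causal-consistency style check (hypothetical names; the
# actual benchmark defines its own metrics and trained anti-causal predictors).
# Idea: a counterfactual is causally consistent w.r.t. an intervention if a
# predictor of the intervened attribute, applied to the counterfactual image,
# returns the target value of the intervention.

def causal_consistency(counterfactuals, targets, predictor):
    """Fraction of counterfactuals whose predicted attribute matches the
    intervened target value (higher is better)."""
    hits = sum(
        1 for img, tgt in zip(counterfactuals, targets)
        if predictor(img) == tgt
    )
    return hits / len(targets)

# Toy stand-in predictor: reads the attribute directly from a dict "image".
# In practice this would be a classifier/regressor trained on real data.
toy_predictor = lambda img: img["thickness"]

cfs = [{"thickness": 1}, {"thickness": 0}, {"thickness": 1}]
tgts = [1, 0, 0]  # intervention targets; the last counterfactual fails
score = causal_consistency(cfs, tgts, toy_predictor)  # 2 of 3 match
```

In the benchmark setting, the predictor is anti-causal (image to attribute), so it can score generated counterfactuals even though no ground-truth counterfactual image exists.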
Abstract
Generative AI has revolutionised visual content editing, empowering users to effortlessly modify images and videos. However, not all edits are equal. To perform realistic edits in domains such as natural images or medical imaging, modifications must respect the causal relationships inherent to the data generation process. Such image editing falls into the counterfactual image generation regime. Evaluating counterfactual image generation is substantially complex: not only does it lack observable ground truths, but it also requires adherence to causal constraints. Although several counterfactual image generation methods and evaluation metrics exist, a comprehensive comparison within a unified setting is lacking. We present a comparison framework to thoroughly benchmark counterfactual image generation methods. We integrate all models that have been used for the task at hand and expand them to novel datasets and causal graphs, demonstrating the superiority of Hierarchical VAEs across most datasets and metrics. Our framework is implemented in a user-friendly Python package that can be extended to incorporate additional SCMs, causal methods, generative models, and datasets for the community to build on. Code: https://github.com/gulnazaki/counterfactual-benchmark.
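SCM-based counterfactual generation, as evaluated here, follows the standard abduction-action-prediction recipe. The sketch below illustrates it on a toy one-equation SCM (a deliberately simplified stand-in, not one of the paper's VAE-, GAN-, or flow-based models): `x = 2*t + u`, where `t` is an image attribute (e.g. stroke thickness) and `u` is exogenous noise.

```python
# Hedged sketch of the abduction-action-prediction counterfactual recipe on a
# toy linear SCM: x = 2*t + u. The "2*t" mechanism and all names are assumed
# for illustration only; real models replace these steps with learned networks.

def abduct(x, t):
    """Step 1 (abduction): invert the mechanism to recover exogenous noise u
    from the observed outcome x and attribute t."""
    return x - 2 * t

def predict(u, t_new):
    """Step 3 (prediction): re-run the mechanism with the abducted noise u
    under the intervened attribute value t_new."""
    return 2 * t_new + u

x_obs, t_obs = 7.0, 2.0       # observed outcome and its attribute
u = abduct(x_obs, t_obs)      # abduction: u = 7 - 4 = 3
x_cf = predict(u, t_new=5.0)  # step 2 (action) is do(t = 5); x_cf = 10 + 3 = 13
```

Hierarchical VAEs play the role of the invertible mechanism for images: the encoder performs abduction of the (hierarchical) latent noise, and the decoder performs prediction under the intervened parent attributes.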