🤖 AI Summary
Existing deep learning robustness testing methods—such as data augmentation and adversarial example generation—struggle to simultaneously preserve image realism and semantic diversity. To address this, we propose the first LLM-driven multimodal counterfactual image generation framework: (1) a CLIP-based captioning model translates input images into textual descriptions; (2) a large language model generates semantically plausible and interpretable counterfactual descriptions; and (3) a ControlNet-conditioned diffusion model reconstructs high-fidelity images from these descriptions. Our approach unifies semantic editability with spatial consistency, enabling targeted vulnerability discovery for both classification and segmentation tasks. Evaluated on ImageNet-1K and SHIFT, generated images achieve high realism and diversity, with human evaluators reporting 92% consistency. Models fine-tuned on our augmented data demonstrate significantly improved robustness against semantic perturbations.
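The three-stage data flow above can be sketched with stubs. This is a minimal, hypothetical illustration of the pipeline's structure only: all names (`SceneDescription`, `caption_image`, `counterfactuals`, `render`) are placeholders, and the stubs stand in for the actual captioning, LLM, and diffusion models.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SceneDescription:
    subject: str      # task-critical element to preserve (e.g. the class label)
    weather: str      # modifiable aspect of the scene
    time_of_day: str  # modifiable aspect of the scene

def caption_image(image_id: str) -> SceneDescription:
    # Stage 1 (stub): a captioning model would produce this from pixels.
    return SceneDescription(subject="dog", weather="sunny", time_of_day="day")

def counterfactuals(desc: SceneDescription) -> list[SceneDescription]:
    # Stage 2 (stub): an LLM enumerates plausible single-attribute edits,
    # keeping the task-critical subject fixed.
    edits = [replace(desc, weather=w) for w in ("rainy", "foggy")]
    edits.append(replace(desc, time_of_day="night"))
    return edits

def render(desc: SceneDescription) -> str:
    # Stage 3 (stub): a ControlNet-conditioned diffusion model would
    # synthesise an image; here we emit the text prompt it would receive.
    return f"a {desc.subject}, {desc.weather}, {desc.time_of_day}"

original = caption_image("img_001")
prompts = [render(d) for d in counterfactuals(original)]
```

The key invariant the sketch encodes is that each counterfactual changes exactly one modifiable attribute while the subject (and, in the real system, the spatial layout via ControlNet conditioning) stays fixed.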
📝 Abstract
Ensuring the robustness of deep learning models requires comprehensive and diverse testing. Existing approaches, often based on simple data augmentation techniques or generative adversarial networks, are limited in their ability to produce realistic and varied test cases. To address these limitations, we present a novel framework for testing vision neural networks that leverages Large Language Models and control-conditioned Diffusion Models to generate synthetic, high-fidelity test cases. Our approach begins by translating images into detailed textual descriptions with a captioning model, allowing the language model to identify modifiable aspects of the image and generate counterfactual descriptions. These descriptions are then used to produce new test images through a text-to-image diffusion process that preserves spatial consistency and maintains the critical elements of the scene. We demonstrate the effectiveness of our method on two datasets: ImageNet-1K for image classification and SHIFT for semantic segmentation in autonomous driving. The results show that our approach generates meaningful test cases that reveal model weaknesses and, through targeted retraining, improve robustness. We validated the generated images with a human assessment on Mechanical Turk; participants confirmed, with high inter-rater agreement, that our approach produces valid and realistic images.