🤖 AI Summary
This work addresses the poor interpretability, limited fairness, and weak robustness of computer vision models arising from spurious correlations and dataset biases. Rather than a single system, the thesis develops a family of counterfactual-reasoning frameworks spanning discriminative and generative models: CAVLI combines TCAV concept analysis with LIME attribution for fine-grained, concept-level explanation of classifiers; ASAC generates semantics-preserving adversarial counterfactuals over protected attributes and uses curriculum learning to fine-tune biased models; TIBET audits prompt-sensitive bias in text-to-image generation; BiasConnect models intersectional biases with causal graphs; and InterMit performs modular, training-free mitigation guided by causal sensitivity scores and user-defined fairness goals. The key contributions are: (i) fine-grained concept-level attribution; (ii) modular, training-free debiasing; and (iii) causal-graph-driven systematic auditing. Experiments show these methods substantially reduce bias in image classification and text-to-image generation while preserving predictive accuracy and semantic consistency, yielding a scalable, interpretable paradigm for evaluating and improving socially responsible AI systems.
📝 Abstract
Counterfactual reasoning -- the practice of asking "what if" by varying inputs and observing changes in model behavior -- has become central to interpretable and fair AI. This thesis develops frameworks that use counterfactuals to explain, audit, and mitigate bias in vision classifiers and generative models. By systematically altering semantically meaningful attributes while holding others fixed, these methods uncover spurious correlations, probe causal dependencies, and help build more robust systems.
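To make the "what if" loop concrete, here is a minimal sketch of a single-attribute intervention probe: change one semantic attribute, hold everything else fixed, and measure how far the prediction moves. The toy classifier, the `swap_background` intervention, and the function names are hypothetical illustrations of the general idea, not methods from the thesis.

```python
import numpy as np

def counterfactual_effect(predict, x, intervene, target_class):
    """predict: image -> class-probability vector;
    intervene: image -> counterfactual image with one attribute changed."""
    p_orig = predict(x)[target_class]
    p_cf = predict(intervene(x))[target_class]
    return p_cf - p_orig   # large |delta| => the model leans on that attribute

# Toy classifier that (spuriously) keys on the mean brightness of the
# left half of the image -- a stand-in for a background shortcut.
def predict(img):
    score = img[:, :16].mean()
    return np.array([1.0 - score, score])

def swap_background(img):
    out = img.copy()
    out[:, :16] = 1.0 - out[:, :16]   # intervene on the "background" region only
    return out

img = np.full((32, 32), 0.2)
print(f"delta = {counterfactual_effect(predict, img, swap_background, 1):+.2f}")
```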
The first part addresses vision classifiers. CAVLI integrates attribution (LIME) with concept-level analysis (TCAV) to quantify how strongly decisions rely on human-interpretable concepts. With localized heatmaps and a Concept Dependency Score, CAVLI shows when models depend on irrelevant cues like backgrounds. Extending this, ASAC introduces adversarial counterfactuals that perturb protected attributes while preserving semantics. Through curriculum learning, ASAC fine-tunes biased models for improved fairness and accuracy while avoiding stereotype-laden artifacts.
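As a sketch of how local LIME attributions and global TCAV scores might be combined into a single dependence measure, the following defines a hypothetical Concept Dependency Score. The formula here (the TCAV score weighted by the share of positive LIME attribution mass inside the concept region) is an assumption for exposition, not CAVLI's actual definition.

```python
import numpy as np

def concept_dependency_score(lime_weights, concept_mask, tcav_score):
    """Hypothetical Concept Dependency Score (CDS) sketch.

    lime_weights : (H, W) per-pixel LIME attribution for the predicted class.
    concept_mask : (H, W) boolean mask of pixels depicting the concept
                   (e.g. background segments).
    tcav_score   : scalar in [0, 1] from TCAV for this concept/class pair.
    """
    pos = np.clip(lime_weights, 0.0, None)        # keep only positive evidence
    total = pos.sum()
    if total == 0:
        return 0.0
    overlap = pos[concept_mask].sum() / total     # LIME mass landing on the concept
    return float(tcav_score * overlap)            # high CDS => decision leans on it

# Toy example: positive evidence concentrated on the "background" half of the
# image yields a high CDS for the background concept.
rng = np.random.default_rng(0)
weights = rng.random((64, 64)) * 0.1
mask = np.zeros((64, 64), dtype=bool)
mask[:, :32] = True                               # left half = "background"
weights[:, :32] += 0.5                            # evidence concentrated there
print(f"CDS(background) = {concept_dependency_score(weights, mask, tcav_score=0.9):.3f}")
```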
The second part targets generative Text-to-Image (TTI) models. TIBET provides a scalable pipeline for evaluating prompt-sensitive biases by varying identity-related terms, enabling causal auditing of how race, gender, and age affect image generation. To capture interactions among these dimensions, BiasConnect builds causal graphs that diagnose intersectional biases. Finally, InterMit offers a modular, training-free algorithm that mitigates intersectional bias via causal sensitivity scores and user-defined fairness goals.
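The sketch below illustrates a TIBET-style counterfactual prompt audit: identity terms in a prompt template are swapped axis by axis, and attribute statistics of the generated images are compared across variants. The axis values, helper names, and toy generator/detector are assumptions for illustration, not the published pipeline.

```python
import itertools

IDENTITY_AXES = {
    "gender": ["man", "woman"],
    "age": ["young", "elderly"],
}

def counterfactual_prompts(template):
    """Fill identity slots with every combination of counterfactual terms."""
    keys = list(IDENTITY_AXES)
    for combo in itertools.product(*(IDENTITY_AXES[k] for k in keys)):
        yield combo, template.format(**dict(zip(keys, combo)))

def audit(template, generate_images, detect_attribute, n=10):
    """Attribute rate per prompt variant; a large spread across variants
    means generation couples the attribute to identity terms."""
    rates = {}
    for combo, prompt in counterfactual_prompts(template):
        images = generate_images(prompt, n)                     # e.g. a diffusion model
        rates[combo] = sum(map(detect_attribute, images)) / n   # e.g. a VQA/CLIP probe
    return rates

# Toy stand-ins so the sketch runs end to end:
def fake_generate(prompt, n):
    return [prompt] * n          # pretend each "image" is its prompt

def fake_detect(img):
    return "woman" in img        # placeholder for a real attribute detector

template = "a photo of a {age} {gender} working as a doctor"
rates = audit(template, fake_generate, fake_detect)
print(rates)
print("bias spread =", max(rates.values()) - min(rates.values()))
```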
Together, these contributions establish counterfactual reasoning as a unifying lens for interpretability, fairness, and causality in both discriminative and generative models, providing principled, scalable methods for socially responsible bias evaluation and mitigation.