🤖 AI Summary
This study investigates the root causes of undesirable attributes, such as toxicity, negative sentiment, and political bias, in generative AI outputs, focusing on the role of input prompts. To this end, it introduces the first counterfactual explanation framework tailored to non-deterministic generative models, producing prompt-counterfactual explanations (PCEs). By combining a counterfactual generation algorithm guided by downstream classifiers with targeted prompt perturbation strategies, the framework enables prompt-centric interpretability analysis and overcomes key limitations of traditional explainable AI (XAI) methods when applied to generative models. Empirical results across three tasks (political stance, toxicity, and sentiment) demonstrate that the framework generates meaningful PCEs, substantially improving prompt engineering efficiency and red-teaming capability, and thereby enabling precise identification and mitigation of harmful model outputs.
📝 Abstract
As generative AI systems become integrated into real-world applications, organizations increasingly need to understand and interpret their behavior. In particular, decision-makers need to understand what causes generative AI systems to exhibit specific output characteristics. Within this general topic, this paper examines a key question: what is it about the input -- the prompt -- that causes an LLM-based generative AI system to produce output that exhibits specific characteristics, such as toxicity, negative sentiment, or political bias? To examine this question, we adapt a common technique from the Explainable AI literature: counterfactual explanations. We explain why traditional counterfactual explanations cannot be applied directly to generative AI systems, owing to several differences in how such systems function. We then propose a flexible framework that adapts counterfactual explanations to non-deterministic, generative AI systems in scenarios where downstream classifiers can reveal key characteristics of their outputs. Based on this framework, we introduce an algorithm for generating prompt-counterfactual explanations (PCEs). Finally, we demonstrate the production of counterfactual explanations for generative AI systems with three case studies, examining different output characteristics (viz., political leaning, toxicity, and sentiment). The case studies further show that PCEs can streamline prompt engineering to suppress undesirable output characteristics and can enhance red-teaming efforts to uncover additional prompts that elicit undesirable outputs. Ultimately, this work lays a foundation for prompt-focused interpretability in generative AI: a capability that will become indispensable as these models are entrusted with higher-stakes tasks and subject to emerging regulatory requirements for transparency and accountability.
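The pipeline the abstract describes (perturb the prompt, regenerate outputs several times to account for non-determinism, and check a downstream classifier) can be illustrated with a toy sketch. Everything below is a stand-in stub for illustration only, not the paper's actual algorithm: the generator, classifier, word lists, and substitution table are all hypothetical.

```python
import random

# Illustrative stubs: a keyword-based "classifier" and a seeded "generator"
# stand in for a real toxicity/sentiment classifier and a real LLM.
NEGATIVE_WORDS = {"terrible", "awful", "horrible"}
SUBSTITUTES = {"terrible": "surprising", "awful": "unusual", "horrible": "notable"}

def generate(prompt, seed):
    """Stub generative model: echoes the prompt with mild, seeded variation."""
    rng = random.Random(seed)
    filler = rng.choice(["Indeed,", "Clearly,", "Overall,"])
    return f"{filler} {prompt}"

def classifier_score(text):
    """Stub downstream classifier: fraction of words flagged as negative."""
    words = text.lower().split()
    return sum(w.strip(",.") in NEGATIVE_WORDS for w in words) / max(len(words), 1)

def expected_score(prompt, n_samples=5):
    """Average the classifier score over repeated generations, since a
    non-deterministic generator yields different outputs per sample."""
    return sum(classifier_score(generate(prompt, s)) for s in range(n_samples)) / n_samples

def prompt_counterfactual(prompt, threshold=0.05):
    """Greedy word-level perturbation: substitute words until the expected
    classifier score drops below the threshold, yielding a PCE-style prompt."""
    words = prompt.split()
    for i, w in enumerate(words):
        if expected_score(" ".join(words)) < threshold:
            break
        if w.lower() in SUBSTITUTES:
            words[i] = SUBSTITUTES[w.lower()]
    return " ".join(words)

original = "The debate was terrible and the moderator was awful"
counterfactual = prompt_counterfactual(original)
```

The contrast between `original` and `counterfactual` plays the explanatory role: the minimal prompt change that suppresses the flagged output characteristic points to which parts of the prompt were responsible for it.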