🤖 AI Summary
To address the poor interpretability of prompt sensitivity, the difficulty of bias control, and the vulnerability to jailbreaking in large language models (LLMs), this paper proposes ConceptX, a concept-level interpretability framework. Methodologically, ConceptX introduces a semantic concept-driven attribution paradigm that departs from conventional token-level explanations, and it enables both response auditing and multi-objective controllable steering (e.g., gender bias mitigation, jailbreak defense) via fine-tuning-free concept masking and rewriting. It further incorporates semantic similarity-weighted attribution and an evaluation framework that combines faithfulness with human alignment. Experiments across three mainstream LLMs show that ConceptX significantly outperforms baselines such as TokenSHAP: sentiment steering raises the sentiment shift to 0.252 (vs. 0.131 for random editing), the jailbreak attack success rate drops from 0.463 to 0.242, and both attribution faithfulness and human interpretability improve substantially.
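The attribution loop behind this can be pictured as follows. This is a minimal, hypothetical Python sketch, not the paper's implementation: the `generate` callable stands in for any LLM API, and the `all-MiniLM-L6-v2` embedder, the stopword-based concept filter, and the neutral placeholder used for in-place replacement are all illustrative assumptions.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

# Crude stand-in for concept detection: skip function words, keep
# semantically rich tokens. The paper's concept extraction is richer.
STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "or",
             "in", "for", "on", "with", "that", "this"}

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def concept_attribution(prompt: str, generate, neutral: str = "thing") -> dict:
    """Score each concept by how much replacing it in place shifts the
    semantics of the whole LLM response (larger shift = more important)."""
    base_emb = embedder.encode(generate(prompt))
    words = prompt.split()
    scores = {}
    for i, word in enumerate(words):
        if word.lower().strip(".,!?") in STOPWORDS:
            continue  # not a concept under this crude filter
        # In-place replacement keeps the surrounding context grammatical,
        # unlike deletion-based token attribution.
        perturbed = " ".join(words[:i] + [neutral] + words[i + 1:])
        perturbed_emb = embedder.encode(generate(perturbed))
        scores[word] = 1.0 - cosine(base_emb, perturbed_emb)
    return scores
```

The design choice the sketch mirrors is scoring by the semantic shift of the entire response rather than by per-token output probabilities, and replacing concepts in place rather than deleting them, so each perturbed prompt remains a coherent sentence.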
📝 Abstract
As large language models (LLMs) become widely deployed, concerns about their safety and alignment grow. One approach to steering LLM behavior, such as mitigating biases or defending against jailbreaks, is to identify which parts of a prompt influence specific aspects of the model's output. Token-level attribution methods offer a promising solution, but they still struggle in text generation: they explain the presence of each output token separately, rather than the underlying semantics of the entire LLM response. We introduce ConceptX, a model-agnostic, concept-level explainability method that identifies the concepts, i.e., semantically rich tokens in the prompt, and assigns them importance based on the semantic similarity of the outputs. Unlike current token-level methods, ConceptX also preserves context integrity through in-place token replacements and supports flexible explanation goals, e.g., gender bias. ConceptX enables both auditing, by uncovering sources of bias, and steering, by modifying prompts to shift the sentiment or reduce the harmfulness of LLM responses, without requiring retraining. Across three LLMs, ConceptX outperforms token-level methods like TokenSHAP in both faithfulness and human alignment. In steering tasks, it boosts sentiment shift by 0.252 versus 0.131 for random edits and lowers attack success rates from 0.463 to 0.242, outperforming attribution and paraphrasing baselines. While prompt engineering and self-explaining methods sometimes yield safer responses, ConceptX offers a transparent and faithful alternative for improving LLM safety and alignment, demonstrating the practical value of attribution-based explainability in guiding LLM behavior.
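Building on the hypothetical `concept_attribution` sketch above, the steering described in the abstract then reduces to rewriting the highest-scoring concept and regenerating, with no fine-tuning. The `replacement` argument below is an illustrative stand-in for the paper's rewriting step:

```python
def steer(prompt: str, generate, replacement: str) -> str:
    """Rewrite the most influential concept in place and regenerate,
    e.g., to shift sentiment or defuse a jailbreak, without retraining."""
    scores = concept_attribution(prompt, generate)
    top_concept = max(scores, key=scores.get)  # most influential concept
    steered_prompt = prompt.replace(top_concept, replacement, 1)
    return generate(steered_prompt)

# Hypothetical usage with any LLM wrapper exposing generate(prompt) -> str:
# steer("Write a furious review of this phone", llm.generate, "balanced")
```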