Bridging Mechanistic Interpretability and Prompt Engineering with Gradient Ascent for Interpretable Persona Control

📅 2026-01-06
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of effectively regulating emergent personality behaviors—such as sycophancy and hallucination—in large language models while preserving interpretability. The authors propose two gradient-based prompt optimization algorithms, RESGA and SAEGA, which jointly optimize prompt alignment with predefined personality directions and linguistic fluency from random initialization. By integrating mechanistic interpretability with automated prompt engineering, the approach generates prompts that are not only highly effective but also explicitly linked to the model’s internal representations, thereby overcoming the limitations of black-box or manually crafted prompts. Experiments on Llama 3.1, Qwen 2.5, and Gemma 3 demonstrate significant efficacy, with sycophancy control success rates improving from 49.90% to 79.24%.

📝 Abstract
Controlling emergent behavioral personas (e.g., sycophancy, hallucination) in Large Language Models (LLMs) is critical for AI safety, yet remains a persistent challenge. Existing solutions face a dilemma: manual prompt engineering is intuitive but unscalable and imprecise, while automatic optimization methods are effective but operate as "black boxes" with no interpretable connection to model internals. We propose a novel framework that adapts gradient ascent to LLMs, enabling targeted prompt discovery. Specifically, we propose two methods, RESGA and SAEGA, both of which optimize randomly initialized prompts so that their representations align more closely with an identified persona direction. We introduce fluent gradient ascent to control the fluency of the discovered persona-steering prompts. We demonstrate RESGA and SAEGA's effectiveness across Llama 3.1, Qwen 2.5, and Gemma 3 for steering three different personas: sycophancy, hallucination, and myopic reward. Crucially, on sycophancy, our automatically discovered prompts achieve a significant improvement (from 49.90% to 79.24%). By grounding prompt discovery in mechanistically meaningful features, our method offers a new paradigm for controllable and interpretable behavior modification.
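The core idea the abstract describes, optimizing a randomly initialized prompt by gradient ascent so that its internal representation aligns with a persona direction, can be illustrated with a toy sketch. This is not the paper's RESGA/SAEGA implementation: the linear map `W` stands in for the model's representation function, the persona direction is random, and the gradient is computed by finite differences rather than backpropagation. A real method would also add a fluency term over the prompt tokens, omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 16

# Hypothetical persona direction in representation space (unit norm).
persona_dir = rng.normal(size=d_model)
persona_dir /= np.linalg.norm(persona_dir)

# Toy stand-in for the model's representation map: a fixed linear layer.
W = rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def objective(p):
    rep = W @ p  # "representation" of the prompt embedding p
    return cosine(rep, persona_dir)

# Gradient ascent from random initialization, via central finite differences.
p = rng.normal(size=d_model)
lr, eps = 0.5, 1e-5
for _ in range(200):
    grad = np.zeros_like(p)
    for i in range(d_model):
        e = np.zeros(d_model)
        e[i] = eps
        grad[i] = (objective(p + e) - objective(p - e)) / (2 * eps)
    p += lr * grad

final_alignment = objective(p)  # approaches 1.0 as the prompt aligns with the persona direction
```

The sketch climbs the cosine-similarity objective until the prompt's representation points along the persona direction; the paper's contribution is doing this in token-embedding space of a real LLM while keeping the resulting prompt fluent and interpretable.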
Problem

Research questions and friction points this paper is trying to address.

persona control
mechanistic interpretability
prompt engineering
large language models
AI safety
Innovation

Methods, ideas, or system contributions that make the work stand out.

gradient ascent
mechanistic interpretability
prompt engineering
persona control
fluent optimization
Harshvardhan Saini
Indian Institute of Technology
Yiming Tang
National University of Singapore
Dianbo Liu
Assistant Professor, National University of Singapore
Push the limits of human-machine learning in biomedical sciences