Improving LLM Reasoning through Interpretable Role-Playing Steering

📅 2025-06-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing role-playing reasoning in large language models heavily relies on prompt engineering, suffering from poor stability and opaque, uninterpretable mechanisms. This paper proposes Sparse Autoencoder Role-Playing Steering (SRPS), the first method to employ sparse autoencoders for disentangling internal neural features specifically associated with role-playing behavior. SRPS combines activation-pattern-driven feature selection with residual stream injection to enable fine-grained, interpretable, and intensity-controllable intervention in reasoning processes. By replacing black-box, unreliable prompt-based steering, SRPS achieves significant performance gains: on Llama3.1-8B, zero-shot chain-of-thought accuracy on CSQA improves by 7.94 percentage points (from 31.86% to 39.80%); on Gemma2-9B, accuracy on SVAMP rises by 7.6 percentage points (from 37.50% to 45.10%). The approach offers both mechanistic transparency and robust controllability, advancing principled, architecture-aware intervention in LLM reasoning.

📝 Abstract
Role-playing has emerged as an effective technique for enhancing the reasoning capabilities of large language models (LLMs). However, existing methods primarily rely on prompt engineering, which often lacks stability and interpretability. In this paper, we introduce Sparse Autoencoder Role-Playing Steering (SRPS), a novel framework that identifies and manipulates internal model features associated with role-playing behavior. Our approach extracts latent representations from role-play prompts, selects the most relevant features based on activation patterns, and constructs a steering vector that can be injected into the model's residual stream with controllable intensity. Our method enables fine-grained control over role-specific behavior and offers insights into how role information influences internal model activations. Extensive experiments across various reasoning benchmarks and model sizes demonstrate consistent performance gains. Notably, in the zero-shot chain-of-thought (CoT) setting, the accuracy of Llama3.1-8B on CSQA improves from 31.86% to 39.80%, while Gemma2-9B on SVAMP increases from 37.50% to 45.10%. These results highlight the potential of SRPS to enhance reasoning ability in LLMs, providing better interpretability and stability compared to traditional prompt-based role-playing.
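The abstract describes a three-step pipeline: extract SAE latent activations from role-play prompts, select the features most associated with role-playing by their activation patterns, and sum their decoder directions into a steering vector injected into the residual stream at a controllable intensity. A minimal NumPy sketch of that pipeline is below; the dimensions, the activation-difference selection rule, and all variable names are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch of an SRPS-style pipeline. Shapes, the top-k
# selection rule, and the intensity value are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 64, 512  # hypothetical residual-stream / SAE widths

# Stand-in SAE decoder: each row maps one sparse feature back to the residual stream.
W_dec = rng.standard_normal((d_sae, d_model))

# Stand-in mean SAE feature activations over role-play vs. plain prompts.
acts_role = rng.random(d_sae)
acts_plain = rng.random(d_sae)

def build_steering_vector(acts_role, acts_plain, W_dec, k=16):
    """Pick the k features most activated under role-play prompts relative
    to plain prompts, and combine their decoder directions, weighted by the
    activation difference, into a unit-norm steering vector."""
    diff = acts_role - acts_plain
    top = np.argsort(diff)[-k:]                   # k most role-associated features
    v = (diff[top][:, None] * W_dec[top]).sum(axis=0)
    return v / np.linalg.norm(v)

def steer(residual, v, alpha=4.0):
    """Inject the steering vector into a residual-stream activation;
    alpha controls the intervention intensity."""
    return residual + alpha * v

v = build_steering_vector(acts_role, acts_plain, W_dec)
h = rng.standard_normal(d_model)                  # one residual-stream activation
h_steered = steer(h, v, alpha=4.0)
```

In practice the injection would be applied inside the forward pass (e.g. via a hook on a chosen layer's residual stream) rather than to a standalone vector, and sweeping `alpha` gives the intensity control the abstract refers to.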
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM reasoning via interpretable role-playing steering
Improving stability and interpretability in role-playing methods
Controlling role-specific behavior through internal feature manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse Autoencoder extracts role-play features
Steering vector controls behavior intensity
Improves reasoning with interpretable activations