DRESSing Up LLM: Efficient Stylized Question-Answering via Style Subspace Editing

📅 2025-01-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of generating stylistically controlled text with large language models (LLMs) efficiently and with a light footprint in applications such as role-playing, this paper proposes a training-free latent-space representation editing method. The core method explicitly models a disentangled stylistic subspace within the intermediate-layer hidden representations of LLMs—identified via singular value decomposition and directional projection—and introduces an adaptive strength-weighting mechanism that dynamically controls the degree of style editing during zero-shot inference, balancing style fidelity and semantic consistency. Evaluated on two newly constructed stylized question-answering benchmarks, the approach achieves a 23.6% improvement in style accuracy over prompt engineering and ITI baselines, while maintaining 98.1% semantic consistency—all without fine-tuning or additional parameters.
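The subspace-identification step described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it assumes paired intermediate-layer hidden states for stylized and plain answers to the same questions, and takes the top right-singular vectors of their differences as the style basis; the function and variable names are hypothetical.

```python
import numpy as np

def style_subspace(styled_h, plain_h, k=4):
    """Estimate a k-dimensional style subspace from paired hidden states.

    styled_h, plain_h: (n_pairs, d) arrays of intermediate-layer hidden
    states for stylized and plain answers to the same questions.
    Returns a (k, d) orthonormal basis of dominant style directions.
    """
    diffs = styled_h - plain_h  # style-carrying difference vectors
    # SVD of the difference matrix; the top right-singular vectors
    # span the directions along which style varies most.
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[:k]
```

A steering vector for editing can then be taken inside this subspace, e.g. the mean difference vector projected onto the basis.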

📝 Abstract
We introduce DRESS, a novel approach for generating stylized large language model (LLM) responses through representation editing. Existing methods like prompting and fine-tuning are either insufficient for complex style adaptation or computationally expensive, particularly in tasks like NPC creation or character role-playing. Our approach leverages the over-parameterized nature of LLMs to disentangle a style-relevant subspace within the model's representation space to conduct representation editing, ensuring a minimal impact on the original semantics. By applying adaptive editing strengths, we dynamically adjust the steering vectors in the style subspace to maintain both stylistic fidelity and semantic integrity. We develop two stylized QA benchmark datasets to validate the effectiveness of DRESS, and the results demonstrate significant improvements compared to baseline methods such as prompting and ITI. In short, DRESS is a lightweight, training-free solution for enhancing LLMs with flexible and effective style control, making it particularly useful for developing stylized conversational agents. Codes and benchmark datasets are available at https://github.com/ArthurLeoM/DRESS-LLM.
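The adaptive editing strength mentioned in the abstract can be illustrated with a small sketch. This is one plausible reading, not the paper's exact rule: the edit is scaled down when the hidden state already projects strongly onto the style subspace, so already-stylized content is perturbed less; all names and the weighting formula are assumptions for illustration.

```python
import numpy as np

def edit_hidden_state(h, basis, steer, alpha=1.0):
    """Steer hidden state h toward a style subspace with adaptive strength.

    h: (d,) hidden state; basis: (k, d) orthonormal style basis;
    steer: (d,) steering vector lying in that subspace;
    alpha: global strength knob.
    """
    # Component of h already inside the style subspace.
    proj = basis.T @ (basis @ h)
    # Shrink the edit when h already carries the style
    # (large projection relative to the full state norm).
    weight = alpha * (1.0 - np.linalg.norm(proj) /
                      (np.linalg.norm(h) + 1e-8))
    return h + weight * steer
```

At inference time, such an edit would be applied to the chosen intermediate layers on every forward pass, leaving the model weights untouched — consistent with the training-free, zero-additional-parameter claim.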
Problem

Research questions and friction points this paper is trying to address.

Language Models
Stylistic Transfer
Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

DRESS
style transfer
large language models
Xinyu Ma
School of Computer Science, Peking University
Yifeng Xu
School of Computer Science, Peking University
Yang Lin
School of Computer Science, Peking University
Tianlong Wang
Peking University
LLM reasoning
Representation editing
Xu Chu
School of Computer Science, Peking University; Center on Frontiers of Computing Studies, Peking University; National Research and Engineering Center of Software Engineering, Peking University
Xin Gao
School of Computer Science, Peking University
Junfeng Zhao
Assistant Professor at Arizona State University, Director of BELIV Lab
Connected & Automated Vehicle
Motion Planning & Controls
Electric Vehicles
AI/ML
Yasha Wang
School of Computer Science, Peking University; National Research and Engineering Center of Software Engineering, Peking University