YaPO: Learnable Sparse Activation Steering Vectors for Domain Adaptation

📅 2026-01-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work targets fine-grained, stable model control, which is hindered by the entangled latent factors that arise from the polysemantic nature of densely activated neurons. To overcome this, the authors propose YaPO, a method that learns sparse activation intervention vectors in the latent space of a sparse autoencoder, disentangling semantics and enabling efficient, interpretable behavioral steering. Integrated into a direct preference optimization framework, YaPO learns from preference data without requiring a reference policy. Experiments show that YaPO significantly outperforms dense baselines across multiple tasks, including cultural alignment, hallucination suppression, and jailbreak defense, with faster convergence, improved training stability, and no performance degradation on the MMLU benchmark.

📝 Abstract
Steering Large Language Models (LLMs) through activation interventions has emerged as a lightweight alternative to fine-tuning for alignment and personalization. Recent work on Bi-directional Preference Optimization (BiPO) shows that dense steering vectors can be learned directly from preference data in a Direct Preference Optimization (DPO) fashion, enabling control over truthfulness, hallucinations, and safety behaviors. However, dense steering vectors often entangle multiple latent factors due to neuron multi-semanticity, limiting their effectiveness and stability in fine-grained settings such as cultural alignment, where closely related values and behaviors (e.g., among Middle Eastern cultures) must be distinguished. In this paper, we propose Yet another Policy Optimization (YaPO), a *reference-free* method that learns *sparse steering vectors* in the latent space of a Sparse Autoencoder (SAE). By optimizing sparse codes, YaPO produces disentangled, interpretable, and efficient steering directions. Empirically, we show that YaPO converges faster, achieves stronger performance, and exhibits improved training stability compared to dense steering baselines. Beyond cultural alignment, YaPO generalizes to a range of alignment-related behaviors, including hallucination, wealth-seeking, jailbreak, and power-seeking. Importantly, YaPO preserves general knowledge, with no measurable degradation on MMLU. Overall, our results show that YaPO provides a general recipe for efficient, stable, and fine-grained alignment of LLMs, with broad applications to controllability and domain adaptation. The associated code and data are publicly available at https://github.com/MBZUAI-Paris/YaPO.
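To make the core idea concrete, here is a minimal NumPy sketch of a sparse steering intervention in an SAE's latent space. This is an illustration of the general technique described in the abstract, not the paper's implementation: the dimensions, feature indices, coefficients, and the frozen random decoder are all hypothetical, and the YaPO training loop (learning the sparse code from preference data, DPO-style and reference-free) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): model hidden width and SAE dictionary size.
d_model, d_sae = 64, 512

# Frozen SAE decoder: maps sparse latent codes back into the residual stream.
W_dec = rng.standard_normal((d_sae, d_model)) / np.sqrt(d_sae)

# A learned *sparse* steering code: only a handful of SAE features are active,
# so the intervention stays disentangled and interpretable.
z = np.zeros(d_sae)
active = [3, 17, 42]          # illustrative feature indices
z[active] = [0.8, -0.5, 1.2]  # coefficients that would be learned from preference data

def steer(h, z, W_dec, alpha=1.0):
    """Add the decoded sparse steering direction to a hidden activation h."""
    return h + alpha * (z @ W_dec)

h = rng.standard_normal(d_model)  # a hidden state from some transformer layer
h_steered = steer(h, z, W_dec)

# The intervention is exactly the sum of the few active decoder directions.
assert np.allclose(h_steered - h, z[active] @ W_dec[active])
```

The contrast with dense steering (as in BiPO) is that a dense method would learn all `d_model` entries of a direction directly, while here only the few nonzero entries of `z` are optimized and the frozen decoder supplies the directions.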
Problem

Research questions and friction points this paper is trying to address.

domain adaptation
sparse activation
steering vectors
fine-grained alignment
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

sparse steering vectors
Sparse Autoencoder
domain adaptation
LLM alignment
reference-free optimization