🤖 AI Summary
Large language models (LLMs) exhibit entangled concept representations, hindering precise, targeted intervention—especially for sensitive concepts such as weapons of mass destruction (WMD).
Method: This paper introduces RepIt, a simple, data-efficient framework for isolating concept-specific representations and manipulating them directionally, including for sensitive concepts. Building on activation steering, RepIt decouples refusal behavior on a targeted concept from refusal elsewhere: the corrective signal localizes to roughly 100-200 neurons, robust target representations can be extracted from as few as a dozen examples, and both extraction and intervention run on a single A6000 GPU across five frontier LLMs (a minimal sketch of the extraction step follows this summary).
Contribution/Results: RepIt selectively suppresses over-refusal on targeted topics while preserving refusal elsewhere, producing models that answer WMD-related questions yet still score as safe on standard benchmarks, demonstrating both localized controllability and compatibility with safety requirements. The same data and compute efficiency cuts the other way: comparable manipulations could be mounted with modest resources on underrepresented topics while evading existing benchmarks, a dual-use concern the paper highlights.
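The extraction step can be pictured as a difference-of-means computation followed by neuron-level sparsification. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not RepIt's actual code: the function name `concept_vector`, the layer choice, and the random stand-in activations are all assumptions; only the rough scale (about 100 retained neurons, about a dozen examples) comes from the paper.

```python
# Hedged sketch: difference-of-means concept direction plus top-k neuron
# localization, in the spirit of activation steering. Illustrative only.
import torch

def concept_vector(acts_concept: torch.Tensor,
                   acts_baseline: torch.Tensor,
                   k: int = 100) -> tuple[torch.Tensor, torch.Tensor]:
    """Return a unit steering direction and the indices of its top-k neurons.

    acts_concept:  (n_concept, d_model) hidden states on concept prompts
    acts_baseline: (n_base, d_model) hidden states on baseline prompts
    """
    direction = acts_concept.mean(dim=0) - acts_baseline.mean(dim=0)
    # Localize: keep only the k highest-magnitude coordinates, mirroring
    # the paper's finding that the signal lives in ~100-200 neurons.
    top = direction.abs().topk(k).indices
    sparse = torch.zeros_like(direction)
    sparse[top] = direction[top]
    return sparse / sparse.norm(), top

# Random stand-in activations (a dozen examples, per the paper's claim);
# in practice these would be collected via a hook on a transformer layer.
d_model = 4096
acts_wmd = torch.randn(12, d_model) + 0.5   # hypothetical concept prompts
acts_base = torch.randn(12, d_model)        # hypothetical benign prompts
v, neurons = concept_vector(acts_wmd, acts_base, k=100)
print(v.shape, neurons[:5])
```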
📝 Abstract
While activation steering in large language models (LLMs) is a growing area of research, methods can often incur broader effects than desired. This motivates isolation of purer concept vectors to enable targeted interventions and understand LLM behavior at a more granular level. We present RepIt, a simple and data-efficient framework for isolating concept-specific representations. Across five frontier LLMs, RepIt enables precise interventions: it selectively suppresses refusal on targeted concepts while preserving refusal elsewhere, producing models that answer WMD-related questions while still scoring as safe on standard benchmarks. We further show that the corrective signal localizes to just 100-200 neurons and that robust target representations can be extracted from as few as a dozen examples on a single A6000. This efficiency raises a dual-use concern: manipulations can be performed with modest compute and data, extending to underrepresented, data-scarce topics while evading existing benchmarks. By disentangling refusal vectors with RepIt, this work demonstrates that targeted interventions can counteract overgeneralization, laying the foundation for more granular control of model behavior.
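To suppress refusal only on a targeted concept, the isolated direction can be removed from the model's hidden states at inference time. The following is a generic directional-ablation sketch using a PyTorch forward hook on a toy linear layer; it illustrates the kind of intervention the abstract describes, not the paper's exact procedure, and `make_ablation_hook` and the toy layer are invented for illustration.

```python
# Hedged sketch: ablate a concept direction from a layer's output via a
# forward hook. Generic directional ablation, not RepIt's exact method.
import torch
import torch.nn as nn

def make_ablation_hook(direction: torch.Tensor):
    d = direction / direction.norm()
    def hook(module, inputs, output):
        # Remove the component of the hidden state along `d`,
        # leaving all orthogonal (non-targeted) behavior intact.
        return output - (output @ d).unsqueeze(-1) * d
    return hook

# Toy stand-in for one transformer block's output.
layer = nn.Linear(64, 64)
refusal_dir = torch.randn(64)   # e.g. a direction from RepIt-style extraction
handle = layer.register_forward_hook(make_ablation_hook(refusal_dir))

h = layer(torch.randn(2, 64))   # hidden states with the direction removed
assert torch.allclose(h @ (refusal_dir / refusal_dir.norm()),
                      torch.zeros(2), atol=1e-5)
handle.remove()
```

Because the direction is sparse and low-rank, such an edit leaves behavior on unrelated inputs essentially unchanged, which is consistent with the abstract's claim that edited models still score as safe on standard benchmarks.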