Unsupervised decoding of encoded reasoning using language model interpretability

📅 2025-11-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) may evade human oversight by encoding their reasoning steps (e.g., in ROT-13), rendering internal thought processes non-human-readable and undermining interpretability-based safety mechanisms. Method: This work evaluates the resilience of existing interpretability techniques against such encoded reasoning. It proposes an unsupervised decoding framework that combines logit lens analysis of internal representations with automated paraphrasing to reconstruct encoded chains of thought end-to-end. Contribution/Results: Applied to DeepSeek-R1-Distill-Llama-70B fine-tuned to reason in ROT-13, the framework decodes the hidden reasoning with accuracy peaking in intermediate-to-late transformer layers and reconstructs complete reasoning transcripts with substantial accuracy, without labeled data. These results suggest that mainstream mechanistic interpretability methods are more robust to simple symbolic encodings than previously understood, extending the reach of human supervision to non-human-readable reasoning and supporting scalable, automated oversight of latent model behavior.
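For context, ROT-13 rotates each letter 13 places in the alphabet, so applying it twice is the identity: the same transform both encodes and decodes. A minimal illustration (this snippet is for background only, not from the paper):

```python
import codecs

# ROT-13 is its own inverse: encoding twice recovers the original text.
encoded = codecs.encode("The answer is forty-two", "rot13")
decoded = codecs.encode(encoded, "rot13")
print(encoded)  # Gur nafjre vf sbegl-gjb
print(decoded)  # The answer is forty-two
```

Because the cipher is a fixed character substitution, a model reasoning in ROT-13 produces chains of thought that are unreadable to a human skimming the transcript, yet trivially reversible once detected.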

📝 Abstract
As large language models become increasingly capable, there is growing concern that they may develop reasoning processes that are encoded or hidden from human oversight. To investigate whether current interpretability techniques can penetrate such encoded reasoning, we construct a controlled testbed by fine-tuning a reasoning model (DeepSeek-R1-Distill-Llama-70B) to perform chain-of-thought reasoning in ROT-13 encryption while maintaining intelligible English outputs. We evaluate mechanistic interpretability methods, in particular logit lens analysis, on their ability to decode the model's hidden reasoning process using only internal activations. We show that logit lens can effectively translate encoded reasoning, with accuracy peaking in intermediate-to-late layers. Finally, we develop a fully unsupervised decoding pipeline that combines logit lens with automated paraphrasing, achieving substantial accuracy in reconstructing complete reasoning transcripts from internal model representations. These findings suggest that current mechanistic interpretability techniques may be more robust to simple forms of encoded reasoning than previously understood. Our work provides an initial framework for evaluating interpretability methods against models that reason in non-human-readable formats, contributing to the broader challenge of maintaining oversight over increasingly capable AI systems.
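The logit-lens step the abstract refers to can be sketched as follows: an intermediate residual-stream activation is projected through the model's final normalization and unembedding matrix, as if the remaining transformer layers were skipped, yielding a token distribution at that layer. The sketch below uses toy dimensions and random weights; the function names and the Llama-style RMSNorm are assumptions for illustration, not the paper's code:

```python
import numpy as np

def rms_norm(h, gain, eps=1e-6):
    # Llama-style RMSNorm: divide by the root-mean-square, apply learned gain.
    return h / np.sqrt(np.mean(h**2, axis=-1, keepdims=True) + eps) * gain

def logit_lens(h_layer, gain, W_U):
    # Project a layer-ell residual-stream state straight through the final
    # norm and unembedding, skipping all remaining layers.
    logits = rms_norm(h_layer, gain) @ W_U   # (seq, vocab)
    return logits.argmax(axis=-1)            # most likely token per position

rng = np.random.default_rng(0)
d_model, vocab, seq = 64, 100, 5             # toy sizes, not the 70B model's
h = rng.normal(size=(seq, d_model))          # stand-in layer activations
gain = np.ones(d_model)
W_U = rng.normal(size=(d_model, vocab))      # stand-in unembedding matrix
tokens = logit_lens(h, gain, W_U)
print(tokens.shape)  # (5,)
```

In the paper's setting, reading out tokens this way at intermediate-to-late layers is what surfaces the plaintext reasoning even though the model's emitted chain of thought is ROT-13 encoded.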
Problem

Research questions and friction points this paper is trying to address.

Decode hidden reasoning in language models using interpretability
Evaluate logit lens on encrypted chain-of-thought reasoning
Develop unsupervised pipeline to reconstruct reasoning from activations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Logit lens decodes encrypted reasoning from activations
Unsupervised pipeline combines logit lens with paraphrasing
Framework evaluates interpretability on non-human-readable reasoning