Latent Debate: A Surrogate Framework for Interpreting LLM Thinking

📅 2025-12-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
The internal reasoning mechanisms of large language models (LLMs) and the origins of hallucination remain poorly understood. Method: We propose the “Latent Debate” framework, the first to explicitly model implicit pro-con argumentation within a single forward pass of a single LLM. Using a task-agnostic symbolic instantiation strategy, we decompose intermediate-layer representations on True/False prediction tasks. Contribution: We find a significant correlation between latent debate intensity in the middle layers and hallucination risk: a high degree of mid-layer debate is linked to a higher risk of hallucination, offering a novel interpretability lens for LLMs. Furthermore, the constructed surrogate “reasoning agent” faithfully reproduces the original model’s predictions while supporting accurate hallucination detection, demonstrating both strong faithfulness and interpretability. This establishes a new baseline for faithful and explainable hallucination diagnosis.

📝 Abstract
Understanding the internal thinking process of Large Language Models (LLMs) and the cause of hallucinations remains a key challenge. To this end, we introduce latent debate, a novel framework for interpreting model predictions through the lens of implicit internal arguments. Unlike current work on self-consistency and multi-agent debate, which relies on explicit debates among multiple answers or multiple models, latent debate captures the hidden supporting and attacking signals that arise within a single model during a single inference. We first present a model- and task-agnostic conceptual framework, and then instantiate it symbolically to approximate the thinking process of LLMs on True/False prediction tasks. Empirical studies demonstrate that latent debate is a faithful structured surrogate model whose predictions are highly consistent with those of the original LLM. Beyond interpretability, we demonstrate that latent debate provides a strong baseline for hallucination detection. Further analysis reveals strong correlations between hallucinations and debate patterns; for example, a high degree of latent debate in the middle layers is linked to a higher risk of hallucinations. These findings position latent debate as a potential framework for understanding the internal mechanisms of LLMs, especially in scenarios where internal (dis)agreements arise during inference.
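The abstract does not spell out the symbolic instantiation, but the core idea (per-layer supporting and attacking signals toward a True/False answer) can be illustrated with a logit-lens-style sketch: project each intermediate layer's hidden state onto hypothetical "True" and "False" readout directions, and measure debate intensity in a layer band as how often adjacent layers flip sides. All function names, the readout directions, and the flip-based intensity metric below are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def latent_debate_scores(layer_states, w_true, w_false):
    """Per-layer margin of support for "True" over "False".

    layer_states: (num_layers, d) array of intermediate hidden states.
    w_true, w_false: (d,) hypothetical readout directions for the two
    answers (a logit-lens-style approximation, not the paper's method).
    Returns a (num_layers,) array: positive = layer supports "True",
    negative = layer attacks it.
    """
    logits = np.stack([layer_states @ w_true, layer_states @ w_false], axis=1)
    return logits[:, 0] - logits[:, 1]

def debate_intensity(margin, lo, hi):
    """Latent debate intensity in layers [lo, hi): the fraction of
    adjacent layer pairs whose support sign flips (pro <-> con)."""
    band = np.sign(margin[lo:hi])
    flips = np.sum(band[1:] != band[:-1])
    return flips / max(len(band) - 1, 1)
```

Under this toy metric, a band where every other layer switches sides scores near 1.0 (intense debate), while a band that agrees throughout scores 0.0; per the paper's finding, a high score in the middle-layer band would flag elevated hallucination risk.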
Problem

Research questions and friction points this paper is trying to address.

How can the internal thinking of an LLM be interpreted as implicit arguments?
How can hallucinations be detected from hidden debate patterns?
How can a faithful surrogate model be built from a single model's single inference?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent debate framework for interpreting an LLM's internal arguments
Captures hidden supporting and attacking signals within a single model
Provides a structured surrogate for hallucination detection and analysis