Fractional Reasoning via Latent Steering Vectors Improves Inference Time Compute

📅 2025-06-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM inference methods—such as Best-of-N, majority voting, and self-reflection—apply a fixed reasoning depth to all inputs, ignoring inherent variations in problem complexity and leading to inflexible computational resource allocation. This work proposes a training-free, model-agnostic reasoning framework that dynamically adjusts reasoning depth per input via a latent-space steering vector applied with a tunable scaling factor. It introduces a continuous, controllable reasoning-intensity mechanism—overcoming the limitations of discrete prompt-based control—and unifies broad (e.g., ensemble) and deep (e.g., chain-of-thought refinement) test-time scaling paradigms. The method comprises zero-shot steering vector extraction, disentanglement of reasoning-path features, and intensity interpolation. Evaluated on GSM8K, MATH500, and GPQA, it consistently improves accuracy across multiple LLMs, combining the strengths of Best-of-N sampling and self-reflection while requiring no fine-tuning.
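The core mechanism—extracting a "deeper reasoning" direction in activation space and reapplying it with a tunable intensity—can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the steering vector is taken as a mean activation difference between reasoning-prompted and plain runs (a common extraction recipe), and the function names, the norm-preserving rescale, and the toy data are all hypothetical.

```python
import numpy as np

def extract_steering_vector(plain_acts, reasoning_acts):
    """Steering vector as the mean activation difference between
    reasoning-prompted and plain forward passes. This difference-of-means
    recipe is a common choice; the paper's exact extraction may differ."""
    return reasoning_acts.mean(axis=0) - plain_acts.mean(axis=0)

def apply_fractional_steering(hidden, v, alpha):
    """Add the steering vector with a tunable intensity alpha.
    alpha = 0 leaves the hidden state unchanged; larger alpha pushes
    it further along the 'deeper reasoning' direction."""
    steered = hidden + alpha * v
    # Rescale to preserve the hidden state's norm, a common trick
    # (assumed here, not stated in the summary) to keep activations
    # in-distribution.
    return steered * (np.linalg.norm(hidden) / np.linalg.norm(steered))

# Toy demonstration with random activations (hidden size 8).
rng = np.random.default_rng(0)
plain = rng.normal(size=(16, 8))
reasoning = plain + 0.5           # pretend reasoning shifts activations
v = extract_steering_vector(plain, reasoning)
h = rng.normal(size=8)
h_steered = apply_fractional_steering(h, v, alpha=0.8)
```

Because alpha is a continuous scalar rather than a discrete prompt choice, reasoning intensity can be interpolated per input—e.g., swept over a small grid and scored, which is what distinguishes this scheme from fixed instructional prompting.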

📝 Abstract
Test-time compute has emerged as a powerful paradigm for improving the performance of large language models (LLMs), where generating multiple outputs or refining individual chains can significantly boost answer accuracy. However, existing methods like Best-of-N, majority voting, and self-reflection typically apply reasoning in a uniform way across inputs, overlooking the fact that different problems may require different levels of reasoning depth. In this work, we propose Fractional Reasoning, a training-free and model-agnostic framework that enables continuous control over reasoning intensity at inference time, going beyond the limitations of fixed instructional prompts. Our method operates by extracting the latent steering vector associated with deeper reasoning and reapplying it with a tunable scaling factor, allowing the model to tailor its reasoning process to the complexity of each input. This supports two key modes of test-time scaling: (1) improving output quality in breadth-based strategies (e.g., Best-of-N, majority voting), and (2) enhancing the correctness of individual reasoning chains in depth-based strategies (e.g., self-reflection). Experiments on GSM8K, MATH500, and GPQA demonstrate that Fractional Reasoning consistently improves performance across diverse reasoning tasks and models.
Problem

Research questions and friction points this paper is trying to address.

Fixed reasoning depth across inputs ignores variation in problem complexity
Prompt-based control of reasoning intensity is discrete and cannot be tuned continuously
Breadth-based (e.g., Best-of-N) and depth-based (e.g., self-reflection) test-time scaling are treated as separate paradigms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extracts latent steering vectors to control reasoning depth without fine-tuning
Applies a tunable scaling factor to adjust reasoning intensity per input
Improves both breadth-based (Best-of-N, majority voting) and depth-based (self-reflection) strategies