Not All Queries Need Deep Thought: CoFiCot for Adaptive Coarse-to-fine Stateful Refinement

📅 2026-03-09
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work proposes CoFiCot, a framework addressing the inefficiency of uniform computational resource allocation in large language models during reasoning, which often leads to over-correction on simple tasks and under-correction on complex ones. CoFiCot introduces a multi-metric difficulty classifier that dynamically assesses problem complexity by integrating semantic entropy, consensus reliability, and predicted reasoning depth. Based on this assessment, it applies differentiated refinement strategies: efficient aggregation for simple queries and a context-aware, stateful sequential correction loop for complex ones. Coupled with a process reward model, this approach balances fine-grained error localization with global logical consistency, mitigating resource waste and performance bottlenecks while improving both reasoning quality and efficiency across tasks of varying difficulty.

πŸ“ Abstract
Scaling test-time computation enhances LLM reasoning ability but faces a uniform computation paradox. Allocating identical resources leads to over-correction on simple tasks and insufficient refinement on complex ones. To address this, we propose CoFiCot, a coarse-to-fine adaptive framework that dynamically tailors inference strategies to problem difficulty. Specifically, we implement a multi-metric classifier that triages queries by synthesizing semantic entropy, consensus reliability, and predicted reasoning depth. This enables a differentiated refinement stage that applies efficient aggregation for simple queries while routing complex ones to a context-aware correction loop. We formalize correction as a stateful sequential propagation process, where each repair is strictly conditioned on the verified history of prior rectifications. By integrating Process Reward Models (PRMs) within this state-dependent trajectory, CoFiCot effectively bridges the gap between granular error localization and global logical coherence, preventing the context fragmentation typical of stateless refinement methods.
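The triage-then-route idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`semantic_entropy`, `consensus_reliability`, `triage`, `refine`), the thresholds, and the use of plain Shannon entropy over exact-match answers (rather than semantic clustering) are all simplifying assumptions, and the PRM verifier is stubbed as a caller-supplied `corrector`.

```python
import math
from collections import Counter

def semantic_entropy(answers):
    """Shannon entropy (nats) over sampled answers. A crude stand-in for
    semantic entropy, which would first cluster answers by meaning."""
    n = len(answers)
    return -sum((c / n) * math.log(c / n) for c in Counter(answers).values())

def consensus_reliability(answers):
    """Fraction of samples agreeing with the majority answer."""
    return Counter(answers).most_common(1)[0][1] / len(answers)

def triage(answers, entropy_max=0.7, consensus_min=0.6):
    """Coarse difficulty triage: confident, agreeing samples -> 'simple'.
    Thresholds here are illustrative, not values from the paper."""
    if (semantic_entropy(answers) <= entropy_max
            and consensus_reliability(answers) >= consensus_min):
        return "simple"
    return "complex"

def refine(answers, corrector, max_steps=3):
    """Differentiated refinement: simple queries get majority-vote
    aggregation; complex ones enter a stateful correction loop where each
    repair is conditioned on the verified history of prior rectifications."""
    if triage(answers) == "simple":
        return Counter(answers).most_common(1)[0][0]
    history = []          # verified prior rectifications
    candidate = answers[0]
    for _ in range(max_steps):
        # corrector stands in for PRM-guided repair: it returns a revised
        # candidate plus a verification flag
        candidate, verified = corrector(candidate, history)
        if verified:
            history.append(candidate)
    return candidate
```

With eight of ten samples agreeing, entropy is about 0.50 nats and consensus 0.8, so the query is routed to cheap aggregation; five distinct answers (entropy ≈ 1.61) trigger the sequential correction path instead.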
Problem

Research questions and friction points this paper is trying to address.

test-time computation
uniform computation paradox
adaptive refinement
reasoning ability
stateful correction
Innovation

Methods, ideas, or system contributions that make the work stand out.

adaptive refinement
stateful reasoning
Process Reward Models
semantic entropy
coarse-to-fine inference