🤖 AI Summary
Reasoning large language models (RLLMs) excel at structured, multi-step reasoning, but their potential under in-context learning (ICL) remains underexplored, particularly the structural distinction between deliberate ("thinking") responses and direct, non-analytical ("no-thinking") ones.
Method: We propose JointThinking, a training-free ICL paradigm that exploits the structural difference between the "thinking" and "no-thinking" reasoning modes. The model generates an answer in each mode in parallel; only when the two answers disagree does it trigger a single round of on-demand calibration, prompting a second round of Thinking that incorporates the original question and both candidate answers. Because disagreement is rare, most queries finish in one round of reasoning.
Contribution/Results: JointThinking significantly outperforms few-shot chain-of-thought and majority voting across multiple reasoning benchmarks. It matches the state-of-the-art training-based method in-distribution and substantially outperforms it out-of-distribution, demonstrating stronger robustness and generalization. The approach is training-free, computationally efficient, and inherently adaptive to task difficulty.
📝 Abstract
Reasoning large language models (RLLMs) have recently demonstrated remarkable capabilities through structured and multi-step reasoning. While prior research has primarily focused on improving their training and inference strategies, their potential for in-context learning (ICL) remains largely underexplored. To fill this gap, we propose Thinking with Nothinking Calibration (JointThinking), a new ICL paradigm that leverages the structural difference between two reasoning modes, i.e., Thinking and Nothinking, to improve reasoning accuracy. Specifically, our method prompts the model to generate two answers in parallel: one in Thinking mode and the other in Nothinking mode. A second round of Thinking is triggered only when the two initial responses are inconsistent, using a single prompt that incorporates the original question and both candidate answers. Since such disagreement occurs infrequently (e.g., only 6% in GSM8K), our method performs just one round of reasoning in most cases, resulting in minimal latency overhead. Extensive experiments across multiple reasoning benchmarks demonstrate that JointThinking significantly outperforms few-shot chain-of-thought (CoT) and majority voting with improved answer robustness. Moreover, it achieves in-distribution performance comparable to the training-based SOTA method, while substantially outperforming it on out-of-distribution tasks. We further conduct a systematic analysis of the calibration mechanism, showing that leveraging different reasoning modes consistently lowers the error rate, highlighting the value of structural thinking diversity. Additionally, we observe that the performance gap between actual and ideal reasoning narrows as model size increases in the second round of thinking, indicating the strong scalability of our approach. Finally, we discuss current limitations and outline promising directions for future ICL research in RLLMs.
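The control flow described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `query_model` is a hypothetical stand-in for an RLLM call (here stubbed with fixed answers so the flow is runnable), and the calibration prompt wording is an assumption.

```python
# Sketch of the JointThinking control flow: generate a Thinking and a
# Nothinking answer, and run one calibration round only on disagreement.

def query_model(prompt: str, mode: str) -> str:
    """Hypothetical model call. In the real method, 'thinking' elicits
    explicit multi-step reasoning and 'nothinking' a direct answer.
    Stubbed here with fixed outputs so the control flow can execute."""
    answers = {"thinking": "72", "nothinking": "68"}
    return answers[mode]

def joint_thinking(question: str) -> str:
    # Round 1: one answer per mode (generated in parallel in the paper;
    # sequential here for clarity).
    a_think = query_model(question, mode="thinking")
    a_nothink = query_model(question, mode="nothinking")

    # Agreement: accept the shared answer; no second round is needed.
    if a_think == a_nothink:
        return a_think

    # Disagreement (rare, e.g. ~6% of GSM8K questions): a single second
    # round of Thinking sees the question plus both candidate answers.
    calibration_prompt = (
        f"{question}\n"
        f"Candidate answer A (Thinking): {a_think}\n"
        f"Candidate answer B (Nothinking): {a_nothink}\n"
        "Reconsider the question carefully and give a final answer."
    )
    return query_model(calibration_prompt, mode="thinking")
```

Because the second round fires only on disagreement, the expected latency stays close to a single round of reasoning while the calibration step concentrates extra compute on exactly the questions where the two modes diverge.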