🤖 AI Summary
This work addresses two pervasive issues in large language model (LLM) reasoning: hallucination (generating factually incorrect yet plausible content) and laziness (excessive refusal to answer). We propose the Automatic Curriculum Expert Iteration (Auto-CEI) framework, which integrates expert iteration with strategy-neighborhood exploration, an adaptive, step-length-aware curriculum reward mechanism, and a capability-boundary-aligned reinforcement learning paradigm to enable real-time reasoning-path correction and well-timed refusals. Our key contribution is the first joint modeling of dynamic reward curricula and expert iteration, achieving a novel balance between assertiveness and conservativeness in reasoning behavior. Auto-CEI consistently outperforms state-of-the-art methods across logical reasoning, mathematical problem solving, and planning benchmarks, significantly improving answer faithfulness and response reasonableness. The implementation is publicly available.
📝 Abstract
Hallucinations (i.e., generating plausible but inaccurate content) and laziness (i.e., excessive refusals or defaulting to "I don't know") persist as major challenges in LLM reasoning. Current efforts to reduce hallucinations primarily focus on factual errors in knowledge-grounded tasks, often neglecting hallucinations that arise from faulty reasoning. Meanwhile, some approaches render LLMs overly conservative, limiting their problem-solving capabilities. To mitigate hallucination and laziness in reasoning tasks, we propose Automatic Curriculum Expert Iteration (Auto-CEI) to enhance LLM reasoning and align responses to the model's capabilities: assertively answering within its limits and declining when tasks exceed them. In our method, Expert Iteration explores the reasoning trajectories near the LLM policy, guiding incorrect paths back on track to reduce compounding errors and improve robustness; it also promotes appropriate "I don't know" responses after sufficient reasoning attempts. The curriculum automatically adjusts rewards, incentivizing extended reasoning before acknowledging incapability, thereby pushing the limits of LLM reasoning and aligning its behaviour with these limits. We compare Auto-CEI with various SOTA baselines across logical reasoning, mathematics, and planning tasks, where Auto-CEI achieves superior alignment by effectively balancing assertiveness and conservativeness. The code is available at https://github.com/SalesforceAIResearch/Auto-CEI.
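The curriculum idea described above — rewarding refusals only after sufficient reasoning effort, and automatically adjusting that bar — can be sketched in a few lines. This is a minimal illustration, not the Auto-CEI implementation: the function names, reward values, and the refusal-rate update rule are all hypothetical assumptions chosen for clarity.

```python
def curriculum_reward(outcome: str, num_steps: int, threshold: float) -> float:
    """Score one reasoning trajectory (illustrative values only).

    outcome:    "correct", "wrong", or "idk" (the model declined to answer).
    num_steps:  reasoning steps taken before the final answer or refusal.
    threshold:  curriculum parameter; higher values demand longer reasoning
                before a refusal earns a positive reward.
    """
    if outcome == "correct":
        return 1.0    # assertive and right: best case
    if outcome == "wrong":
        return -1.0   # confident but wrong answer (hallucination) is penalized
    # Refusal is rewarded only after sufficient reasoning effort,
    # discouraging lazy early "I don't know" responses.
    return 0.5 if num_steps >= threshold else -0.5


def update_threshold(threshold: float, refusal_rate: float,
                     target_rate: float = 0.2, step: float = 1.0) -> float:
    """Hypothetical automatic curriculum update: if the policy refuses too
    often, raise the bar a refusal must clear; if too rarely, lower it."""
    if refusal_rate > target_rate:
        return threshold + step
    return max(0.0, threshold - step)
```

Under this sketch, a trajectory that refuses after only 2 steps with a threshold of 5 is penalized, while one that refuses after 6 steps is rewarded; iterating the threshold update then nudges the policy toward the boundary of its actual capability.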