🤖 AI Summary
This work addresses the susceptibility of large language models to superficial cues and erroneous assumptions in long-context reasoning, which often yields unreliable plans that are difficult to correct after the fact. To mitigate this, the authors propose a proactive planning framework that identifies potential logical pitfalls and faulty premises before plan generation and explicitly models them as negative constraints. These constraints are then integrated into the planning process to steer plan generation away from the identified risks. By incorporating negative constraints upfront rather than refining plans reactively, the approach substantially improves reasoning reliability. Experiments on multiple long-context question-answering benchmarks show that the method outperforms existing plan-and-execute paradigms and direct prompting strategies, confirming its effectiveness and robustness.
📝 Abstract
Large language models (LLMs) struggle with reasoning over long contexts where relevant information is sparsely distributed. Although plan-and-execute frameworks mitigate this by decomposing tasks into planning and execution, their effectiveness is often limited by unreliable plan generation caused by dependence on surface-level cues. Consequently, plans may rest on incorrect assumptions, and once a plan is formed, identifying what went wrong and revising it reliably is difficult, limiting the effectiveness of reactive refinement. To address this limitation, we propose PPA-Plan, a proactive planning strategy for long-context reasoning that prevents such failures before plan generation. PPA-Plan identifies potential logical pitfalls and false assumptions, formulates them as negative constraints, and conditions plan generation on explicitly avoiding the failure modes they describe. Experiments on long-context QA benchmarks show that executing plans generated by PPA-Plan consistently outperforms existing plan-and-execute methods and direct prompting.
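The two-stage flow described above (identify pitfalls first, then condition plan generation on avoiding them) can be sketched as follows. This is a minimal illustration in the spirit of the abstract, not the paper's implementation: the prompt wording, function names, and the `call_llm` interface are all assumptions.

```python
from typing import Callable, List

def identify_negative_constraints(call_llm: Callable[[str], str],
                                  question: str, context: str) -> List[str]:
    """Stage 1 (assumed): elicit likely pitfalls BEFORE any plan exists."""
    prompt = (
        "Before planning, list logical pitfalls or false assumptions a "
        f"solver might make for this question.\nQuestion: {question}\n"
        f"Context: {context}\nOne item per line:"
    )
    # Each non-empty line of the response is treated as one negative constraint.
    return [line.strip() for line in call_llm(prompt).splitlines() if line.strip()]

def generate_plan(call_llm: Callable[[str], str],
                  question: str, context: str, constraints: List[str]) -> str:
    """Stage 2 (assumed): condition plan generation on the constraints."""
    avoid = "\n".join(f"- Do NOT: {c}" for c in constraints)
    prompt = (
        "Write a step-by-step plan to answer the question, explicitly "
        f"avoiding the listed failure modes.\nQuestion: {question}\n"
        f"Context: {context}\nNegative constraints:\n{avoid}\nPlan:"
    )
    return call_llm(prompt)
```

A plan produced this way would then be handed to a standard executor, as in other plan-and-execute pipelines; only the planning stage differs.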