Learning to Plan Before Answering: Self-Teaching LLMs to Learn Abstract Plans for Problem Solving

📅 2025-04-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing LLM self-training paradigms tend to generate detail-oriented synthetic data, hindering the extraction of generalizable, abstract meta-knowledge. Method: This paper proposes LEPA, an approach that brings high-level abstract planning, inspired by cognitive science, into LLM post-training. LEPA prompts the model to first generate prospective, abstract anticipatory plans, then solve problems grounded in those plans; self-reflective plan refinement and joint plan–solution training allow the two to be co-optimized within a self-generated-data framework. Contribution/Results: On multiple natural language reasoning benchmarks, LEPA significantly outperforms state-of-the-art self-training methods, consistently improving both generalization and reasoning robustness. Empirical results indicate that abstract planning enhances LLMs' capacity to acquire and transfer meta-knowledge.

📝 Abstract
In the field of large language model (LLM) post-training, the effectiveness of utilizing synthetic data generated by the LLM itself has been well demonstrated. However, a key question remains unaddressed: what essential information should such self-generated data encapsulate? Existing approaches only produce step-by-step problem solutions, and fail to capture the abstract meta-knowledge necessary for generalization across similar problems. Drawing insights from cognitive science, where humans employ high-level abstraction to simplify complex problems before delving into specifics, we introduce a novel self-training algorithm: LEarning to Plan before Answering (LEPA). LEPA trains the LLM to formulate anticipatory plans, which serve as abstract meta-knowledge for problem-solving, before engaging with the intricacies of problems. This approach not only outlines the solution generation path but also shields the LLM from the distraction of irrelevant details. During data generation, LEPA first crafts an anticipatory plan based on the problem, and then generates a solution that aligns with both the plan and the problem. LEPA refines the plan through self-reflection, aiming to acquire plans that are instrumental in yielding correct solutions. During model optimization, the LLM is trained to predict both the refined plans and the corresponding solutions. By efficiently extracting and utilizing the anticipatory plans, LEPA demonstrates remarkable superiority over conventional algorithms on various challenging natural language reasoning benchmarks.
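The data-generation loop described in the abstract (plan, solve, self-reflect, retrain) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `Model` interface, method names, prompt contents, and the retry budget are all assumptions introduced here for clarity.

```python
def lepa_generate_data(model, problems, check_answer, max_reflections=3):
    """Sketch of one LEPA data-generation iteration.

    For each problem: draft an abstract anticipatory plan, generate a
    solution conditioned on that plan, and refine the plan via
    self-reflection until the solution passes a correctness check.
    Returns (problem, plan, solution) triples; the model would then be
    fine-tuned to predict both the refined plan and the solution.
    """
    training_data = []
    for problem in problems:
        plan = model.make_plan(problem)  # anticipatory, abstract plan
        for _ in range(max_reflections):
            solution = model.solve(problem, plan)  # solution grounded in plan
            if check_answer(problem, solution):  # keep only verified pairs
                training_data.append((problem, plan, solution))
                break
            # Self-reflection: revise the plan using the failed attempt.
            plan = model.reflect(problem, plan, solution)
    return training_data


class ToyModel:
    """Deterministic stand-in for an LLM, for illustration only."""

    def make_plan(self, problem):
        return "plan-v1"

    def solve(self, problem, plan):
        # Succeeds only after the plan has been refined once.
        return "42" if plan == "plan-v2" else "wrong"

    def reflect(self, problem, plan, solution):
        return "plan-v2"


data = lepa_generate_data(ToyModel(), ["2*21?"], lambda p, s: s == "42")
```

With the toy model above, the first solution fails the check, self-reflection revises the plan, and the second attempt produces a verified (problem, plan, solution) triple for training.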
Problem

Research questions and friction points this paper is trying to address.

How to teach LLMs abstract planning for problem-solving
Improving generalization with self-generated meta-knowledge
Enhancing reasoning via anticipatory plans and self-reflection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-training LLMs to generate abstract anticipatory plans
Refining plans through self-reflection for better solutions
Training LLMs to predict both plans and solutions
Jin Zhang
Institute for Interdisciplinary Information Sciences, Tsinghua University, China
Flood Sung
Moonshot AI
Foundation Models · LLM/VLM · Agent · Reinforcement Learning · Meta Learning
Zhilin Yang
Carnegie Mellon University
Deep Learning · Machine Learning · Natural Language Processing
Yang Gao
Institute for Interdisciplinary Information Sciences, Tsinghua University, China
Chongjie Zhang
Washington University in St. Louis