🤖 AI Summary
Existing LLM self-training paradigms generate detail-oriented synthetic data, which hinders the extraction of generalizable, abstract meta-knowledge. Method: This paper proposes LEPA—the first approach to incorporate high-level abstract planning, inspired by cognitive science, into LLM post-training. LEPA prompts the model to first generate anticipatory abstract plans and then solve problems grounded in those plans; joint plan–solution modeling and self-reflective plan refinement enable the two to be optimized together within a two-stage self-generated-data training framework. Contribution/Results: On multiple natural language reasoning benchmarks, LEPA significantly outperforms state-of-the-art self-training methods, consistently improving both generalization and reasoning robustness. The empirical results indicate that abstract planning substantially enhances LLMs’ capacity for meta-knowledge transfer.
📝 Abstract
In the field of large language model (LLM) post-training, the effectiveness of utilizing synthetic data generated by the LLM itself has been well-demonstrated. However, a key question remains unaddressed: what essential information should such self-generated data encapsulate? Existing approaches only produce step-by-step problem solutions and fail to capture the abstract meta-knowledge necessary for generalization across similar problems. Drawing insights from cognitive science, where humans employ high-level abstraction to simplify complex problems before delving into specifics, we introduce a novel self-training algorithm: LEarning to Plan before Answering (LEPA). LEPA trains the LLM to formulate anticipatory plans, which serve as abstract meta-knowledge for problem-solving, before engaging with the intricacies of problems. This approach not only outlines the solution generation path but also shields the LLM from the distraction of irrelevant details. During data generation, LEPA first crafts an anticipatory plan based on the problem, and then generates a solution that aligns with both the plan and the problem. LEPA refines the plan through self-reflection, aiming to acquire plans that are instrumental in yielding correct solutions. During model optimization, the LLM is trained to predict both the refined plans and the corresponding solutions. By efficiently extracting and utilizing the anticipatory plans, LEPA demonstrates remarkable superiority over conventional algorithms on various challenging natural language reasoning benchmarks.
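The two stages described in the abstract—self-reflective data generation and training on the refined plan–solution pairs—can be sketched as a simple loop. This is a minimal illustration, not the paper's implementation: `llm`, `is_correct`, and `fine_tune` are hypothetical stand-ins for the model's generation call, a solution checker, and a supervised fine-tuning step, and the prompt strings are placeholders.

```python
def lepa_generate(problem, llm, is_correct, max_reflections=3):
    """Stage 1 (sketch): craft an anticipatory plan, solve under it,
    and refine the plan via self-reflection until the solution is correct."""
    plan = llm("PLAN: " + problem)  # anticipatory, abstract plan
    for _ in range(max_reflections + 1):
        solution = llm("SOLVE: " + problem + " | " + plan)
        if is_correct(problem, solution):
            return plan, solution  # keep only plans that yielded correct solutions
        # self-reflection: revise the plan in light of the failed attempt
        plan = llm("REFLECT: " + problem + " | " + plan + " | " + solution)
    return None  # discard problems the model never solved


def lepa_round(problems, llm, is_correct, fine_tune):
    """Stage 2 (sketch): train the LLM to predict both the refined plan
    and the corresponding solution, so the abstract plan is learned jointly."""
    dataset = []
    for problem in problems:
        result = lepa_generate(problem, llm, is_correct)
        if result is not None:
            plan, solution = result
            dataset.append((problem, plan + "\n" + solution))
    return fine_tune(dataset)
```

In a full self-training setup this round would be iterated, with each fine-tuned model regenerating data for the next round; the stubs above only fix the control flow, not the prompting or training details.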