🤖 AI Summary
Large language models (LLMs) often select misleading in-context exemplars for planning tasks, because superficial problem similarity can surface false positives, leading to erroneous reasoning.
Method: We propose GRASE-DC, an in-context learning (ICL) optimization framework driven by action-sequence similarity (AS). GRASE-DC introduces an AS-based exemplar-relevance signal and combines a two-stage pipeline of resampling and dynamic clustering with iterative validation (GRASE-DC* + VAL) to jointly improve exemplar relevance and diversity while enabling cross-distribution generalization.
Contributions/Results: Evaluated across multiple planning benchmarks, GRASE-DC achieves absolute accuracy gains of up to ~11–40 points while needing 27.3% fewer exemplars on average; iterating with a validator (GRASE-DC* + VAL) boosts performance by a further 18.9%. When harder problems are prompted with simpler problems as exemplars, it improves accuracy by ~24 absolute points over a random baseline. The method consistently improves planning robustness across backbone LLMs and demonstrates generalization to out-of-distribution problems.
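The paper does not fix a single formula for action-sequence similarity in this summary; as an illustration only, one natural instantiation is a normalized longest-common-subsequence (LCS) score over plans viewed as lists of action strings (the function names below are hypothetical, not the authors'):

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two action lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def action_sequence_similarity(plan_a, plan_b):
    """Normalized LCS similarity in [0, 1] between two plans.

    Unlike problem-side similarity, this compares the *solution* side:
    two problems count as similar only if their plans share a long
    common subsequence of actions.
    """
    if not plan_a or not plan_b:
        return 0.0
    return lcs_length(plan_a, plan_b) / max(len(plan_a), len(plan_b))
```

An order-sensitive metric like LCS matters here because plans are sequences: two plans using the same actions in different orders solve very different problems.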
📝 Abstract
Planning is essential for artificial intelligence systems to look ahead and proactively determine a course of action to reach objectives in the virtual and real world. Recent work on large language models (LLMs) sheds light on their planning capability in various tasks. However, it remains unclear which signals in the context influence model performance. In this work, we explore how to improve the model's planning capability through in-context learning (ICL), specifically, which signals can help select the exemplars. Through extensive experiments, we observe that the commonly used problem similarity may yield false positives, problems whose plans are drastically different, which can mislead the model. In response, we propose to sample and filter exemplars leveraging plan-side action-sequence similarity (AS). We propose GRASE-DC: a two-stage pipeline that first re-samples high-AS exemplars and then curates the selected exemplars with dynamic clustering on AS to achieve a balance of relevance and diversity. Our experimental results confirm that GRASE-DC achieves significant performance improvement on various planning tasks (up to ~11–40 points of absolute accuracy improvement with 27.3% fewer exemplars needed on average). With GRASE-DC* + VAL, where we iteratively apply GRASE-DC with a validator, we boost performance by a further 18.9%. Extensive analysis validates the consistent performance improvement of GRASE-DC with various backbone LLMs and on both classical-planning and natural-language planning benchmarks. GRASE-DC can further boost planning accuracy by ~24 absolute points on harder problems using simpler problems as exemplars, compared with a random baseline. This demonstrates its ability to generalize to out-of-distribution problems.
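The two-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it scores candidate exemplars against a proxy plan for the test problem (e.g. an initial LLM draft), keeps a high-AS pool, and then greedily drops near-duplicates as a crude stand-in for dynamic clustering. The Jaccard similarity used here is a simplified AS proxy, and all names are hypothetical:

```python
def jaccard(a, b):
    """Set-overlap similarity between two action sequences (crude AS proxy)."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def grase_dc_select(candidates, proxy_plan, k=4, pool_factor=3, dedup_threshold=0.9):
    """Hypothetical sketch of the two-stage exemplar-selection pipeline.

    candidates: list of (problem, plan) pairs from the exemplar bank.
    proxy_plan: a draft plan for the test problem used to estimate AS.
    """
    # Stage 1 (re-sampling): keep the exemplars whose plans are most
    # similar to the proxy plan, i.e. the high-AS pool.
    pool = sorted(candidates, key=lambda c: jaccard(c[1], proxy_plan), reverse=True)
    pool = pool[: pool_factor * k]
    # Stage 2 (curation): greedily skip near-duplicate plans so the final
    # exemplar set balances relevance with diversity.
    selected = []
    for cand in pool:
        if all(jaccard(cand[1], kept[1]) < dedup_threshold for kept in selected):
            selected.append(cand)
        if len(selected) == k:
            break
    return selected
```

The key design point this sketch preserves is that relevance alone is insufficient: stage 2 rejects exemplars whose plans are nearly identical to already-selected ones, which is also how the pipeline ends up needing fewer exemplars overall.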