Symmetry-Aware Transformer Training for Automated Planning

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
In automated planning, decoder-only Transformers (e.g., PlanGPT) exhibit poor generalization from simple to complex problems, primarily due to symmetry in planning representations—equivalent states can be generated via arbitrary variable permutations, causing a combinatorial explosion of equivalent representations and preventing models from recognizing semantic equivalence. To address this, we propose a symmetry-aware training framework: (i) a contrastive learning objective that explicitly enforces invariance under variable permutations; and (ii) joint optimization of input encoding and attention mechanisms to mitigate representation ambiguity induced by symmetry. Our method requires no modification to the inference pipeline, yet significantly improves generalization in both plan generation and heuristic estimation. Evaluated across multiple standard planning domains, it substantially outperforms PlanGPT, empirically validating that explicit symmetry modeling is critical for generalization in neural planning.
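The contrastive objective described above needs positive pairs: two token sequences that denote the same planning state. A minimal sketch (not the paper's implementation; function and token names are illustrative) of producing such a pair by randomly permuting variable names:

```python
import random

def permute_variables(state_tokens, variables, rng=random.Random(0)):
    """Rename variables by a random bijection, yielding a semantically
    equivalent state -- a 'positive' pair for contrastive training."""
    shuffled = variables[:]
    rng.shuffle(shuffled)
    mapping = dict(zip(variables, shuffled))  # bijective renaming
    return [mapping.get(tok, tok) for tok in state_tokens]

# Illustrative tokenized state: on(v1, v2) and clear(v1)
state = ["on", "v1", "v2", "clear", "v1"]
positive = permute_variables(state, ["v1", "v2", "v3"])
```

Predicate tokens (`on`, `clear`) pass through unchanged; only identifiers are renamed, so the anchor and the positive encode the same state.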

📝 Abstract
While transformers excel in many settings, their application in the field of automated planning is limited. Prior work like PlanGPT, a state-of-the-art decoder-only transformer, struggles with extrapolation from easy to hard planning problems. This in turn stems from problem symmetries: planning tasks can be represented with arbitrary variable names that carry no meaning beyond being identifiers. This causes a combinatorial explosion of equivalent representations that pure transformers cannot efficiently learn from. We propose a novel contrastive learning objective to make transformers symmetry-aware and thereby compensate for their lack of inductive bias. Combining this with architectural improvements, we show that transformers can be efficiently trained for either plan-generation or heuristic-prediction. Our results across multiple planning domains demonstrate that our symmetry-aware training effectively and efficiently addresses the limitations of PlanGPT.
Problem

Research questions and friction points this paper is trying to address.

Transformers struggle with automated planning extrapolation
Problem symmetries cause combinatorial representation explosion
Lack of inductive bias in pure transformers
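The combinatorial explosion named above can be made concrete: a state over n interchangeable variables admits up to n! equivalent encodings, one per renaming. A small enumeration (illustrative names, not from the paper):

```python
from itertools import permutations

def equivalent_encodings(state_tokens, variables):
    """All token sequences obtained by permuting variable names;
    each one denotes the same planning state."""
    encodings = set()
    for perm in permutations(variables):
        mapping = dict(zip(variables, perm))
        encodings.add(tuple(mapping.get(t, t) for t in state_tokens))
    return encodings

state = ["on", "v1", "v2", "on", "v2", "v3"]
print(len(equivalent_encodings(state, ["v1", "v2", "v3"])))  # 6 = 3!
```

A model without symmetry awareness must learn all six surface forms separately, and the count grows factorially with the number of variables.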
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrastive learning for symmetry awareness
Architectural improvements for transformers
Plan-generation and heuristic-prediction training
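The contrastive objective can be sketched as an InfoNCE-style loss that pulls a state's embedding toward its permuted copy and away from unrelated states. This is a generic formulation under assumed cosine-similarity inputs, not the paper's exact loss:

```python
import math

def info_nce(sim_pos, sims_neg, tau=0.1):
    """InfoNCE-style loss: -log softmax of the positive logit.
    sim_pos: anchor-positive similarity (permuted copy of the state);
    sims_neg: anchor-negative similarities (unrelated states);
    tau: temperature."""
    logits = [sim_pos / tau] + [s / tau for s in sims_neg]
    m = max(logits)  # stabilize the log-sum-exp
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - sim_pos / tau

loss = info_nce(sim_pos=0.9, sims_neg=[0.1, 0.2, 0.0])
```

Minimizing this loss makes embeddings invariant under variable permutations while keeping distinct states separated, which is the stated goal of the symmetry-aware training.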