🤖 AI Summary
Existing simulation environment generation methods prioritize visual realism while neglecting task-logic diversity, limiting their ability to effectively evaluate the adaptability and planning robustness of embodied agents. To address this gap, this work proposes LogicEnvGen, the first framework to incorporate task execution logic into environment generation. Leveraging large language models, LogicEnvGen generates high-level decision-tree-based behavior plans in a top-down manner, synthesizes logical trajectories, and instantiates physically plausible, logically diverse test environments through heuristic optimization and constraint solving. We further introduce LogicEnvEval, the first benchmark specifically designed to evaluate logical diversity. Experimental results show that LogicEnvGen achieves 1.04–2.61× greater logical diversity than baseline methods and increases agent-failure detection rates by 4.00%–68.00%.
📝 Abstract
Simulated environments play an essential role in embodied AI, functionally analogous to test cases in software engineering. However, existing environment generation methods often emphasize visual realism (e.g., object diversity and layout coherence) while overlooking a crucial aspect: logical diversity from the testing perspective. This limits the comprehensive evaluation of agent adaptability and planning robustness across distinct simulated environments. To bridge this gap, we propose LogicEnvGen, a novel method driven by Large Language Models (LLMs) that adopts a top-down paradigm to generate logically diverse simulated environments as test cases for agents. Given an agent task, LogicEnvGen first analyzes its execution logic to construct decision-tree-structured behavior plans and then synthesizes a set of logical trajectories. Subsequently, it applies a heuristic algorithm to refine the trajectory set, reducing redundant simulation. For each logical trajectory, which represents a potential task situation, LogicEnvGen instantiates a corresponding concrete environment, employing constraint solving to ensure physical plausibility. Furthermore, we introduce LogicEnvEval, a novel benchmark comprising four quantitative metrics for environment evaluation. Experimental results verify the lack of logical diversity in baselines and demonstrate that LogicEnvGen achieves 1.04–2.61× greater diversity, improving agent-fault detection by 4.00%–68.00%.
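To make the pipeline concrete, here is a minimal sketch of the two core ideas the abstract describes: enumerating logical trajectories from a decision-tree behavior plan, and heuristically pruning redundant ones. The task, node labels, and the greedy coverage criterion are all hypothetical illustrations, not the paper's actual algorithm or data structures.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    # A behavior-plan node: a terminal action at a leaf, or a condition
    # whose branches map outcome labels to subtrees.
    label: str
    branches: dict = field(default_factory=dict)  # outcome -> Node

def enumerate_trajectories(node, prefix=()):
    """Yield every root-to-leaf path as a tuple of decision/action steps."""
    if not node.branches:  # leaf: a terminal action
        yield prefix + (node.label,)
        return
    for outcome, child in node.branches.items():
        yield from enumerate_trajectories(
            child, prefix + (f"{node.label}={outcome}",))

def greedy_refine(trajectories):
    """Greedily keep trajectories that cover unseen steps, dropping
    any whose steps are already fully covered (a stand-in for the
    paper's heuristic redundancy reduction)."""
    covered, kept = set(), []
    for traj in sorted(trajectories, key=len, reverse=True):
        steps = set(traj)
        if not steps <= covered:
            kept.append(traj)
            covered |= steps
    return kept

# Hypothetical "fetch a cold drink" task plan.
plan = Node("fridge_door_open?", {
    "yes": Node("grab_drink"),
    "no": Node("handle_reachable?", {
        "yes": Node("open_door_then_grab"),
        "no": Node("clear_obstacle_then_open"),
    }),
})

trajectories = list(enumerate_trajectories(plan))
refined = greedy_refine(trajectories)
# Each refined trajectory would then be instantiated as one concrete
# environment (e.g., door open/closed, obstacle present), with a
# constraint solver checking physical plausibility of object placement.
```

Each root-to-leaf path corresponds to one "task situation" the agent may face; generating one environment per refined trajectory is what yields logically distinct, rather than merely visually distinct, test cases.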