🤖 AI Summary
Existing large language models (LLMs) lack sufficient autonomous error correction for complex, long-horizon household robotic tasks: their static self-reflection mechanisms fail to adapt to dynamic task difficulty and accumulated experience. Method: We propose the Flexible Constructivism Reflection Framework (FCRF), a dual-role Mentor-Actor architecture that enables task-difficulty-aware dynamic reflection strategies. FCRF is the first to jointly integrate historical success cases and failure lessons for synergistic optimization, and it incorporates environment-feedback-driven iterative planning. Contribution/Results: Evaluated both in ALFWorld simulation and in real-world physical settings, FCRF significantly improves task completion rates and reflection adaptability. Experiments demonstrate superior performance over state-of-the-art methods on high-complexity, long-horizon domestic service tasks, effectively enhancing LLMs' continual-learning capability and robustness in hierarchical planning.
📝 Abstract
Autonomous error correction is critical for domestic robots to achieve reliable execution of complex long-horizon tasks. Prior work has explored self-reflection in Large Language Models (LLMs) for correcting task-planning errors; however, existing methods are constrained by inflexible self-reflection mechanisms that limit their effectiveness. Motivated by these limitations and inspired by human cognitive adaptation, we propose the Flexible Constructivism Reflection Framework (FCRF), a novel Mentor-Actor architecture that enables LLMs to perform flexible self-reflection based on task difficulty, while constructively integrating valuable historical experience with failure lessons. We evaluated FCRF on diverse domestic tasks through simulation in ALFWorld and physical deployment in a real-world environment. Experimental results demonstrate that FCRF significantly improves overall performance and self-reflection flexibility in complex long-horizon robotic tasks.
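To make the Mentor-Actor loop described above concrete, the following is a minimal, purely illustrative sketch of the control flow: an actor attempts the task, and on failure a mentor produces a reflection whose depth scales with task difficulty, constructively mixing stored success experience with failure lessons. All class and function names here (`Memory`, `actor_attempt`, `mentor_reflect`, `run_task`) are hypothetical stand-ins, not the paper's actual API; the LLM calls are replaced by trivial placeholder logic.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    successes: list = field(default_factory=list)  # valuable past experience
    failures: list = field(default_factory=list)   # lessons from failed attempts

def actor_attempt(task, hints):
    # Placeholder for the LLM-driven actor: here it "succeeds" once the
    # accumulated guidance matches the task's difficulty level.
    return len(hints) >= task["difficulty"]

def mentor_reflect(task, memory):
    # Difficulty-aware reflection: harder tasks receive deeper guidance,
    # drawn jointly from prior successes and failure lessons.
    depth = min(task["difficulty"], 1 + len(memory.failures))
    return memory.successes[:depth] + memory.failures[:depth]

def run_task(task, memory, max_iters=5):
    # Environment-feedback-driven iterative planning loop.
    hints = []
    for _ in range(max_iters):
        if actor_attempt(task, hints):
            memory.successes.append(f"solved:{task['name']}")
            return True
        memory.failures.append(f"lesson:{task['name']}")
        hints = mentor_reflect(task, memory)
    return False
```

In a real deployment the placeholder functions would wrap LLM prompts and environment feedback (e.g. ALFWorld observations); the sketch only shows how difficulty-scaled reflection and the shared success/failure memory interact across retries.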