🤖 AI Summary
This study investigates whether large language models (LLMs) can autonomously perform multi-stage planning in high-dimensional, physics-constrained aerospace missions, exemplified by the GTOC 12 asteroid mining competition. Leveraging the MLE-Bench framework and the AIDE agent architecture, the work presents the first application of LLMs to a real-world orbital mechanics challenge, enabling autonomous generation and optimization of mission designs. It introduces an "LLM-as-a-Judge" evaluation paradigm, aligned with expert scoring criteria, to assess strategic feasibility. A comparison of models spanning roughly two years, from GPT-4-Turbo to reasoning-enhanced systems such as Gemini 2.5 Pro and o3, shows average strategic viability scores rising from 9.3 to 17.2 out of 26. However, persistent errors in unit handling and boundary-condition management reveal a significant gap between strategic reasoning and engineering implementation in current LLM capabilities.
📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable proficiency in code generation and general reasoning, yet their capacity for autonomous multi-stage planning in high-dimensional, physically constrained environments remains an open research question. This study investigates the limits of current AI agents by evaluating them against the 12th Global Trajectory Optimization Competition (GTOC 12), a complex astrodynamics challenge requiring the design of a large-scale asteroid mining campaign. We adapt the MLE-Bench framework to the domain of orbital mechanics and deploy an AIDE-based agent architecture to autonomously generate and refine mission solutions. To assess performance beyond binary validity, we employ an "LLM-as-a-Judge" methodology, utilizing a rubric developed by domain experts to evaluate strategic viability across five structural categories. A comparative analysis of models, ranging from GPT-4-Turbo to reasoning-enhanced architectures like Gemini 2.5 Pro and o3, reveals a significant trend: the average strategic viability score has nearly doubled in the last two years (rising from 9.3 to 17.2 out of 26). However, we identify a critical capability gap between strategy and execution. While advanced models demonstrate sophisticated conceptual understanding, correctly framing objective functions and mission architectures, they consistently fail at implementation due to physical unit inconsistencies, boundary condition errors, and inefficient debugging loops. We conclude that, while current LLMs often demonstrate sufficient knowledge and intelligence to tackle space science tasks, they remain limited by an implementation barrier, functioning as powerful domain facilitators rather than fully autonomous engineers.