Can LLMs Do Rocket Science? Exploring the Limits of Complex Reasoning with GTOC 12

📅 2026-01-08
🏛️ AIAA SCITECH 2026 Forum
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates whether large language models (LLMs) can autonomously perform multi-stage planning in high-dimensional, physics-constrained aerospace missions, exemplified by the GTOC 12 asteroid mining competition. Leveraging the MLE-Bench framework and the AIDE agent architecture, the work presents the first application of LLMs to a real-world orbital mechanics challenge, enabling autonomous generation and optimization of mission designs. It introduces an "LLM-as-a-Judge" evaluation paradigm, aligned with expert scoring criteria, to assess strategic feasibility. Experimental results show that average scores of state-of-the-art models released over the past two years, including GPT-4-Turbo, Gemini 2.5 Pro, and o3, rose from 9.3 to 17.2 out of 26. However, persistent errors in unit handling and boundary-condition management reveal a significant gap between strategic reasoning and engineering implementation in current LLM capabilities.

📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable proficiency in code generation and general reasoning, yet their capacity for autonomous multi-stage planning in high-dimensional, physically constrained environments remains an open research question. This study investigates the limits of current AI agents by evaluating them against the 12th Global Trajectory Optimization Competition (GTOC 12), a complex astrodynamics challenge requiring the design of a large-scale asteroid mining campaign. We adapt the MLE-Bench framework to the domain of orbital mechanics and deploy an AIDE-based agent architecture to autonomously generate and refine mission solutions. To assess performance beyond binary validity, we employ an "LLM-as-a-Judge" methodology, utilizing a rubric developed by domain experts to evaluate strategic viability across five structural categories. A comparative analysis of models, ranging from GPT-4-Turbo to reasoning-enhanced architectures like Gemini 2.5 Pro and o3, reveals a significant trend: the average strategic viability score has nearly doubled in the last two years (rising from 9.3 to 17.2 out of 26). However, we identify a critical capability gap between strategy and execution. While advanced models demonstrate sophisticated conceptual understanding, correctly framing objective functions and mission architectures, they consistently fail at implementation due to physical unit inconsistencies, boundary condition errors, and inefficient debugging loops. We conclude that, while current LLMs often demonstrate sufficient knowledge and intelligence to tackle space science tasks, they remain limited by an implementation barrier, functioning as powerful domain facilitators rather than fully autonomous engineers.
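The rubric-based "LLM-as-a-Judge" aggregation described in the abstract (five structural categories, 26 points total) can be sketched as follows. The category names and their per-category point maxima below are illustrative placeholders, not the paper's actual rubric:

```python
# Minimal sketch of rubric-based score aggregation for an LLM judge.
# Category names and point splits are hypothetical; only the five-category
# structure and the 26-point total come from the paper's description.

RUBRIC = {                       # category -> maximum awardable points
    "objective_framing": 6,
    "mission_architecture": 6,
    "trajectory_strategy": 6,
    "constraint_handling": 4,
    "resource_budgeting": 4,
}

def aggregate(scores: dict[str, int]) -> int:
    """Sum per-category judge scores, clamping each to its rubric maximum."""
    total = 0
    for category, maximum in RUBRIC.items():
        awarded = scores.get(category, 0)
        total += max(0, min(awarded, maximum))
    return total

judged = {"objective_framing": 5, "mission_architecture": 4,
          "trajectory_strategy": 4, "constraint_handling": 2,
          "resource_budgeting": 2}
print(aggregate(judged), "/", sum(RUBRIC.values()))  # 17 / 26
```

Clamping each category to its maximum keeps a single over-generous judge response from inflating the total beyond the rubric's scale.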
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Autonomous Planning
Astrodynamics
Complex Reasoning
Implementation Gap
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Autonomous Planning
Orbital Mechanics
LLM-as-a-Judge
GTOC
Inaki del Campo
Universidad Politécnica de Madrid (UPM), Spain
Pablo Cuervo
Universidad Politécnica de Madrid (UPM), Spain
Victor Rodriguez-Fernandez
Universidad Politécnica de Madrid
Deep Learning, Machine Learning, Artificial Intelligence, Time Series, Space & AI
Roberto Armellin
The University of Auckland
Astrodynamics, Space Situational Awareness, Trajectory Optimization, Space Surveillance and Tracking
Jack Yarndley
University of Auckland (UoA), New Zealand