🤖 AI Summary
To address the labor-intensive manual operations and insufficient scenario coverage in small Uncrewed Aerial Systems (sUAS) flight testing, this paper proposes AutoSimTest—a novel end-to-end simulation testing framework powered by collaborative multi-role Large Language Model (LLM) agents. The framework enables natural-language-driven generation of diverse tasks and environments, automatic configuration across simulators (Gazebo/JSBSim) and flight controllers (PX4/ArduPilot), executable script compilation, and semantic result analysis, all accessible via a web-based interactive visualization interface. It introduces the first LLM-based multi-agent architecture for sUAS simulation testing, closing the test loop end to end and supporting semantic human–system interaction. Experimental evaluation demonstrates a scene auto-generation success rate exceeding 90%, a 75% reduction in developer manual effort, and a threefold increase in test scenario diversity.
📝 Abstract
Thorough simulation testing is crucial for validating the correct behavior of small Uncrewed Aerial Systems (sUAS) across multiple scenarios, including adverse weather conditions (such as wind and fog), diverse settings (hilly terrain or urban areas), and varying mission profiles (surveillance, tracking). While various sUAS simulation tools exist to support developers, the entire process of creating, executing, and analyzing simulation tests remains a largely manual and cumbersome task. Developers must identify test scenarios, set up the simulation environment, integrate the System under Test (SuT) with simulation tools, formulate mission plans, and collect and analyze results. These labor-intensive tasks limit the ability of developers to conduct exhaustive testing across a wide range of scenarios. To alleviate this problem, in this paper, we propose AutoSimTest, a Large Language Model (LLM)-driven framework in which multiple LLM agents collaborate to support the sUAS simulation testing process. This includes: (1) creating test scenarios that subject the SuT to unique environmental contexts; (2) preparing the simulation environment as per the test scenario; (3) generating diverse sUAS missions for the SuT to execute; and (4) analyzing simulation results and providing an interactive analytics interface. Further, the design of the framework is flexible enough to create and test scenarios for a variety of sUAS use cases, simulation tools, and SuT input requirements. We evaluated our approach by (a) conducting simulation testing of PX4 and ArduPilot flight-controller-based SuTs, (b) analyzing the performance of each agent, and (c) gathering feedback from sUAS developers. Our findings indicate that AutoSimTest significantly improves the efficiency and scope of the sUAS testing process, allowing for more comprehensive and varied scenario evaluations while reducing manual effort.
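The four-step loop described in the abstract (scenario creation, environment setup, mission generation, result analysis) can be pictured as a pipeline of cooperating agents. The following is a minimal sketch under stated assumptions: all function names, the `TestScenario` fields, and the stubbed agent outputs are hypothetical illustrations, not the paper's actual interfaces; real agents would call an LLM and drive a simulator such as Gazebo with a PX4- or ArduPilot-based SuT.

```python
from dataclasses import dataclass, field

# Hypothetical data object passed between agents; the paper does not
# specify its actual message format.
@dataclass
class TestScenario:
    environment: str                     # e.g. "fog over hilly terrain"
    mission: str                         # natural-language mission intent
    sim_config: dict = field(default_factory=dict)

def scenario_agent(requirement: str) -> TestScenario:
    """Stub for the agent that derives a test scenario from a
    natural-language requirement (step 1)."""
    return TestScenario(environment="wind + fog, urban area",
                        mission=f"mission derived from: {requirement}")

def environment_agent(scenario: TestScenario) -> TestScenario:
    """Stub for the agent that prepares the simulation environment
    (step 2), e.g. selecting a world and weather settings."""
    scenario.sim_config = {"simulator": "gazebo", "wind_m_s": 8.0, "fog": True}
    return scenario

def mission_agent(scenario: TestScenario) -> list[str]:
    """Stub for the agent that compiles the mission into executable
    commands for the SuT (step 3)."""
    return ["takeoff", "goto waypoint_1", "orbit target", "land"]

def analysis_agent(flight_log: list[str]) -> str:
    """Stub for the agent that analyzes simulation results (step 4)."""
    return "PASS" if "land" in flight_log else "FAIL: mission incomplete"

def run_test_loop(requirement: str) -> str:
    """One closed iteration of the automated test loop."""
    scenario = environment_agent(scenario_agent(requirement))
    commands = mission_agent(scenario)
    # A real run would execute `commands` in the simulator and collect
    # telemetry; here the command list doubles as the flight log.
    return analysis_agent(commands)

print(run_test_loop("track a moving vehicle in fog"))  # PASS
```

The value of this shape is that each stage is swappable: a different simulator backend or flight controller only changes `environment_agent` and `mission_agent`, while the loop itself stays fixed.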