AI Summary
To address the difficulty non-technical farmers face in operating heterogeneous agricultural robots, and the high learning barrier of traditional programming interfaces, this paper proposes a zero-code, natural language (NL)-driven task planning system powered by large language models (LLMs). The system decomposes user NL instructions into subtasks, semantically maps them to predefined behavioral primitives, and generates coordinated execution sequences for wheeled robots, robotic arms, and vision systems, while supporting multimodal perception interfaces. It represents the first NL interface enabling end-to-end completion of complex field tasks by diverse agricultural robots. Evaluated in real-world farmland deployments, the system achieves over 92% task success rate and reduces average user task specification time by 87%. These results significantly lower the human-robot collaboration barrier and advance the inclusive, grassroots adoption of AI technologies in agriculture.
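The decompose-map-execute pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the primitive names, the `Subtask` type, and the hard-coded decomposition for a single example instruction are all assumptions standing in for the LLM's actual output.

```python
from dataclasses import dataclass, field

# Illustrative behavioral primitives per platform (assumed names, not the
# paper's actual primitive set).
PRIMITIVES = {
    "wheeled": {"navigate_to", "follow_row"},
    "arm": {"pick", "place"},
    "vision": {"detect_fruit", "count_objects"},
}

@dataclass
class Subtask:
    """One step of the plan: a primitive bound to a platform with arguments."""
    platform: str
    primitive: str
    args: dict = field(default_factory=dict)

def plan(instruction: str) -> list[Subtask]:
    """Decompose an NL instruction into an ordered primitive sequence.

    In the real system an LLM performs this decomposition and semantic
    mapping; here one example instruction is hard-coded to show the shape
    of the intermediate description.
    """
    if "count the apples in row 3" in instruction.lower():
        return [
            Subtask("wheeled", "navigate_to", {"row": 3}),
            Subtask("vision", "detect_fruit", {"crop": "apple"}),
            Subtask("vision", "count_objects"),
        ]
    raise ValueError("instruction not covered by this sketch")

def validate(sequence: list[Subtask]) -> bool:
    """Check every subtask maps to a known primitive on its platform."""
    return all(t.primitive in PRIMITIVES.get(t.platform, set())
               for t in sequence)

if __name__ == "__main__":
    seq = plan("Count the apples in row 3")
    assert validate(seq)
    print([f"{t.platform}:{t.primitive}" for t in seq])
```

The validation step reflects a key property of the architecture: because the LLM output is constrained to predefined primitives, each generated plan can be checked against the robot's capability set before execution.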
Abstract
Artificial intelligence is transforming precision agriculture, offering farmers new tools to streamline their daily operations. While these technological advances promise increased efficiency, they often introduce additional complexity and steep learning curves that are particularly challenging for non-technical users who must balance technology adoption with existing workloads. In this paper, we present a natural language (NL) robotic mission planner that enables non-specialists to control heterogeneous robots through a common interface. By leveraging large language models (LLMs) and predefined primitives, our architecture translates human language into intermediate descriptions that can be executed by different robotic platforms. With this system, users can formulate complex agricultural missions without writing any code. We extend our previous system, which was tailored to wheeled robot mission planning, with a new class of experiments involving robotic manipulation and computer vision tasks. Our results demonstrate that the architecture is both general enough to support a diverse set of robots and powerful enough to execute complex mission requests. This work represents a significant step toward making robotic automation in precision agriculture more accessible to non-technical users.