🤖 AI Summary
The absence of efficient and robust reinforcement learning (RL) benchmark environments hinders algorithm development for greenhouse crop production. Method: We introduce the first open-source RL simulation platform tailored for agricultural intelligent control. It features a differentiable C++ core implementing greenhouse dynamics, with accelerated numerical integration and automatic differentiation via CasADi, plus a modular Python interface supporting multi-task configuration, parametric uncertainty modeling, and standardized evaluation. Contributions/Results: (1) Simulation speed is 17× faster than the original GreenLight implementation; (2) It enables, for the first time, end-to-end training of robust climate controllers (PPO/SAC) under parameter perturbations; (3) It provides a unified evaluation framework, filling a critical gap in standardized RL benchmarks for agriculture. The platform significantly lowers the barrier to algorithm validation and advances research and deployment of intelligent greenhouse control systems.
📝 Abstract
This study presents GreenLight-Gym, a new, fast, open-source benchmark environment for developing reinforcement learning (RL) methods in greenhouse crop production control. Built on the state-of-the-art GreenLight model, it features a differentiable C++ implementation leveraging the CasADi framework for efficient numerical integration. GreenLight-Gym improves simulation speed by a factor of 17 over the original GreenLight implementation. A modular Python environment wrapper enables flexible configuration of control tasks and RL-based controllers. This flexibility is demonstrated by learning controllers under parametric uncertainty using two well-known RL algorithms, PPO and SAC. GreenLight-Gym provides a standardized benchmark for advancing RL methodologies and evaluating greenhouse control solutions under diverse conditions. The greenhouse control community is encouraged to use and extend this benchmark to accelerate innovation in greenhouse crop production.
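To make the environment-wrapper idea concrete, the sketch below shows a minimal Gym-style `reset`/`step` loop with a perturbed model parameter resampled at each episode reset, mimicking the parametric-uncertainty training the abstract describes. Everything here is a hypothetical toy: the class name, dynamics, and reward are illustrative assumptions, not the actual GreenLight-Gym API.

```python
import random

class ToyGreenhouseEnv:
    """Hypothetical Gym-style environment (NOT the real GreenLight-Gym API).

    Tracks one state variable, indoor temperature, with a heat-loss
    coefficient perturbed at every reset to emulate parametric uncertainty.
    """

    def __init__(self, outdoor_temp=10.0, target_temp=20.0, horizon=48, seed=0):
        self.outdoor_temp = outdoor_temp
        self.target_temp = target_temp
        self.horizon = horizon
        self.rng = random.Random(seed)

    def reset(self):
        # Resample the heat-loss coefficient within ±20% of its nominal
        # value, so each episode exposes the controller to new dynamics.
        self.heat_loss = 0.1 * (1.0 + self.rng.uniform(-0.2, 0.2))
        self.temp = self.outdoor_temp
        self.t = 0
        return self.temp

    def step(self, heating_action):
        # Simple linear thermal dynamics: heating input minus losses
        # proportional to the indoor/outdoor temperature difference.
        self.temp += heating_action - self.heat_loss * (self.temp - self.outdoor_temp)
        self.t += 1
        reward = -abs(self.temp - self.target_temp)  # penalize setpoint error
        done = self.t >= self.horizon
        return self.temp, reward, done

# Roll out one episode with a naive bang-bang heating policy.
env = ToyGreenhouseEnv()
obs = env.reset()
episode_return, done = 0.0, False
while not done:
    action = 1.0 if obs < env.target_temp else 0.0
    obs, reward, done = env.step(action)
    episode_return += reward
```

An RL algorithm such as PPO or SAC would replace the bang-bang policy above, learning from many episodes whose dynamics differ slightly at each reset; that is the robustness mechanism the benchmark is built to evaluate.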