🤖 AI Summary
This study addresses practical bottlenecks in deploying reinforcement learning (RL) for intelligent greenhouse climate control—namely, low training efficiency, strong dependence on initial policies, and imperfect grower inputs. We propose a human-in-the-loop interactive RL framework tailored to greenhouse operations. Methodologically, we design three novel interactive RL algorithms specifically adapted to horticultural control tasks; uncover, for the first time, the intrinsic tension between growers’ preference feedback and operational constraints; and introduce a neural network robustness enhancement technique to ensure stable learning under limited or noisy human input. In digital twin simulations, policy shaping and control sharing improve net profit by 8.4% and 6.8%, respectively, whereas reward shaping degrades performance by 9.4% due to sensitivity to imperfect input—demonstrating that mechanism selection critically determines efficacy under realistic human feedback conditions.
📝 Abstract
Climate control is crucial for greenhouse production, as it directly affects crop growth and resource use. Reinforcement learning (RL) has received increasing attention in this field but still faces challenges, including limited training efficiency and a high reliance on initial learning conditions. Interactive RL, which combines human (grower) input with the RL agent's learning, offers a potential solution to these challenges. However, interactive RL has not yet been applied to greenhouse climate control and may face challenges related to imperfect inputs. This paper therefore explores the feasibility and performance of applying interactive RL with imperfect inputs to greenhouse climate control by: (1) developing three representative interactive RL algorithms tailored for greenhouse climate control (reward shaping, policy shaping, and control sharing); (2) analyzing how input characteristics often conflict, and how the trade-offs between them make growers' inputs difficult to perfect; (3) proposing a neural network-based approach to enhance the robustness of interactive RL agents under limited input availability; (4) conducting a comprehensive evaluation of the three interactive RL algorithms with imperfect inputs in a simulated greenhouse environment. The results show that interactive RL incorporating imperfect grower inputs can improve the performance of the RL agent. Algorithms that influence action selection, such as policy shaping and control sharing, cope better with imperfect inputs, achieving profit improvements of 8.4% and 6.8%, respectively. In contrast, reward shaping, which manipulates the reward function, is sensitive to imperfect inputs and leads to a 9.4% decrease in profit. This highlights the importance of selecting an appropriate mechanism when incorporating imperfect inputs.
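To make the distinction between the three mechanisms concrete, the sketch below illustrates where each one injects grower input into an RL loop: reward shaping modifies the reward signal, policy shaping modifies the action distribution, and control sharing occasionally overrides the chosen action. This is a minimal illustration under assumed names and parameters (`beta`, `p_human`, and the action labels are hypothetical), not the paper's implementation.

```python
import random

def reward_shaping(env_reward, human_feedback, beta=0.5):
    """Fold (possibly imperfect) grower feedback directly into the reward.

    An erroneous feedback signal permanently distorts the optimization
    target, which is consistent with the reported sensitivity of this
    mechanism to imperfect inputs.
    """
    return env_reward + beta * human_feedback

def policy_shaping(agent_probs, human_prefs):
    """Combine the agent's action distribution with the grower's
    preference distribution (elementwise product, renormalized).

    Imperfect preferences only bias action selection; they do not
    rewrite the reward the agent ultimately learns from.
    """
    combined = [a * h for a, h in zip(agent_probs, human_prefs)]
    total = sum(combined)
    return [c / total for c in combined]

def control_sharing(agent_action, human_action, p_human=0.3, rng=random):
    """With probability p_human, execute the grower's suggested action;
    otherwise keep the agent's own choice."""
    return human_action if rng.random() < p_human else agent_action
```

For example, with agent probabilities `[0.5, 0.5]` over two climate actions (say, ventilate vs. heat) and grower preferences `[0.8, 0.2]`, policy shaping yields `[0.8, 0.2]` after renormalization, steering exploration toward the preferred action without altering the reward function.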