Hybrid Modeling, Sim-to-Real Reinforcement Learning, and Large Language Model Driven Control for Digital Twins

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the joint optimization of accuracy, adaptability, and interpretability in dynamic system modeling and control for digital twins. We propose a Hybrid Analytical Modeling (HAM) framework that integrates first-principles physics with data-driven learning and, as a key novelty, embed large language models (LLMs) into the digital twin closed loop to enable interpretable human–AI collaborative decision-making. We comparatively evaluate model predictive control (MPC), deep reinforcement learning (RL), and LLM-based control on a micro-greenhouse testbed. Results show that HAM achieves the best trade-off among modeling accuracy, generalization capability, and computational efficiency; MPC exhibits superior robustness; RL demonstrates strong sim-to-real transfer adaptability; and the LLM-based controller significantly enhances control policy flexibility and interactive interpretability. This work advances the integration of mechanistic understanding, statistical learning, and linguistic reasoning in cyber-physical system control.
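One common way to realize a hybrid analytical model like HAM is residual learning: a first-principles model predicts the bulk dynamics, and a small data-driven term learns only the physics model's error. The sketch below illustrates this pattern on a lumped greenhouse temperature model; all function names, parameter values, and the constant-bias "learner" are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of a residual-learning hybrid model (HAM-style).
# Everything here (names, constants, the bias learner) is illustrative.

def physics_step(T_in, T_out, heater, dt=60.0, tau=600.0, gain=0.02):
    """First-principles step: Newtonian cooling toward the outdoor
    temperature plus a heater input term (lumped greenhouse model)."""
    return T_in + dt * ((T_out - T_in) / tau + gain * heater)

def fit_residual(data):
    """Fit the simplest data-driven correction: the mean error between
    measured next-step temperature and the physics prediction."""
    errs = [T_next - physics_step(T, T_out, u)
            for (T, T_out, u, T_next) in data]
    return sum(errs) / len(errs)

def ham_step(T_in, T_out, heater, bias):
    """Hybrid prediction = physics prediction + learned correction."""
    return physics_step(T_in, T_out, heater) + bias

# Synthetic "measurements": the true system has an unmodeled
# +0.05 degC/step drift that the physics model misses.
data = [(20.0 + i * 0.1, 10.0, 1.0,
         physics_step(20.0 + i * 0.1, 10.0, 1.0) + 0.05)
        for i in range(50)]

bias = fit_residual(data)                 # learned correction
hybrid = ham_step(20.0, 10.0, 1.0, bias)  # physics + correction
```

In practice the constant bias would be replaced by a regression or neural network over the state and inputs, but the split is the same: physics carries the structure, data fills the gap.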

📝 Abstract
This work investigates the use of digital twins for dynamical system modeling and control, integrating physics-based, data-driven, and hybrid approaches with both traditional and AI-driven controllers. Using a miniature greenhouse as a test platform, four predictive models are developed and compared under interpolation and extrapolation scenarios: Linear, Physics-Based Modeling (PBM), Long Short-Term Memory (LSTM), and Hybrid Analysis and Modeling (HAM). Three control strategies are also implemented to assess trade-offs in precision, adaptability, and implementation effort: Model Predictive Control (MPC), Reinforcement Learning (RL), and Large Language Model (LLM) based control. Results show that, in modeling, HAM provides the most balanced performance across accuracy, generalization, and computational efficiency, while LSTM achieves high precision at greater resource cost. Among controllers, MPC delivers robust and predictable performance, RL demonstrates strong adaptability, and LLM-based controllers offer flexible human–AI interaction when coupled with predictive tools.
Problem

Research questions and friction points this paper is trying to address.

Can hybrid modeling balance accuracy and computational efficiency?
Can reinforcement learning enable adaptable control in digital twins?
Can large language models facilitate human–AI interaction for control?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid modeling balances accuracy and computational efficiency
Reinforcement learning enables adaptable control strategies
Large language models facilitate human–AI interaction for control
Adil Rasheed
Department of Engineering Cybernetics, NTNU, Trondheim, Norway
Oscar Ravik
Department of Engineering Cybernetics, NTNU, Trondheim, Norway
Omer San
Associate Professor, Mechanical and Aerospace Engineering, University of Tennessee
Fluid Dynamics · Numerical Methods · Data Assimilation · Machine Learning · Digital Twin