Text2Model: Generating dynamic chemical reactor models using large language models (LLMs)

📅 2025-03-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the need for automated dynamic modeling of chemical reactors in process engineering. Method: We propose the first LLM fine-tuning framework tailored for dynamic chemical reactor modeling, enabling end-to-end generation of executable Modelica simulation code from natural language descriptions. Built upon the Llama 3.1 8B Instruct architecture, our approach integrates synthetic data fine-tuning, Modelica-specific prompt engineering, and a dual-dimension human evaluation framework assessing both syntactic correctness and semantic fidelity. Contribution/Results: To our knowledge, this is the first effort to adapt LLMs to dynamic model code generation in process industries, establishing a domain-specific code generation paradigm. Experiments show that the fine-tuned model achieves +37% improvement in semantic accuracy and +42% in syntactic correctness over generic baselines. While its zero-shot generalization lags slightly behind GPT-4o, it significantly outperforms general-purpose models in domain adaptation efficiency and modeling precision.

Technology Category

Application Category

📝 Abstract
As large language models have shown remarkable capabilities in conversing via natural language, the question arises as to how LLMs could potentially assist chemical engineers in research and industry with domain-specific tasks. We generate dynamic chemical reactor models in Modelica code format from textual descriptions as user input. We fine-tune Llama 3.1 8B Instruct on synthetically generated Modelica code for different reactor scenarios. We compare the performance of our fine-tuned model to the baseline Llama 3.1 8B Instruct model and GPT4o. We manually assess the models' predictions regarding the syntactic and semantic accuracy of the generated dynamic models. We find that considerable improvements are achieved by the fine-tuned model with respect to both the semantic and the syntactic accuracy of the Modelica models. However, the fine-tuned model lacks a satisfactory ability to generalize to unseen scenarios compared to GPT4o.
Problem

Research questions and friction points this paper is trying to address.

Generating dynamic chemical reactor models from text using LLMs
Fine-tuning Llama 3.1 8B Instruct for Modelica code generation
Assessing syntactic and semantic accuracy of generated reactor models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates Modelica code from text using LLMs
Fine-tunes Llama 3.1 for chemical reactor models
Compares performance with baseline and GPT4o