🤖 AI Summary
To address ambiguity, hallucination, and logical inconsistencies arising from excessive reliance on large language models (LLMs) in modeling and simulation (M&S), this paper proposes a novel middleware paradigm wherein the LLM serves solely as a “semantic translator” rather than a decision-making agent, enabling high-fidelity interoperability across heterogeneous domain-specific tools. Methodologically, we integrate LoRA-based lightweight fine-tuning with M&S-oriented tool selection criteria to preserve tool autonomy while ensuring context-aware LLM assistance. We further design a semantic mapping middleware and structured API to support cross-tool command parsing and code generation across diverse formal modeling frameworks. Experimental results demonstrate a >30% reduction in modeling entry barriers, zero performance bottlenecks in typical M&S pipelines, and a 72% error-rate reduction compared to end-to-end LLM approaches.
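The "LLM as semantic translator, not decision maker" pattern above can be sketched as a thin dispatch layer. This is a minimal illustrative sketch, not the paper's actual middleware: the `ToolCommand` schema, `SemanticMiddleware` class, and `translate` stub are hypothetical names invented here, and the registered "tool" is a toy exponential-decay stub standing in for a real domain-specific simulator.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical structured command: the LLM's only job is to emit this
# schema from natural language; it never executes the simulation itself.
@dataclass
class ToolCommand:
    tool: str                 # target M&S tool (e.g. an ODE or DEVS engine)
    action: str               # operation the tool itself defines
    params: Dict[str, float]  # validated, typed arguments

class SemanticMiddleware:
    """Routes validated commands to registered tools. Decision logic stays
    inside the tools; the LLM only translates requests into ToolCommand."""

    def __init__(self) -> None:
        self._tools: Dict[str, Dict[str, Callable]] = {}

    def register(self, tool: str, action: str, fn: Callable) -> None:
        self._tools.setdefault(tool, {})[action] = fn

    def dispatch(self, cmd: ToolCommand):
        try:
            fn = self._tools[cmd.tool][cmd.action]
        except KeyError:
            raise ValueError(f"unknown tool/action: {cmd.tool}.{cmd.action}")
        return fn(**cmd.params)

# Stub for the LLM translation step; in practice this would be a
# fine-tuned model constrained to emit the ToolCommand schema.
def translate(nl_request: str) -> ToolCommand:
    return ToolCommand(tool="ode_solver", action="simulate",
                       params={"rate": 0.5, "t_end": 2.0})

mw = SemanticMiddleware()
# Toy exponential-decay "simulator"; the domain tool keeps full autonomy.
mw.register("ode_solver", "simulate",
            lambda rate, t_end: 2.718281828459045 ** (-rate * t_end))
result = mw.dispatch(translate("simulate decay at rate 0.5 for 2 time units"))
```

The key design point is that a malformed or hallucinated command fails loudly at the schema/dispatch boundary instead of silently producing a wrong simulation.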
📝 Abstract
Large Language Models (LLMs) offer transformative potential for Modeling & Simulation (M&S) through natural language interfaces that simplify workflows. However, over-reliance on them risks compromising quality due to ambiguities, logical shortcuts, and hallucinations. This paper advocates integrating LLMs as middleware, or translators, between specialized tools to mitigate complexity in M&S tasks. Acting as translators, LLMs can enhance interoperability across multi-formalism, multi-semantics, and multi-paradigm systems. We address two key challenges: identifying appropriate languages and tools for modeling and simulation tasks, and developing efficient software architectures that integrate LLMs without performance bottlenecks. To this end, the paper explores LLM-mediated workflows, emphasizes structured tool integration, and recommends Low-Rank Adaptation-based architectures for efficient task-specific adaptation. This approach ensures LLMs complement rather than replace specialized tools, fostering high-quality, reliable M&S processes.
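The Low-Rank Adaptation (LoRA) recommendation rests on a simple piece of arithmetic: a frozen weight matrix W is augmented by a low-rank delta (alpha / r) * B @ A, so only r * (d_in + d_out) parameters are trained instead of d_in * d_out. The sketch below illustrates that arithmetic in plain Python with toy numbers; it is not the paper's implementation, and the matrix sizes and values are invented for illustration.

```python
def matmul(X, Y):
    """Plain-list matrix multiply (stand-in for a tensor library)."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, A, B, alpha):
    """W: frozen d_out x d_in base weight. B (d_out x r) and A (r x d_in)
    are the only trainable matrices. Returns W + (alpha / r) * B @ A."""
    r = len(A)                     # adapter rank = number of rows of A
    scale = alpha / r
    delta = matmul(B, A)           # low-rank update, d_out x d_in
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Toy 4x4 frozen identity weight with a rank-1 adapter.
d = 4
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]
B = [[0.1], [0.0], [0.0], [0.0]]   # d_out x r, trainable
A = [[1.0, 0.0, 0.0, 0.0]]         # r x d_in, trainable
W_eff = lora_effective_weight(W, A, B, alpha=1.0)

# 8 adapter parameters stand in for 16 full-matrix parameters here;
# at LLM scale (d in the thousands, r around 8-64) the saving is drastic.
trainable = d * 1 + 1 * d
```

This is why LoRA fits the middleware role: each M&S tool or formalism can get its own small adapter over a shared frozen base model, avoiding a full fine-tune per tool.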