Large language model-empowered next-generation computer-aided engineering

📅 2025-09-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Data-free model order reduction (MOR) for large-scale, high-dimensional systems in computer-aided engineering (CAE) faces severe computational bottlenecks and lacks automation support. Method: This paper proposes the first large language model (LLM)-empowered intelligent agent framework for CAE, integrating LLMs with Tensor-decomposition-based A Priori Surrogates (TAPS), symbolic reasoning, and code refactoring to enable end-to-end generation of high-fidelity reduced-order solvers directly from natural-language descriptions of parametric partial differential equations. Contribution/Results: It pioneers the integration of LLMs into data-free MOR, enabling autonomous construction and generalizable synthesis of solvers for nonlinear, high-dimensional parametric problems. Experiments demonstrate substantial reductions in manual modeling effort, improved MOR efficiency, and enhanced cross-scenario adaptability, advancing CAE toward natural-language-driven interaction and autonomous optimization paradigms.
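The summary's space-parameter-time (S-P-T) formulation implies a separated representation of the solution field. The exact TAPS ansatz is not reproduced in this card, but a canonical separated form in this family (as used, e.g., in proper generalized decomposition) reads:

```latex
u(\mathbf{x}, \mathbf{p}, t) \;\approx\; \sum_{m=1}^{M} X_m(\mathbf{x})\, P_m(\mathbf{p})\, T_m(t)
```

Each mode is a product of low-dimensional functions of space, parameters, and time, so the solver works with $M$ small subproblems per dimension instead of one exponentially large space-parameter-time grid; this is what makes ultra-large-scale parametric analysis tractable without training data.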

📝 Abstract
Software development has entered a new era where large language models (LLMs) now serve as general-purpose reasoning engines, enabling natural language interaction and transformative applications across diverse domains. This paradigm is now extending into computer-aided engineering (CAE). Recent applications of LLMs in CAE have successfully automated routine tasks, including CAD model generation and FEM simulations. Nevertheless, these contributions, which primarily serve to reduce manual labor, are often insufficient for addressing the significant computational challenges posed by large-scale, high-dimensional systems. To this end, we first introduce the concept of an LLM-empowered CAE agent, where LLMs act as autonomous collaborators that plan, execute, and adapt CAE workflows. Then, we propose an LLM-empowered CAE agent for data-free model order reduction (MOR), a powerful yet underused approach for ultra-fast large-scale parametric analysis, held back by the intrusive nature and labor-intensive redevelopment of solvers. LLMs can alleviate this barrier by automating derivations, code restructuring, and implementation, making intrusive MOR both practical and broadly accessible. To demonstrate feasibility, we present an LLM-empowered CAE agent for solving ultra-large-scale space-parameter-time (S-P-T) physical problems using Tensor-decomposition-based A Priori Surrogates (TAPS). Our results show that natural language prompts describing parametric partial differential equations (PDEs) can be translated into efficient solver implementations, substantially reducing human effort while producing high-fidelity reduced-order models. Moreover, LLMs can synthesize novel MOR solvers for unseen cases such as nonlinear and high-dimensional parametric problems based on their internal knowledge base. This highlights the potential of LLMs to establish the foundation for next-generation CAE systems.
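The abstract describes an agent that plans, executes, and adapts CAE workflows by turning a natural-language PDE description into solver code. The paper's actual framework is not shown in this card; the sketch below is a minimal, hypothetical illustration of that plan/execute loop, with `toy_llm` standing in for a real LLM call and only a trivial non-emptiness check in place of real solver verification.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class CAEAgent:
    """Hypothetical sketch of an LLM-empowered CAE agent loop.

    llm: any callable mapping a prompt string to generated text/code
    (in practice, an API call to an actual LLM).
    """
    llm: Callable[[str], str]
    history: List[str] = field(default_factory=list)

    def plan(self, pde_description: str) -> str:
        # Ask the LLM to derive a reduced-order solver for the described PDE.
        prompt = f"Derive a reduced-order solver outline for: {pde_description}"
        step = self.llm(prompt)
        self.history.append(step)
        return step

    def execute(self, solver_code: str) -> bool:
        # A real agent would run the generated solver and check residuals
        # against a full-order reference; here we only check non-emptiness.
        ok = bool(solver_code.strip())
        self.history.append(f"executed: ok={ok}")
        return ok


def toy_llm(prompt: str) -> str:
    # Stand-in for a real LLM; returns a canned placeholder solver.
    return "def solve(params): return 0.0  # placeholder reduced-order solver"


agent = CAEAgent(llm=toy_llm)
outline = agent.plan("1D transient heat equation with parametric conductivity")
agent.execute(outline)
```

The "adapt" step of the loop would close the cycle: failed executions feed error messages back into the next `plan` prompt, which is how the paper's agent can iterate toward a working intrusive-MOR implementation.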
Problem

Research questions and friction points this paper is trying to address.

Automating model order reduction for large-scale parametric analysis
Reducing manual effort in solver development via LLM automation
Enabling natural language to solver translation for complex PDEs
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-empowered autonomous CAE workflow agents
Automated intrusive model order reduction implementation
Natural language to solver code translation