🤖 AI Summary
Design Structure Matrix (DSM) sequencing is an NP-hard combinatorial optimization problem aimed at minimizing feedback loops to enhance modularity and process efficiency.
Method: This paper introduces, for the first time, large language models (LLMs) into DSM optimization. We propose a structured prompting framework that combines network topology encoding with domain-specific knowledge injection, enabling iterative reasoning within the LLM optimizer and thereby addressing the semantic-modeling and contextual-understanding limitations of conventional mathematical heuristics.
Results: Evaluated on diverse real-world and synthetic DSM instances, our approach significantly outperforms random and deterministic baselines: convergence speed improves by 37%–62%, average feedback loop count decreases by 28.5%, and performance gains are model-agnostic. These results validate the effectiveness and generalizability of LLM-driven, domain-aware optimization.
📝 Abstract
In complex engineering systems, the interdependencies among components or development activities are often modeled and analyzed using the Design Structure Matrix (DSM). Reorganizing elements within a DSM to minimize feedback loops and enhance modularity or process efficiency constitutes a challenging combinatorial optimization (CO) problem in engineering design and operations. As problem sizes increase and dependency networks become more intricate, traditional optimization methods that rely solely on mathematical heuristics often fail to capture contextual nuances and struggle to deliver effective solutions. In this study, we explore the potential of Large Language Models (LLMs) for solving such CO problems by leveraging their capabilities for advanced reasoning and contextual understanding. We propose a novel LLM-based framework that integrates network topology with contextual domain knowledge for iterative optimization of DSM element sequencing, a common CO problem. Experiments on various DSM cases show that our method consistently achieves faster convergence and superior solution quality compared to both stochastic and deterministic baselines. Notably, we find that incorporating contextual domain knowledge significantly enhances optimization performance regardless of the chosen LLM backbone. These findings highlight the potential of LLMs to solve complex engineering CO problems by combining semantic and mathematical reasoning. This approach paves the way towards a new paradigm in LLM-based engineering design optimization.
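To make the objective concrete, the sequencing problem described above can be sketched in a few lines. This toy example is not from the paper: it assumes the common DSM convention in which entry `(i, j) = 1` means task `i` depends on the output of task `j`, so a dependency on a task scheduled later in the sequence counts as a feedback mark. The brute-force search over permutations is illustrative only; the general problem is NP-hard, which is why heuristic (and here, LLM-driven) optimizers are used.

```python
import itertools

# Toy 4-task DSM: DSM[i][j] = 1 means task i depends on the output of task j.
DSM = [
    [0, 1, 0, 0],  # task 0 depends on task 1
    [0, 0, 0, 0],  # task 1 depends on nothing
    [1, 0, 0, 0],  # task 2 depends on task 0
    [0, 0, 1, 0],  # task 3 depends on task 2
]

def feedback_count(order):
    """Count feedback marks: dependencies that point 'forward' in the sequence."""
    pos = {task: k for k, task in enumerate(order)}
    return sum(
        1
        for i, row in enumerate(DSM)
        for j, dep in enumerate(row)
        if dep and pos[j] > pos[i]  # task i depends on a later-scheduled task j
    )

# The natural order 0-1-2-3 has one feedback mark (task 0 needs task 1's output).
print(feedback_count((0, 1, 2, 3)))  # 1

# Exhaustive search is feasible only for tiny instances.
best = min(itertools.permutations(range(4)), key=feedback_count)
print(feedback_count(best))  # 0, e.g. for order (1, 0, 2, 3)
```

Real DSMs are far too large for exhaustive search, so the ordering must be improved iteratively; in the proposed framework, the LLM plays the role of that iterative optimizer, guided by the topology encoding and domain context.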