Large Language Models as Optimization Controllers: Adaptive Continuation for SIMP Topology Optimization

📅 2026-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a limitation of conventional SIMP-based topology optimization: fixed parameter schedules cannot adapt to the changing state of the optimization process, compromising convergence efficiency and solution quality. To overcome this, the study introduces a large language model (LLM) as an online adaptive controller that adjusts key SIMP parameters in real time based on the current optimization state. The proposed framework incorporates a state-aware parameter modulation mechanism and a grayscale exponent gating strategy, complemented by a meta-optimization loop that automatically tunes the control frequency and decision thresholds. Evaluated on both 2D and 3D benchmark problems, the method consistently achieves the lowest final compliance, improving on the fixed-schedule baseline by 5.7% to 18.1%, while producing fully binary solutions without intermediate gray regions.

📝 Abstract
We present a framework in which a large language model (LLM) acts as an online adaptive controller for SIMP topology optimization, replacing conventional fixed-schedule continuation with real-time, state-conditioned parameter decisions. At every $k$-th iteration, the LLM receives a structured observation (current compliance, grayness index, stagnation counter, checkerboard measure, volume fraction, and budget consumption) and outputs numerical values for the penalization exponent $p$, projection sharpness $\beta$, filter radius $r_{\min}$, and move limit $\delta$ via a Direct Numeric Control interface. A hard grayness gate prevents premature binarization, and a meta-optimization loop uses a second LLM pass to tune the agent's call frequency and gate threshold across runs. We benchmark the agent against four baselines (fixed no-continuation, standard three-field continuation, an expert heuristic, and a schedule-only ablation) on three 2-D problems (cantilever, MBB beam, L-bracket) at $120\!\times\!60$ resolution and two 3-D problems (cantilever, MBB beam) at $40\!\times\!20\!\times\!10$ resolution, all run for 300 iterations. A standardized 40-iteration sharpening tail is applied from the best valid snapshot so that compliance differences reflect only the exploration phase. The LLM agent achieves the lowest final compliance on every benchmark: $-5.7\%$ to $-18.1\%$ relative to the fixed baseline, with all solutions fully binary. The schedule-only ablation underperforms the fixed baseline on two of three problems, confirming that the LLM's real-time intervention, not the schedule geometry, drives the gain. Code and reproduction scripts will be released upon publication.
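The control loop described in the abstract can be sketched in a few lines. This is an illustrative sketch, not the authors' released code: the parameter bounds, the grayness-gate threshold, and the stubbed `stub_llm_policy` heuristic standing in for the actual LLM call are all assumptions made for the example. It shows the structure only: build a state observation, query a policy through a numeric interface, clamp the proposal, and apply a hard grayness gate that blocks projection sharpening ($\beta$ increases) while the design is still too gray.

```python
# Hypothetical bounds for the four SIMP parameters the abstract names
# (penalization p, projection sharpness beta, filter radius r_min, move
# limit delta). These ranges are illustrative assumptions, not the paper's.
BOUNDS = {"p": (1.0, 5.0), "beta": (1.0, 64.0),
          "r_min": (1.2, 3.0), "delta": (0.05, 0.5)}

def stub_llm_policy(obs):
    """Stand-in for the LLM's Direct Numeric Control response: a simple
    heuristic that ramps p and beta with consumed iteration budget."""
    return {"p": 3.0 + 2.0 * obs["budget_used"],
            "beta": 2.0 ** (1 + 6 * obs["budget_used"]),
            "r_min": 2.0, "delta": 0.2}

def gated_update(obs, params, grayness_gate=0.25, llm=stub_llm_policy):
    """One controller step: clamp the policy's proposal to safe bounds,
    then apply a hard grayness gate that forbids any increase in beta
    (premature binarization) while the grayness index is above threshold."""
    proposal = llm(obs)
    new = {}
    for key, (lo, hi) in BOUNDS.items():
        new[key] = min(hi, max(lo, proposal[key]))  # clamp to bounds
    if obs["grayness"] > grayness_gate:
        # Gate active: beta may hold or decrease, never sharpen further.
        new["beta"] = min(new["beta"], params["beta"])
    return new
```

In a full run this function would be called every $k$-th SIMP iteration with an observation assembled from the current compliance, grayness index, stagnation counter, checkerboard measure, volume fraction, and budget consumption, with `stub_llm_policy` replaced by a real LLM query.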
Problem

Research questions and friction points this paper is trying to address.

topology optimization
SIMP
adaptive control
parameter scheduling
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Topology Optimization
Adaptive Control
SIMP
Real-time Parameter Tuning