Optimization Modeling via Semantic Anchored Alignment

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM-based optimization modeling approaches are solver-driven: they rely on single-shot code generation and limited solver error feedback, which hinders detection of logical errors and often yields syntactically valid but semantically incorrect models. Method: We propose SAC-Opt, the first framework featuring semantic anchor alignment, a mechanism that explicitly establishes fine-grained anchors between natural-language descriptions and mathematical semantic structures (e.g., variables, constraints, objectives), enabling iterative, localized corrections without additional training and optimizing for semantic consistency rather than solver output. Contribution/Results: SAC-Opt substantially improves logical fidelity, achieving an average modeling accuracy gain of 7.8% across seven public benchmarks and a 21.9% improvement on ComplexLP, demonstrating high-fidelity translation from natural language to executable optimization code.

📝 Abstract
Large language models (LLMs) have opened new paradigms in optimization modeling by enabling the generation of executable solver code from natural language descriptions. Despite this promise, existing approaches typically remain solver-driven: they rely on single-pass forward generation and apply limited post-hoc fixes based on solver error messages, leaving undetected semantic errors that silently produce syntactically correct but logically flawed models. To address this challenge, we propose SAC-Opt, a backward-guided correction framework that grounds optimization modeling in problem semantics rather than solver feedback. At each step, SAC-Opt aligns the original semantic anchors with those reconstructed from the generated code and selectively corrects only the mismatched components, driving convergence toward a semantically faithful model. This anchor-driven correction enables fine-grained refinement of constraint and objective logic, enhancing both fidelity and robustness without requiring additional training or supervision. Empirical results on seven public datasets demonstrate that SAC-Opt improves average modeling accuracy by 7.8%, with gains of up to 21.9% on the ComplexLP dataset. These findings highlight the importance of semantic-anchored correction in LLM-based optimization workflows to ensure faithful translation from problem intent to solver-executable code.
Problem

Research questions and friction points this paper is trying to address.

Addressing semantic errors in LLM-generated optimization code
Ensuring logical fidelity between natural language and solver models
Correcting mismatched components through semantic anchor alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Backward correction framework grounds modeling in semantics
Aligns original semantic anchors with reconstructed code anchors
Selectively corrects mismatched components for semantic fidelity
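The backward-guided correction loop described above can be sketched in a few lines. This is a minimal toy sketch under stated assumptions: `align_anchors`, `backward_correct`, and the dict-based anchor representation are illustrative placeholders, not SAC-Opt's actual interface, and the generation and reconstruction steps (LLM calls in the paper) are passed in as callables.

```python
def align_anchors(source_anchors, reconstructed_anchors):
    """Compare the anchors extracted from the problem description with those
    reconstructed from the generated code; return the keys that disagree.
    Anchors are modeled here as a dict mapping component names (variables,
    constraints, objective) to their semantic content (illustrative)."""
    return [
        key for key, expected in source_anchors.items()
        if reconstructed_anchors.get(key) != expected
    ]


def backward_correct(problem_anchors, generate, reconstruct, max_rounds=5):
    """Iteratively regenerate only the mismatched components until the
    reconstructed anchors match the problem anchors (or rounds run out).
    `generate` and `reconstruct` stand in for the LLM-backed steps."""
    code = generate(problem_anchors, corrections=None)
    for _ in range(max_rounds):
        mismatches = align_anchors(problem_anchors, reconstruct(code))
        if not mismatches:
            break  # semantically faithful: every anchor is aligned
        code = generate(problem_anchors, corrections=mismatches)
    return code
```

Note that the loop's stopping criterion is semantic consistency (all anchors aligned), not solver success, which is the key distinction the paper draws from solver-driven repair.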
Yansen Zhang
Department of Computer Science, City University of Hong Kong
Qingcan Kang
Huawei Noah’s Ark Lab
Yujie Chen
Huawei’s Supply Chain Management Department
Yufei Wang
Huawei Noah’s Ark Lab
Xiongwei Han
AI&OR Principal Researcher at Noah's Ark Lab, Huawei
Tao Zhong
Huawei Noah’s Ark Lab
Mingxuan Yuan
Huawei Noah’s Ark Lab
Chen Ma
Department of Computer Science, City University of Hong Kong