SOCRATES: Simulation Optimization with Correlated Replicas and Adaptive Trajectory Evaluations

📅 2025-11-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Optimizing complex stochastic systems with prohibitively high sampling costs remains a fundamental challenge. Method: This paper proposes the first large language model (LLM)-integrated, two-stage automated simulation optimization (SO) framework. In Stage I, an LLM parses the system structure, performs causal discovery, and constructs an ensemble of digital twins. In Stage II, a surrogate-assisted, performance-trajectory-driven meta-optimization generates hybrid optimization policies that can evolve dynamically at runtime. Contribution/Results: The framework embeds LLMs into both causal inference and the meta-optimization feedback loop, enabling algorithm-level customization. Experiments demonstrate substantial reductions in sample complexity, along with improved optimization efficiency and robustness, across diverse stochastic system domains.
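The two-stage pipeline described above can be sketched in miniature. This is an illustrative stand-in only, not the paper's implementation: every name below (`build_replicas`, `meta_optimize`, the baseline steppers) is hypothetical, the "replicas" are noisy quadratics standing in for LLM-learned digital twins, and the meta-optimizer is a trivial ranking rule where the paper uses an LLM reasoning over full performance trajectories.

```python
import random

def build_replicas(system_description, n_replicas=5, seed=0):
    """Stage I stand-in: the paper's LLM would parse `system_description`,
    recover a causal skeleton, and fit digital replicas. Here each replica
    is a cheap noisy quadratic with a randomly shifted optimum."""
    rng = random.Random(seed)
    offsets = [rng.uniform(-0.5, 0.5) for _ in range(n_replicas)]
    return [lambda x, b=b: (x - 2.0 - b) ** 2 + rng.gauss(0, 0.01)
            for b in offsets]

def evaluate_trajectory(algorithm, replica, budget=20):
    """Run one baseline SO algorithm on one replica and record its
    best-so-far performance trajectory."""
    best, traj, x = float("inf"), [], 0.0
    for _ in range(budget):
        x = algorithm(x, replica)
        best = min(best, replica(x))
        traj.append(best)
    return traj

def random_search_step(x, f):
    # accept a random perturbation if it (noisily) improves the objective
    cand = x + random.uniform(-1, 1)
    return cand if f(cand) < f(x) else x

def greedy_step(x, f, h=0.1):
    # noisy finite-difference descent step
    grad = (f(x + h) - f(x - h)) / (2 * h)
    return x - 0.1 * grad

def meta_optimize(replicas, baselines, budget=20):
    """Stage II stand-in: rank baselines by mean final trajectory value
    across replicas. The paper's LLM meta-optimizer would instead analyze
    whole trajectories and compose a hybrid schedule."""
    scores = {}
    for name, alg in baselines.items():
        finals = [evaluate_trajectory(alg, r, budget)[-1] for r in replicas]
        scores[name] = sum(finals) / len(finals)
    return min(scores, key=scores.get), scores

random.seed(1)
replicas = build_replicas("single-server queue with Poisson arrivals")
best_name, scores = meta_optimize(
    replicas, {"random_search": random_search_step, "greedy": greedy_step})
print(best_name)
```

The replica ensemble plays the role of the inexpensive testbed: all trajectory evaluations above touch only the surrogates, never the real system.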

📝 Abstract
The field of simulation optimization (SO) encompasses various methods developed to optimize complex, expensive-to-sample stochastic systems. Established methods include, but are not limited to, ranking-and-selection for finite alternatives and surrogate-based methods for continuous domains, with broad applications in engineering and operations management. The recent advent of large language models (LLMs) offers a new paradigm for exploiting system structure and automating the strategic selection and composition of these established SO methods into a tailored optimization procedure. This work introduces SOCRATES (Simulation Optimization with Correlated Replicas and Adaptive Trajectory Evaluations), a novel two-stage procedure that leverages LLMs to automate the design of tailored SO algorithms. The first stage constructs an ensemble of digital replicas of the real system. An LLM is employed to implement causal discovery from a textual description of the system, generating a structural 'skeleton' that guides the sample-efficient learning of the replicas. In the second stage, this replica ensemble is used as an inexpensive testbed to evaluate a set of baseline SO algorithms. An LLM then acts as a meta-optimizer, analyzing the performance trajectories of these algorithms to iteratively revise and compose a final, hybrid optimization schedule. This schedule is designed to be adaptive, with the ability to be updated during the final execution on the real system when the optimization performance deviates from expectations. By integrating LLM-driven reasoning with LLM-assisted trajectory-aware meta-optimization, SOCRATES creates an effective and sample-efficient solution for complex SO problems.
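The abstract's adaptive-execution idea, in which the hybrid schedule is revised on the real system when performance deviates from expectations, can be illustrated with a toy sketch. All names here are hypothetical, not the paper's API: the `revise` callback is a trivial stand-in for the LLM meta-optimizer rewriting the remaining schedule, and the deviation check is a simple tolerance on a scalar trajectory.

```python
def run_adaptive_schedule(schedule, expected, step, x0, tol=0.5, revise=None):
    """Execute schedule phases in order on the 'real' system; if the
    realized trajectory drifts more than `tol` from the replica-predicted
    one, call `revise` to rewrite the remaining phases (the paper would
    instead query the LLM meta-optimizer). Revises at most once here."""
    x, realized, i = x0, [], 0
    while i < len(schedule):
        x = step(schedule[i], x)
        realized.append(x)
        # min() guards indexing if a revision made the schedule longer
        # than the predicted trajectory
        if revise and abs(x - expected[min(i, len(expected) - 1)]) > tol:
            schedule = schedule[:i + 1] + revise(schedule[i + 1:])
            revise = None
        i += 1
    return x, realized

# toy demo: each phase name maps to a fixed decrement toward the optimum
def step(phase, x):
    return x - {"coarse": 1.0, "fine": 0.1}[phase]

final, traj = run_adaptive_schedule(
    ["coarse", "coarse", "fine", "fine"],
    expected=[4.0, 3.0, 2.0, 1.0],   # replica-predicted trajectory
    step=step, x0=5.0, tol=0.5,
    revise=lambda rest: ["coarse"] * len(rest))
```

In the demo, the third phase falls short of the prediction (2.9 realized vs. 2.0 expected), so the remaining "fine" phase is swapped for a "coarse" one, mirroring how a runtime deviation would trigger a schedule revision.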
Problem

Research questions and friction points this paper is trying to address.

Automates tailored simulation optimization using large language models
Generates digital replicas for sample-efficient algorithm evaluation
Creates adaptive hybrid optimization schedules via meta-optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs automate tailored simulation optimization algorithm design
Causal discovery from text builds structural replica skeletons
Meta-optimizer analyzes trajectories to compose hybrid schedules