Sim2Act: Robust Simulation-to-Decision Learning via Adversarial Calibration and Group-Relative Perturbation

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the instability of decision-making policies caused by prediction errors in simulators, which often arise from data noise or bias in critical regions. To mitigate this issue, the authors propose Sim2Act, a novel framework that uniquely integrates an adversarial calibration mechanism with a group-relative perturbation strategy. The former reweights simulation errors associated with critical state-action pairs, while the latter stabilizes policy learning without inducing excessive pessimism. By jointly enhancing the robustness of both the simulator and the learned policy, Sim2Act avoids the inadvertent suppression of high-risk, high-reward actions commonly eliminated by overly conservative regularization. Empirical evaluations across multiple supply chain benchmarks demonstrate that Sim2Act significantly improves simulation robustness and decision stability under both structured and unstructured perturbations.
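The paper itself does not spell out the calibration rule at this level of detail, but the idea of re-weighting simulation errors by their downstream decision impact can be sketched as follows. Everything here is an illustrative assumption: `decision_impact` stands in for some (unspecified) per-sample estimate of how much that sample's error perturbs the action ranking, and the softmax-style adversarial weighting is one plausible instantiation, not the authors' actual mechanism.

```python
import numpy as np

def adversarial_calibration_weights(sim_errors, decision_impact, temperature=1.0):
    """Hypothetical adversarial re-weighting of simulator errors.

    sim_errors:      per-sample simulator prediction errors.
    decision_impact: per-sample proxy for how strongly each sample's
                     error affects the downstream action ranking
                     (assumed given; the paper's estimator is unknown).
    Returns normalized weights that emphasize decision-critical samples.
    """
    logits = decision_impact * np.abs(sim_errors) / temperature
    w = np.exp(logits - logits.max())  # numerically stable softmax
    return w / w.sum()

def calibrated_loss(sim_errors, decision_impact):
    """Weighted squared-error loss: errors in decision-critical
    state-action pairs contribute more to simulator training."""
    w = adversarial_calibration_weights(sim_errors, decision_impact)
    return float(np.sum(w * np.asarray(sim_errors) ** 2))
```

Under this sketch, two samples with equal raw error receive different weight if one sits in a decision-critical region, which is the alignment of "surrogate fidelity with downstream decision impact" the summary describes.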

📝 Abstract
Simulation-to-decision learning enables safe policy training in digital environments without risking real-world deployment, and has become essential in mission-critical domains such as supply chains and industrial systems. However, simulators learned from noisy or biased real-world data often exhibit prediction errors in decision-critical regions, leading to unstable action ranking and unreliable policies. Existing approaches either focus on improving average simulation fidelity or adopt conservative regularization, which may cause policy collapse by discarding high-risk, high-reward actions. We propose Sim2Act, a robust simulation-to-decision framework that addresses both simulator and policy robustness. First, we introduce an adversarial calibration mechanism that re-weights simulation errors in decision-critical state-action pairs to align surrogate fidelity with downstream decision impact. Second, we develop a group-relative perturbation strategy that stabilizes policy learning under simulator uncertainty without enforcing overly pessimistic constraints. Extensive experiments on multiple supply chain benchmarks demonstrate improved simulation robustness and more stable decision performance under structured and unstructured perturbations.
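The abstract does not define the group-relative perturbation strategy precisely, but one way to read it is: evaluate each candidate action under a group of perturbed simulator rollouts, then score actions relative to the group (mean-centered, variance-normalized) rather than by their worst-case return, so that no single pessimistic perturbation dominates. The sketch below assumes a hypothetical `simulate(action, noise)` interface returning a scalar return; it is a plausible reading, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def group_relative_scores(candidate_actions, simulate, n_perturb=8, sigma=0.1):
    """Score actions by group-relative return under simulator perturbations.

    For each action, average the return over `n_perturb` perturbed
    rollouts (Gaussian simulator noise with scale `sigma`, an assumed
    perturbation model), then normalize returns across the action group.
    Averaging over perturbations, instead of taking the minimum, avoids
    the excessive pessimism that worst-case scoring would introduce.
    """
    returns = np.array([
        np.mean([simulate(a, rng.normal(0.0, sigma)) for _ in range(n_perturb)])
        for a in candidate_actions
    ])
    # Group-relative normalization: each action is judged against its peers.
    return (returns - returns.mean()) / (returns.std() + 1e-8)
```

Because scores are relative within the group, a uniformly pessimistic simulator shift cancels out, while genuinely better actions, including high-risk, high-reward ones, keep their relative advantage.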
Problem

Research questions and friction points this paper is trying to address.

simulation-to-decision learning
simulator robustness
policy reliability
decision-critical regions
prediction errors
Innovation

Methods, ideas, or system contributions that make the work stand out.

adversarial calibration
group-relative perturbation
simulation-to-decision learning
robust policy learning
decision-critical simulation