SciDesignBench: Benchmarking and Improving Language Models for Scientific Inverse Design

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses inverse design in scientific and engineering domains, where configurations meeting target performance criteria must be found by efficiently searching combinatorially explosive design spaces. The authors introduce SciDesignBench, the first cross-disciplinary, simulation-driven benchmark, comprising 520 simulation tasks across 14 domains, to systematically evaluate large language models (LLMs) under five experimental settings. They also propose Reinforcement Learning from Simulation Feedback (RLSF), a training paradigm that internalizes costly test-time optimization into model weights via simulation-based rewards. Experimentally, the best zero-shot model reaches only a 29.0% single-trial success rate, while RLSF fine-tuning of an 8B-parameter LLM improves single-turn success by 8–17 percentage points across three domains, validating the efficacy of the approach.
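The core RLSF idea, turning a simulator's pass/fail judgment into a training reward, can be sketched as below. This is a minimal illustration under stated assumptions: the simulator, threshold, and function names are hypothetical stand-ins, and the paper's actual simulators and RL algorithm are not reproduced here.

```python
# Hedged sketch of a simulator-feedback reward (hypothetical names throughout;
# the paper's real domain simulators and RL recipe are not specified here).

def toy_simulator(design: list[float]) -> float:
    """Stand-in for an expensive domain simulator: scores a candidate design
    by (negative) squared distance to an assumed target configuration."""
    target = [0.5, 0.5, 0.5]
    return -sum((d - t) ** 2 for d, t in zip(design, target))

def rlsf_reward(design: list[float], threshold: float = -0.01) -> float:
    """Binary reward: 1.0 if the simulated design meets the spec, else 0.0."""
    return 1.0 if toy_simulator(design) >= threshold else 0.0
```

A policy-gradient update (e.g. PPO-style fine-tuning) would then reinforce model completions whose parsed designs earn reward 1.0, which is one way costly test-time search can be amortized into model weights.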

📝 Abstract
Many of the most important problems in science and engineering are inverse problems: given a desired outcome, find a design that achieves it. Evaluating whether a candidate meets the spec is often routine; a binding energy can be computed, a reactor yield simulated, a pharmacokinetic profile predicted. But searching a combinatorial design space for inputs that satisfy those targets is fundamentally harder. We introduce SciDesignBench, a benchmark of 520 simulator-grounded tasks across 14 scientific domains and five settings spanning single-shot design, short-horizon feedback, long-horizon refinement, and seed-design optimization. On the 10-domain shared-core subset, the best zero-shot model reaches only 29.0% success despite substantially higher parse rates. Simulator feedback helps, but the leaderboard changes with horizon: Sonnet 4.5 is strongest in one-turn de novo design, whereas Opus 4.6 is strongest after 20 turns of simulator-grounded refinement. Providing a starting seed design reshuffles the leaderboard again, demonstrating that constrained modification requires a fundamentally different capability from unconstrained de novo generation. We then introduce RLSF, a simulator-feedback training recipe. An RLSF-tuned 8B model raises single-turn success rates by 8-17 percentage points across three domains. Together, these results position simulator-grounded inverse design as both a benchmark for scientific reasoning and a practical substrate for amortizing expensive test-time compute into model weights.
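As a concrete reading of the headline numbers, the gap between parse rate and single-trial success can be made explicit: a design counts as a success only if it both parses into a valid candidate and passes the simulator check. The sketch below assumes one trial per task; the function name and result encoding are illustrative, not from the paper.

```python
def score_tasks(results: list[tuple[bool, bool]]) -> tuple[float, float]:
    """results[i] = (parsed_ok, met_spec) for a single trial on task i.

    Success requires both a parseable design and a passing simulation,
    so success rate can lag parse rate by a wide margin (e.g. 29.0%
    success despite substantially higher parse rates).
    """
    n = len(results)
    parse_rate = sum(1 for parsed, _ in results if parsed) / n
    success_rate = sum(1 for parsed, met in results if parsed and met) / n
    return parse_rate, success_rate
```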
Problem

Research questions and friction points this paper addresses.

inverse design
scientific reasoning
combinatorial design space
simulator-grounded tasks
design optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

inverse design
simulator-grounded benchmark
RLSF
scientific reasoning
language models