🤖 AI Summary
Complex multi-agent reinforcement learning (MARL) suffers from a persistent performance ceiling: models converge during training yet fail to improve under zero-shot inference. To address this, we propose a budget-controllable inference-time optimization paradigm. We provide the first systematic demonstration that inference-time policy refinement is a critical, previously underexploited dimension for breaking RL performance bottlenecks. We design a lightweight, modular inference framework that integrates Monte Carlo tree search, beam search, and policy reweighting, supports distributed rollouts and adaptive budget allocation, and adds only a few seconds of per-inference latency. Evaluated uniformly across 17 challenging MARL benchmarks, our method achieves an average +45% performance gain (up to +126%). Validated across over 60,000 experiments, it shows strong compute-scaling properties, making this, to date, the largest and most empirically rigorous study of inference-time strategies in RL.
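Of the techniques named above, policy reweighting is the simplest to illustrate. A minimal sketch, assuming nothing about the paper's actual implementation: sharpen the trained policy's categorical action distribution with an inverse-temperature exponent before sampling at inference time (the function name `reweight` and parameter `beta` are illustrative, not from the paper).

```python
def reweight(probs, beta=2.0):
    """Sharpen a categorical action distribution: p_i proportional to p_i**beta.

    beta > 1 concentrates mass on high-probability actions (greedier
    inference); beta = 1 recovers the trained policy unchanged.
    """
    weights = [p ** beta for p in probs]
    total = sum(weights)
    return [w / total for w in weights]

# Example: a two-action policy becomes greedier after reweighting.
sharpened = reweight([0.7, 0.3], beta=2.0)  # first action's mass grows above 0.7
```

In a multi-agent setting this would be applied per agent before each action is drawn, trading exploration for exploitation during the extra inference attempts.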
📝 Abstract
Reinforcement learning (RL) systems have countless applications, from energy-grid management to protein design. However, such real-world scenarios are often extremely difficult, combinatorial in nature, and require complex coordination between multiple agents. This level of complexity can cause even state-of-the-art RL systems, trained until convergence, to hit a performance ceiling that they are unable to break with zero-shot inference. Meanwhile, many digital or simulation-based applications allow for an inference phase that utilises a fixed time and compute budget to explore multiple attempts before outputting a final solution. In this work, we show that such an inference phase employed at execution time, and the choice of a corresponding inference strategy, are key to breaking the performance ceiling observed in complex multi-agent RL problems. Our main result is striking: we obtain up to a 126% and, on average, a 45% improvement over the previous state-of-the-art across 17 tasks, using only a couple of seconds of extra wall-clock time during execution. We also demonstrate promising compute-scaling properties, supported by over 60,000 experiments, making this the largest study of inference strategies for complex RL to date. Our experimental data and code are available at https://sites.google.com/view/inf-marl.
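The budgeted inference phase described above can be sketched in its simplest form: repeatedly roll out a stochastic policy until a wall-clock budget is exhausted, then return the best attempt found. This is a minimal best-of-N illustration under assumed names (`budgeted_inference`, `toy_step`, `noisy_policy` are hypothetical), not the paper's actual framework, which additionally uses tree search, beam search, and distributed rollouts.

```python
import random
import time

def rollout(policy, env_step, horizon=20, seed=None):
    """Run one stochastic episode; return (action sequence, total reward)."""
    rng = random.Random(seed)
    state, total, actions = 0, 0.0, []
    for _ in range(horizon):
        a = policy(state, rng)
        state, r = env_step(state, a)
        actions.append(a)
        total += r
    return actions, total

def budgeted_inference(policy, env_step, budget_s=0.5, max_attempts=64):
    """Sample rollouts until the time budget is spent; keep the best attempt."""
    deadline = time.monotonic() + budget_s
    best_actions, best_return = None, float("-inf")
    for attempt in range(max_attempts):
        if time.monotonic() >= deadline:
            break
        actions, ret = rollout(policy, env_step, seed=attempt)
        if ret > best_return:
            best_actions, best_return = actions, ret
    return best_actions, best_return

# Toy single-agent stand-in: reward 1 when the action matches state parity.
def toy_step(state, action):
    return state + 1, 1.0 if action == state % 2 else 0.0

def noisy_policy(state, rng):
    # Picks the correct action 70% of the time.
    return state % 2 if rng.random() < 0.7 else 1 - state % 2
```

Because the returned solution is the maximum over attempts, its quality can only improve as the budget grows, which is the compute-scaling behaviour the abstract refers to.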