🤖 AI Summary
To address the challenge of jointly optimizing passenger waiting time and system efficiency under high supply-demand uncertainty in ride-hailing platforms, this paper formulates adaptive delayed matching as a regime-aware Markov decision process (MDP) that integrates spatiotemporal dynamics with traffic physics. It proposes a self-attention-driven sparse Mixture-of-Experts (MoE) encoder whose experts specialize automatically, coupled with a physics-constrained congestion surrogate model and an adaptive reward mechanism. Evaluated on real-world Uber trajectory data from San Francisco, the method achieves over 13% higher total reward than strong baselines, reduces matching and pickup delays by 10% and 15%, respectively, and reaches state-of-the-art performance with only 12M parameters, while showing strong cross-scenario robustness and training stability.
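The paper's code is not shown here; the PyTorch sketch below only illustrates the general pattern the summary describes: a self-attention block whose feed-forward sublayer is a sparse top-k MoE, so each token activates only a few experts. All class names, dimensions, expert counts, and routing details are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): self-attention + sparse top-k MoE.
# Dimensions, expert count, and routing scheme are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Top-k sparse MoE feed-forward layer; only k experts run per token."""
    def __init__(self, d_model=128, d_hidden=256, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, n_experts)  # learned router
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                               # x: (tokens, d_model)
        weights, idx = self.gate(x).topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)            # mix the chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                   # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, k:k+1] * expert(x[mask])
        return out

class RegimeAwareBlock(nn.Module):
    """Self-attention over space-time tokens followed by the sparse MoE."""
    def __init__(self, d_model=128, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.moe = SparseMoE(d_model)
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, x):                               # x: (batch, tokens, d_model)
        h = self.ln1(x)
        x = x + self.attn(h, h, h)[0]                   # attention over zone-time tokens
        b, t, d = x.shape
        return x + self.moe(self.ln2(x).reshape(b * t, d)).reshape(b, t, d)

# Example: encode 64 zone-time tokens for a batch of 4 states.
enc = RegimeAwareBlock()
print(enc(torch.randn(4, 64, 128)).shape)               # torch.Size([4, 64, 128])
```

Routing each token to only `top_k` of `n_experts` keeps per-token compute close to a single small MLP, which is how sparse MoE layers grow representational capacity without growing cost, consistent with the small 12M-parameter budget reported above.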
📝 Abstract
Ride-hailing platforms face the challenge of balancing passenger waiting times with overall system efficiency under highly uncertain supply-demand conditions. Adaptive delayed matching trades matching delay against pickup delay: the platform decides whether to assign a driver to each request immediately or to briefly batch requests so that a closer driver can be found. Since outcomes accumulate over long horizons under stochastic dynamics, reinforcement learning (RL) is a natural framework. However, existing approaches often oversimplify traffic dynamics or rely on shallow encoders that miss complex spatiotemporal patterns.
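To make the trade-off concrete, here is a toy, self-contained simulation (not the paper's model): holding a request accrues matching delay but occasionally surfaces a closer driver, shrinking pickup delay. All numbers and the two example policies are arbitrary assumptions.

```python
import random

# Toy illustration of the matching-vs-pickup trade-off; all values are arbitrary.
def simulate(policy, horizon=10, seed=0):
    rng = random.Random(seed)
    nearest = rng.uniform(2.0, 10.0)   # pickup time of current best driver (min)
    matching_delay = 0.0
    for _ in range(horizon):
        if policy(matching_delay, nearest):               # True -> dispatch now
            break
        matching_delay += 0.5                             # holding costs matching delay...
        nearest = min(nearest, rng.uniform(2.0, 10.0))    # ...but may find a closer driver
    return matching_delay, nearest

greedy = lambda m, n: True                  # always dispatch immediately
patient = lambda m, n: n < 4.0 or m >= 2.0  # hold until a close driver or a delay cap

print("greedy:", simulate(greedy), "patient:", simulate(patient))
```

A good policy is state-dependent: when nearby supply is dense, waiting buys little; when it is sparse, a short hold can cut the pickup leg substantially, which is the decision the RL agent learns.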
We introduce the Regime-Aware Spatio-Temporal Mixture-of-Experts (RAST-MoE), which formalizes adaptive delayed matching as a regime-aware MDP equipped with a self-attention MoE encoder. Unlike a monolithic network, the MoE encoder lets its experts specialize automatically, improving representational capacity while maintaining computational efficiency. A physics-informed congestion surrogate preserves realistic density-speed feedback, enabling millions of efficient simulation rollouts, while an adaptive reward scheme guards against pathological strategies.
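The abstract does not specify the surrogate's functional form. A standard relation with exactly the density-speed feedback it mentions is the Greenshields fundamental diagram, sketched below under that assumption; the parameters and function names are illustrative, not taken from the paper.

```python
import numpy as np

# Hedged sketch: one standard density-speed relation (Greenshields),
# v(k) = v_free * (1 - k / k_jam). Parameters below are illustrative.
def greenshields_speed(density, v_free=13.0, k_jam=0.15):
    """Speed (m/s) as a function of vehicle density (veh/m per lane)."""
    k = np.clip(density, 0.0, k_jam)
    return v_free * (1.0 - k / k_jam)

def travel_time(length_m, density):
    """Link traversal time under the surrogate; more vehicles -> slower link."""
    v = np.maximum(greenshields_speed(density), 0.1)  # floor avoids div-by-zero at jam
    return length_m / v

# Feedback loop the abstract alludes to: dispatching adds vehicles to a zone,
# raising its density and slowing subsequent pickups there.
print(travel_time(1000.0, 0.02), travel_time(1000.0, 0.12))
```

Because such a closed-form surrogate is cheap to evaluate, it can replace a full microscopic traffic simulator inside the RL loop, which is what makes millions of training rollouts feasible.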
With only 12M parameters, our framework outperforms strong baselines. On real-world Uber trajectory data from San Francisco, it improves total reward by over 13% and reduces average matching and pickup delays by 10% and 15%, respectively. It also remains robust on unseen demand regimes and trains stably. These findings highlight the potential of MoE-enhanced RL for large-scale decision-making under complex spatiotemporal dynamics.