🤖 AI Summary
Dynamic adjustment of driver acceptance rates for discounted ride-hailing orders must balance demand expansion, matching efficiency, and platform profitability—challenged by data scarcity and opaque matching mechanisms. Method: We propose pi-DDPG, an online reinforcement learning framework operating in continuous action space. It incorporates a novel policy-refinement module to accelerate cold-start convergence, employs ConvLSTM to jointly model spatiotemporal supply-demand dynamics, and introduces a tailored prioritized experience replay mechanism. Contribution/Results: Evaluated on realistic simulation driven by real-world data, pi-DDPG significantly improves training stability and convergence speed: initial loss decreases by 37.2%, and order matching success rate increases by 12.8%—outperforming baseline DDPG and PPO. The framework provides a deployable decision-support solution for dynamic pricing and fleet dispatch in ride-hailing platforms.
📝 Abstract
The rapid expansion of platform integration has emerged as an effective solution to mitigate market fragmentation by consolidating multiple ride-hailing platforms into a single application. To address heterogeneous passenger preferences, third-party integrators provide Discount Express service delivered by express drivers at lower trip fares. For the individual platform, encouraging broader participation of drivers in Discount Express services has the potential to expand the accessible demand pool and improve matching efficiency, but often at the cost of reduced profit margins. This study aims to dynamically manage drivers' acceptance of Discount Express from the perspective of individual platforms. The lack of historical data under the new business model necessitates online learning. However, early-stage exploration through trial and error can be costly in practice, highlighting the need for reliable early-stage performance in real-world deployment. To address these challenges, this study formulates the decision regarding the proportion of drivers' acceptance behavior as a continuous control task. In response to the high stochasticity, the opaque matching mechanisms employed by third-party integrator, and the limited availability of historical data, we propose a policy-improved deep deterministic policy gradient (pi-DDPG) framework. The proposed framework incorporates a refiner module to boost policy performance during the early training phase, leverages a convolutional long short-term memory network to effectively capture complex spatiotemporal patterns, and adopts a prioritized experience replay mechanism to enhance learning efficiency. A simulator based on a real-world dataset is developed to validate the effectiveness of the proposed pi-DDPG. Numerical experiments demonstrate that pi-DDPG achieves superior learning efficiency and significantly reduces early-stage training losses.